"turns out that LLM summaries are actually useful"—false.
-
"turns out that LLM summaries are actually useful"—false. LLMs have no ability to determine interesting content, which is a fundamental assertion of the post. this is the exact assertion upon which "AI detectors" rest, which regularly create false positives because they use tech that fundamentally makes shit up. this is a faux technical argument and this user works for google so they really need to try harder not to advertise their utility in any way shape or form https://weatherishappening.network/@wordshaper/113356903751134576
-
"If an LLM summary of your work is accurate that indicates what you wrote doesn't really have much interesting information in it and maybe you should try harder."
in addition to what you stated, this is also... a really weird, non-objective measure. not only is there no correlation between text and meaning in an LLM, there is absolutely no correlation between text and worth. what the fuck.
-
Asta [AMP] replied to Asta [AMP], last edited by [email protected]
@[email protected] Just because something you've said might be statistically plausible doesn't mean it isn't worth saying! What the hell.
-
@aud it's posed as being nominally anti-management, though, so it's ok to make arguments about how the machine measures people's worth as long as you have a convenient enemy, right? i love a convenient enemy
-
@[email protected] nothing quite like getting to "agree" with one of the criticisms of the AI hype cycle while continuing to feed into it! my bread is buttered on both sides and they're both delicious.
-
If an LLM spits out a particular block of text, it means it has seen variations of that block of text before, and probably quite often. That doesn't mean it isn't true, but if true, it's almost certainly unoriginal.
-
@[email protected] @[email protected] the first part is literally what I said; the second part is entirely irrelevant. “Novel” or “originality” and the value it holds is decided only by the conversation itself and isn’t something you can quantify in general.
For instance: I’m repeating myself because my point was ignored. That doesn’t make it less important. -
@[email protected] @[email protected] you can’t quantify relevancy or importance with “novelty” or “originality” at any scale in this context, much less a global one.
-
Asta [AMP] replied to BenAveling, last edited by [email protected]
@[email protected] @[email protected]
"but if true, it's almost certainly unoriginal."
but not only is this irrelevant, it isn't even necessarily true. "Original" is defined as much by form as by context.
I think this kinda hits the fundamental problem with LLMs in a huge way, which is their application: they are being forced into places they shouldn't be. 2 + 2 = 4 isn't "novel", but it is correct, and it is the right answer when that problem comes up. "Originality" and "novelty" are concepts that seem important as a rule only when you need to make the stock numbers go up. But a phrase or concept that seems insipid in one place is precisely the right building block, or the key thing to expand on, in another. So it's not even a "measure" of originality.
-
@BenAveling @aud @hipsterelectron "originality" defined as "has this block of text or close to it ever been uttered by humans" is not really a useful measure of anything though. The exact same statement in 2 different contexts can have wildly divergent meanings. (Ex: the first person to say "good luck" sarcastically was imo making an original statement even though many presumably had said the words prior.)
Determining non-trivial originality of text requires more inputs than the text itself.
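
For what it's worth, the "statistically plausible" signal the thread keeps circling back to is easy to make concrete. The sketch below is my own illustration rather than anything posted above: it assumes the Hugging Face transformers library and the public gpt2 checkpoint, and it scores a string by its average per-token negative log-likelihood, the text-only quantity that perplexity-based "AI detectors" lean on. Because the score is computed from the string alone, it cannot see the context that, as argued above, actually determines originality or worth.

```python
# Minimal sketch, assuming the Hugging Face `transformers` library and the
# public "gpt2" checkpoint (illustrative choices, not anything from the thread):
# score how "statistically plausible" a string is to a language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def plausibility_score(text: str) -> float:
    """Average negative log-likelihood per token; lower = more 'expected'."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean cross-entropy
        # over the sequence, i.e. how surprising the text is to the model.
        loss = model(ids, labels=ids).loss
    return loss.item()

# The same words get the same score no matter where they are used:
# the model sees only the string, not the conversation around it.
print(plausibility_score("good luck"))   # sincere, sarcastic, or quoted: identical score
print(plausibility_score("2 + 2 = 4"))   # utterly unoriginal, still the right answer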