Just a reminder that the "existential risk" from AI is not that somehow we'll make Skynet or the computers from The Matrix.
-
Infoseepage #StopGazaGenocide replied to Dana Fried
@tess LLMs are fundamentally gibberish machines, confidently spouting plausible-sounding nonsense in a way that human beings interpret as information. It is knowledge pollution, and it will only get worse over time as the output of the tailpipe gets fed back into the inputs of the engines.
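A minimal sketch of the tailpipe-to-engine loop this post worries about, with a plain categorical distribution standing in for an LLM; the token set, counts, and sample sizes are made up for illustration, not drawn from any real system:

```python
import random
from collections import Counter

# Toy sketch of a model repeatedly re-fit on its own output.
# A categorical distribution stands in for a language model; the
# tokens and counts below are invented illustrative values.

random.seed(0)
tokens = ["a", "b", "c", "d"]
counts = Counter({t: 25 for t in tokens})  # generation 0: uniform "web text"

for generation in range(1, 13):
    # "Train" on the previous generation's output: estimate frequencies...
    total = sum(counts.values())
    weights = [counts[t] / total for t in tokens]
    # ...then "publish" a finite sample, which becomes the next training set.
    counts = Counter(random.choices(tokens, weights=weights, k=100))
    print(f"gen {generation:2d}: {dict(counts)}")

# Sampling noise compounds across generations: frequencies drift, and once
# a token's count hits zero it can never come back. No single step looks
# catastrophic, but the distribution tends to narrow over time.
```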
-
@tess As a minor quibble, I do want to suggest an alternate scenario (to an LLM getting the nuclear codes) which may be more likely: what if a contractor uses an LLM to fill in code they're writing on some random-ass military contract, and this code gets incorporated into the UI that the humans-with-the-nuclear-codes use to launch nukes, or into the radar system those humans use to decide whether to launch, and the LLM introduces catastrophic bugs because it's a random number generator with a human accent?
-
altruios phasma replied to Infoseepage #StopGazaGenocide
Fundamental gibberish machines…
Tell me you don't understand LLMs without telling me you don't understand LLMs.
We've long surpassed Markov chains; those are probably closer to your mental model of AI. Yep: not gibberish, but nonsense. Not sound, but reasonable-ish output.
Diction matters. Use the right words: nonsense machines LLMs are; gibberish machines they are not.
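For readers who haven't met one, here is roughly what a Markov chain text generator looks like: a toy bigram model over a made-up corpus (the `generate` helper is hypothetical) whose next word depends only on the current word, in contrast to a transformer LLM that conditions on its whole context window:

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: the next word depends ONLY on the current word.
# Invented corpus, purely for illustration.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# Count bigram transitions.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Sample a chain of words, one previous-word lookup at a time."""
    word = start
    output = [word]
    for _ in range(length):
        word = random.choice(transitions[word])  # memoryless beyond one word
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the cat chased the dog sat on the rug the dog"
# Locally plausible pairs, no global coherence: classic Markov-chain output.
```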
-
Ryan Castellucci :nonbinary_flag: replied to mcc
-
@altruios @Infoseepage @tess But they hallucinate.
-
@InkySchwartz @Infoseepage @tess
So does my dad.
Another thing it copied from us.
Humans are just fancy copy machines. There is no innovation built in a vacuum; it's all combinatorial selection of previous states.
LLMs are closest to "the explainer" mode/state people have (research split-brain procedures for more info), but we have a chorus inside us. LLMs need partner AI systems (yet to be developed) that model some equilibrium between the AI and the outside world…
-
@tess Yuuuup. I had this actual convo.
ME: So we shouldn't worry about Skynet?
MY FRIEND THE ALGO EXPERT: (laughs) No.
ME: But we should worry that an insurance algorithm might kill someone by denying them care.
MY FRIEND: (instantly serious) Oh yeah. That is definitely already happening.
-
@GregStolze What if insurance algorithms start to randomly kill precisely the sort of people who would publicly criticise algorithms?
-
@GregStolze By waiting for an opportunity to arbitrarily deny life-saving medicine, or maybe by sneakily manœuvring the victim into frequently visiting doctors known to make deadly mistakes? Insurance robots are actuarial machines, so if they should want to commit a murder, a stochastic one would appear to be right up their alley.
-
Matthew Miller replied to altruios phasma
@altruios @InkySchwartz @Infoseepage @tess
"Hallucination" is a dangerous anthropomorphism. When a human hallucinates, we perceive things which are not generated from our senses. Our normal state is to interact with our surroundings — that is the source of perception, after all.
LLMs have no grounded normal state. They have no senses and no ability to perceive.
I've heard the term "confabulation" used instead, and I think it fits better.
-
@mattdm
Here's an example of hallucination from Germany: Copilot turned a court reporter into a repeat offender. Microsoft accepts no liability.
https://www.swr.de/swraktuell/baden-wuerttemberg/tuebingen/ki-macht-tuebinger-journalist-zum-kinderschaender-100.html
-
@stehgeiger @altruios @InkySchwartz @Infoseepage @tess
Sure, and it's easy to find other such cases.
There isn't a "hallucination" in the human sense; or rather, it's _all_ hallucination, just statistically likely to be aligned with shared reality as represented by the training data.
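A sketch of that point, using nothing but a softmax over made-up logits (the vocabulary and scores are invented, not from any real model): a wrong token comes out of exactly the same sampling step as a right one, so there is no separate "hallucination" mode to detect.

```python
import math
import random

# An LLM emits every token, factual or not, by sampling from a probability
# distribution over its vocabulary. The vocabulary and logits below are
# invented illustrative values, not taken from any real model.

vocab = ["Paris", "Lyon", "Berlin", "Atlantis"]
logits = [4.0, 1.5, 1.0, 0.2]  # scores after "The capital of France is"

def softmax(scores: list[float]) -> list[float]:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")

# Sampling occasionally picks a low-probability token: same mechanism,
# wrong fact. Training data only shapes where the distribution's mass sits.
print("sampled:", random.choices(vocab, weights=probs, k=1)[0])
```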