As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
A thread, with links:
Chirag Shah and I wrote about this in two academic papers:
2022: https://dl.acm.org/doi/10.1145/3498366.3505816
2024: https://dl.acm.org/doi/10.1145/3649468
We also have an op-ed from Dec 2022:
https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
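To make that concrete, here is a toy sketch (with invented probabilities, not any real model's) of the one operation an LLM repeats: sampling the next word form from a distribution conditioned on the context so far. Truth never enters the computation, only plausibility:

```python
import random

# Hypothetical distribution a model might assign to continuations of
# "The capital of France is" -- the numbers are made up for illustration.
next_token_probs = {
    "Paris": 0.90,       # frequent in training text, so highly probable
    "Lyon": 0.05,        # plausible-sounding but wrong
    "beautiful": 0.05,   # also a plausible continuation
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability mass."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Nothing in that loop checks whether "Paris" is true; it is simply the most probable word form. When the output happens to be correct, that is the distribution doing its job, not the system knowing anything.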
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance.
Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time: people will be more likely to trust the output, and less able to fact-check the 5% that is wrong.
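A toy back-of-the-envelope model makes the point; the verification rates below are assumed for illustration, not measured:

```python
def unverified_errors(accuracy: float, verify_rate: float,
                      queries: int = 1000) -> float:
    """Errors that slip through because the user never checked them."""
    errors = queries * (1 - accuracy)
    return errors * (1 - verify_rate)

# Assumption: no one relies on a coin-flip system without checking,
# but users of a 95%-right system check almost nothing.
print(unverified_errors(accuracy=0.50, verify_rate=1.00))  # 0.0
print(unverified_errors(accuracy=0.95, verify_rate=0.05))  # 47.5
```

Under these assumptions, the unreliable system lets nothing through unchecked because nobody trusts it, while the mostly-right system quietly delivers dozens of unverified errors per thousand queries.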
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access.
Setting things up so that you get "the answer" to your question cuts off the user's ability to do the sense-making that is critical to information literacy.
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
That sense-making includes refining the question, understanding how different sources speak to the question, and locating each source within the information landscape.
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
Imagine putting a medical query into a standard search engine and receiving a list of links including one to a local university medical center, one to WebMD, one to Dr. Oz, and one to an active forum for people with similar medical issues.
If you have the underlying links, you have the opportunity to evaluate the reliability and relevance of the information for your current query --- and also to build up your understanding of those sources over time.
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
If instead you get an answer from a chatbot, even if it is correct, you lose the opportunity for that growth in information literacy.
The case of the discussion forum has a further twist: Any given piece of information there is probably one you'd want to verify from other sources, but the opportunity to connect with people going through similar medical journeys is priceless.
>>
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
Finally, the chatbots-as-search paradigm encourages us to just accept answers as given, especially when they are stated in terms that are both friendly and authoritative.
But now more than ever we all need to level up our information access practices and hold high expectations regarding provenance --- i.e., the citing of sources.
The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "information". Don't be that guy.
/fin
-
Prof. Emily M. Bender (she/her) replied to Prof. Emily M. Bender (she/her)
Sunday's thread on why chatbots & LLMs are a bad solution for information access, with replies to the most common types of counterarguments I encountered in my mentions.
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
-
Riley S. Faelan replied to Prof. Emily M. Bender (she/her)
@emilymbender Unless the chatbot is able to do a good infodump.