It's like a running joke that nobody likes AI but companies are shoving it into products anyway, but as far as I can tell that's not true: most (many?) people accept it, have mostly replaced search engines with it, and trust anything it tells them. I've had multiple people recently go to their phone and say "ChatGPT says…" to answer simple questions. If I point out that it's wrong a lot, they get defensive and tell me how rare that is and how it's mostly right, without actually verifying this.
-
Sam Whited replied to Sam Whited
Normally I blame valuing convenience above everything else for this sort of thing, but as far as I can tell it's not even more convenient or easier for them: a web search, or Wikipedia, would have turned up more accurate answers to most of their queries (e.g. "What time is this local business open?"). I don't know how to counteract this though, or convince people to care about the harm it does, or even convince them that it's not actually helping them.
-
@sam We had an Alexa to play with a few months ago, and a lot of what it told us was just rubbish. Not sure how Alexa works, but it was easier to just look it up ourselves.
-
Sam Whited replied to Cycling_Liz
@Cycling_Liz oooh, that actually gives me an idea: I wonder if people are so accepting and uncritical of it because they're already used to this pattern from stuff like Alexa. Even before all this LLM nonsense took off, they had a device that could search the web and respond to simple queries, so this just seems like an upgraded version of that: they're already used to using it, so they use the new thing without thinking about it (if that makes sense?)
I'll have to mull that over; thanks!
-
@sam as an observer of college-level approaches, it's been disheartening to watch the wavering media literacy over the last 15 years: from showing distrust of convenient but unpredictably biased sources like Wikipedia, to a 2016-inflected ramping up of good basic approaches like Caulfield's SIFT, and then mostly chucking in the towel in the face of polarization post-2020. Throughout it all, students' default is to pick the first answer they see; critical evaluation is mostly not at hand at all.
-
@sam critical evaluation / media literacy is certainly opposed to convenience. The easy markers have also largely degraded: we'd all have increased hesitation about relying on any non-evaluated marker (academic journal? major news org? recognizable media brand domain? eep) at this point. That's not to absolve LLMs of their untrustworthiness, but they are more like the rest in needing cross-verification and applied critical thinking. It's asking folks to do more work, or they may as well take the first answer.
-
@loppear I think what sets LLMs apart from those is the conversational UI. People conflate "able to write complete sentences that are grammatically correct" (a thing LLMs are good at) with "able to synthesize ideas and context into something new" (a thing they can't do). They may not think about it in those terms, but it gives your brain an easy way to launder "I don't want to think critically" into "this machine has thought critically, and I trust it to have more expertise than me"
-
@Luke O. @Sam Whited
I'm not sure when it started because I think I managed to insulate myself from it, but at some point social media started to encourage sharing text as screenshots, usually without "receipts" (that's citation/reference information, for old people like myself). That was bad. LLM chatbots are worse, not least because they undermine the ability of even motivated users to document sources. These two trends together are a very, very bad combination.
-
@sam and that they present as bias-washing, either explicitly as Google/Bing "summaries" of the results below, or by being marketed as a synthesis of all the sources. Easily (or intentionally) misinterpreted as already doing the hard work of cross-evaluating multiple results. So yes, in that sense (and plenty of others) they are much worse for the information ecosystem.
-
@sam recently I asked some domain experts (bike mechanics!) in a group chat a clarification question about brake bleeds. Then I had to flat-out say "your brains burn down fewer forests than LLM answers" when one of them sent me a ChatGPT response.
I don't know if the eco angle works, but that's where I ended up.
-
@creek that hurts me so much; I feel like 99% of people I know wouldn't care about the environmental impact part, so I've started using that less in my complaints, even though to me it seems like the single biggest externality that should be avoided.