It's interesting how the rhetoric around #AI shifts, and now companies are using phrases like "embrace #AI or face extinction". I'm thinking of Adobe's recent move to force artists to use the AI features in its products, under the threat that they are "not going to be successful" if they don't; or Rosetta announcing that linguists need to use Rosetta's AI features or face the "extinction" of the languages they work on.
It's a short step from "extinction"/"unsuccessful" ("low fitness") to "elimination". The latter word is what is actually meant. The passive-voice, inevitability framing purposely obscures the agency of the literal, nameable human beings who are attempting to bring this reality into existence. "Embrace #AI or we will do our best to eliminate your profession, your livelihood, and you" is more precise, and it brings out the hostility these corporate statements attempt to hide.
This dehumanizing and ultimately eugenic idea frequently hides in plain sight like this. Sometimes evolutionary or genetic language and metaphors are used. Don't accept it. These folks may try to create this reality but that doesn't mean they'll succeed and it doesn't mean we need to surrender and let them succeed.
#GenerativeAI #GenAI #AI #eugenics
-
Relentless repetition of dehumanizing language is a pillar of coercive control, which is really what generative AI has been and will continue to be about.
We don't need to live in a world where things we don't want are regularly forced upon us by people who have more power than we do. We can reject that world even when we have little choice but to navigate it.
#AI #GenAI #GenerativeAI #CoerciveControl
-
I've spoken to quite a few people in and around #tech who've expressed sentiments along the lines of the magic being gone, it not being fun anymore, or it generally changing in negative ways, especially in the last few years. I think the open hostility tech leaders have expressed towards workers and users, along with so many tech leaders' open embrace of #eugenics -- which is fundamentally a hateful ideology -- goes partway towards explaining this shift. My read is that folks are sensing that the sector has become hostile to most people, and they experience that sensation as the magic being gone, or the products not being interesting, or it not being fun anymore. I would say that something more sinister than that is going on.
-
@abucci Yeah, ever since ChatGPT there was a widespread line "use AI or be left behind" which severely troubles me! But you're saying the language is escalating?
-
I feel like it is. It seems to be changing form from "left behind" to "you'll become a failure/obsolete". Certainly the Adobe and Rosetta quotes I cited have the latter, more threatening, tone.
Neither is true, though, either as written or factually. You won't be "left behind". Instead, bosses/employers/owners will actively and knowingly modify job descriptions, fire or lay people off, and so on, to bring about conditions where the people they've chosen to exclude are "behind". This isn't gravity; it's human decision-making, and the decisions could be made differently. That's why I characterize rhetoric like this as threatening: the powers that be are threatening to take people's livelihoods away and use AI as an excuse, in an attempt to avoid blame for their destructive choices. That's also why I characterize it as coercive control: they are trying to control people's behavior and choices using veiled threats combined with attempts to erode people's sense of self-worth ("you'll end up a failure, a loser, if you don't do what we want").
-
@abucci Absolutely...
Sigh, being well known as a techie, this has been a frustrating time for me! People want to talk with me about "AI", but they don't accept my educated opinions on it, locking me into frustrating arguments.
I'm now wondering if the phrase "coercive misinformation justifying exploitation of humans & planet" will help me sidestep it...
-
@[email protected] I'm sorry about that. I find myself in frustrating arguments about AI as well. That's partly why I post about it here, to refine my views.
I feel like these AI conversations can end up frustrating for everyone involved partly because there is a host of prejudices and blind spots that come together around this technology. For instance, a whole lot of people, especially Americans, are energy blind--they don't have a very good sense for how much energy it takes to do stuff, where that energy comes from, and what it takes to generate and transmit energy. Some of the more outlandish "proposals" about powering the hyperscale data centers behind generative AI ambitions would have us build more nuclear reactors than there is uranium to fuel, or deploy more batteries than we have minerals to build. Most people aren't informed enough to say "uh, that's physically impossible to do?" and some end up believing these prognostications. I also feel that a whole lot of people aren't really aware of just how much outright theft of material has occurred to create things like ChatGPT. Just straight up taking such an enormous amount of material that creators put so much hard work into is unsustainable. It will have knock-on effects, like people refusing to put their work on the internet for fear of having it stolen, that degrade life for everyone (it's a kind of tragedy of the commons perpetrated by a small number of rapacious actors). Is that the world we want to live in just to have a whiz bang toy that a lot of people say makes their jobs harder?
-
@abucci I feel this so, so much.
And I feel helpless against it.