I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own.
-
Paul Cantrell replied to Paul Cantrell
So Megacorp’s new AI customer support tool describes features that don’t exist, or tells people to eat nails and glue, or is just •wrong•.
Guess what? Their hapless, undertrained, poverty-wage, treated-like-dirt humans who used to handle all the support didn’t actually help people either. Megacorp demanded throughput so high and incentivized ticket closure so much that their support staff were already leading people on wild goose chases, cussing them out, and/or quitting on the spot.
6/
-
Gen AI doesn’t cuss people out, doesn’t quit on the spot, and has extremely high throughput. It leads people on wild goose chases •far• more efficiently than the humans. And hell, sometimes, just by dumb luck, it’s actually right! Like…maybe more than half the time!
When your previous baseline is the self-made nightmare of late stage capitalism tech support, that is •amazing•.
7/
-
And you can control it (sort of)! And it protects you from liability (maybe)! And all it takes is money and environmental disaster!
Run that thought process across the other activities where corps are deploying gen AI.
I suspect a lot of us, despite living in this modern corporate hellscape, still fail to understand just how profoundly •broken• the operations of big businesses truly are, how much they function on fakery and deception and nonsense.
So gen AI is fake? So what. So is business.
8/
-
Jeff Miller (orange hatband) replied to Paul Cantrell
@inthehands oh ouch you just hit me in (current job) better than Excel and (previous job) automated response substituting when it should be augmenting human support agents.
-
I am hamming this up for cynical dramatic effect, but I do think there’s a serious thought here: so much activity within business delivers so little of actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win.
Trying to imagine the world with MBA goggles on, it seems perfectly rational.
When people consider gen AI, I ask them to ask themselves: “Does it matter if it’s wrong?” Often, the answer is “no.”
9/
-
If you’ll indulge another industry story — sorry, this thread is going to get absurdly long — let me tell you about one of the worst clients I ever had:
Group of brothers. They’d made fuck-you money in marketing or something. They founded a startup with a human benefit angle, do some good for the world, yada yada.
Common now, but new-ish idea at the time: gamified online health & well-being platform that a company (or maybe insurer, whatever) offers to its employees.
10/
-
Paul Cantrell replied to Paul Cantrell
The big brilliant idea at the heart of the product they were building? The Life Score: a number that quantifies your overall well-being, a number that you can try to raise by doing healthy activities.
How exactly was this number to be calculated? Eh, details.
11/
-
Paul Cantrell replied to Paul Cantrell
They had this elaborate business plan: the market opportunity, the connections, the moving parts — and in the middle of this giant world-domination scheme, a giant hole. Just a black box (currently empty) labeled “magic number that makes people get healthier.”
The core feature of their product, the lynchpin that would make the entire thing actually useful, was just a big-ass TBD.
12/
-
I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.
13/
-
No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.
So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:
1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it
14/
-
@inthehands Agreed, and there's another level of fakery here that interests me. I suspect a bunch of the corporate "AI" projects are just taking advantage of the hype wave to rebuild something that needed rebuilding. That key people know the "AI" benefit is zero, but it's the only way to get the rest of the project done.
-
Paul Cantrell replied to Paul Cantrell
And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.
The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.
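(The flaw is easy to state in code, too. A minimal sketch — all names and point values invented for illustration — of the four-step notion and why the number changes meaning after intake:)

```python
# Hypothetical sketch of the Life Score mechanism described in this thread.
# Every name and point value here is made up for illustration.

class LifeScore:
    def __init__(self, intake_answers):
        # Steps 1-2: questionnaire responses mapped to points.
        # At this moment the score (however dubiously) reflects health.
        self.score = sum(intake_answers.values())

    def complete_activity(self, points):
        # Step 4: every completed activity adds points.
        # From here on the score measures participation, not health.
        # It can only go up, even if the user's health is declining.
        self.score += points


user = LifeScore({"sleep": 10, "diet": 5, "exercise": 2})
print(user.score)  # 17: measures reported lifestyle

user.complete_activity(3)
user.complete_activity(3)
print(user.score)  # 23: now measures activities completed
```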
And like an •utter• damn fool, I thought this was a flaw.
15/
-
It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.
Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.
It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.
16/
-
Paul Cantrell replied to Paul Cantrell
Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.
Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.
17/
-
Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.
Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.
18/
-
Again, my gen AI question: Does it matter if it’s wrong?
I mean, in some situations, yes…right? Like, say, vehicles? that can kill people?
Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.
If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?
19/
-
Does it matter if it’s wrong?
In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.
/end
-
@inthehands your CS lectures must be excellent if these thought provoking threads are anything to go by. I've never considered any of that before, really interesting.
-
A Scape Of Goats 🍉 replied to Paul Cantrell
@inthehands it's not broken, it's built that way. it is that way because the bosses, who make decisions about these things, don't have to deal with the consequences of their decisions because they don't do the work.
-
Paul Cantrell replied to A Scape Of Goats 🍉
@diedofheartbreak
Or maybe benefit and/or harm to real actual human beings are simply not the consequences in question.