I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own.
-
I am hamming this up for cynical dramatic effect, but I do think there’s a serious thought here: so much activity within business delivers so little actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win.
Trying to imagine the world through MBA goggles, it seems perfectly rational.
When people consider gen AI, I ask them to ask themselves: “Does it matter if it’s wrong?” Often, the answer is “no.”
9/
-
If you’ll indulge another industry story — sorry, this thread is going to get absurdly long — let me tell you about one of the worst clients I ever had:
Group of brothers. They’d made fuck-you money in marketing or something. They founded a startup with a human benefit angle, do some good for the world, yada yada.
Common now, but new-ish idea at the time: gamified online health & well-being platform that a company (or maybe insurer, whatever) offers to its employees.
10/
-
Paul Cantrell replied to Paul Cantrell
The big brilliant idea at the heart of the product they were building? The Life Score: a number that quantifies your overall well-being, a number that you can try to raise by doing healthy activities.
How exactly was this number to be calculated? Eh, details.
11/
-
They had this elaborate business plan: the market opportunity, the connections, the moving parts — and in the middle of this giant world-domination scheme, a giant hole. Just a black box (currently empty) labeled “magic number that makes people get healthier.”
The core feature of their product, the lynchpin that would make the entire thing actually useful, was just a big-ass TBD.
12/
-
I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.
13/
-
No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.
So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:
1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it
14/
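A minimal sketch of that four-step notion, and of the flaw it bakes in (all names and point values here are invented for illustration):

```python
# Hypothetical sketch of the "Life Score" scheme described above.
# The problem: after intake, the score can only ever go up.

INTAKE_POINTS = {          # points assigned to questionnaire responses
    "sleeps_8_hours": 10,
    "smokes": -15,
    "exercises_weekly": 12,
}

ACTIVITY_POINTS = {        # every completed suggested activity adds points
    "take_a_walk": 5,
    "drink_water": 2,
}

def intake_score(responses):
    """Initial Life Score: a (dubious) snapshot of current health."""
    return sum(INTAKE_POINTS[r] for r in responses)

def after_activities(score, completed):
    """Post-intake: the score now just counts activities performed."""
    return score + sum(ACTIVITY_POINTS[a] for a in completed)

score = intake_score(["smokes", "sleeps_8_hours"])    # -5: measures health
score = after_activities(score, ["take_a_walk"] * 4)  # 15: measures clicks
```

Note that nothing in the scheme ever re-measures health: once intake is done, the only defined operation is addition, so the number rises whether or not the person is actually healthier.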
-
@inthehands Agreed, and there's another level of fakery here that interests me. I suspect a bunch of the corporate "AI" projects are just taking advantage of the hype wave to rebuild something that needed rebuilding. That key people know the "AI" benefit is zero, but it's the only way to get the rest of the project done.
-
And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.
The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.
And like an •utter• damn fool, I thought this was a flaw.
15/
-
It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.
Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.
It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.
16/
-
Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.
Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.
17/
-
Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.
Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.
18/
-
Again, my gen AI question: Does it matter if it’s wrong?
I mean, in some situations, yes…right? Like, say, vehicles that can kill people?
Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.
If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?
19/
-
Does it matter if it’s wrong?
In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.
/end
-
@inthehands your CS lectures must be excellent if these thought provoking threads are anything to go by. I've never considered any of that before, really interesting.
-
A Scape Of Goats 🍉 replied to Paul Cantrell
@inthehands it's not broken, it's built that way. it is that way because the bosses, who make decisions about these things, don't have to deal with the consequences of their decisions because they don't do the work.
-
Paul Cantrell replied to A Scape Of Goats 🍉
@diedofheartbreak
Or maybe benefit and/or harm to real actual human beings are simply not the consequences in question.
-
@tehstu
That’s kind! My classroom lectures / discussions are much more cheerful: lots more “let’s be excellent to one another and make some cool things.”
-
Extend this thought experiment to political campaign funding and tech billionaires.
Silicon Valley bought an election win for a set of GOP crooks because "innovation" has been redefined as:
1. Successful scams & frauds
2. Tax evasion
3. Corporate welfare & subsidies
4. Monopolies
5. Regulatory capture
6. Pollution & climate denial
7. Deregulation
Silicon Valley does not want saleable products that generate revenue.
They want Saudi cash. They want Russian oligarchs...
1/2
-
2/2
...money laundering through their VCs and hedge funds.
They want their cut of Chinese IP theft & ubiquitous surveillance capabilities.
They want to preserve patriarchy & white supremacy, plus the wealth it generates for them.
https://whatever.scalzi.com/2024/02/21/the-big-idea-cory-doctorow-4/
-
@inthehands crying that this is the shithole world we’ve created for ourselves and we can’t make it stop.