I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own.
-
I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.
13/
-
No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.
So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:
1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it
14/
-
@inthehands Agreed, and there's another level of fakery here that interests me. I suspect a bunch of the corporate "AI" projects are just taking advantage of the hype wave to rebuild something that needed rebuilding. I suspect key people know the "AI" benefit is zero, but it's the only way to get the rest of the project done.
-
Paul Cantrell replied to Paul Cantrell
And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.
The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.
And like an •utter• damn fool, I thought this was a flaw.
15/
-
It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.
Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.
It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.
16/
-
Paul Cantrell replied to Paul Cantrell
Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.
Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.
17/
-
Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.
Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.
18/
-
Again, my gen AI question: Does it matter if it’s wrong?
I mean, in some situations, yes…right? Like, say, vehicles? That can kill people?
Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.
If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?
19/
-
Does it matter if it’s wrong?
In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.
/end
-
@inthehands your CS lectures must be excellent if these thought provoking threads are anything to go by. I've never considered any of that before, really interesting.
-
A Scape Of Goats 🍉 replied to Paul Cantrell
@inthehands it's not broken, it's built that way. it is that way because the bosses, who make decisions about these things, don't have to deal with the consequences of their decisions because they don't do the work.
-
Paul Cantrell replied to A Scape Of Goats 🍉
@diedofheartbreak
Or maybe benefit and/or harm to real actual human beings are simply not the consequences in question.
-
@tehstu
That’s kind! My classroom lectures / discussions are much more cheerful: lots more “let’s be excellent to one another and make some cool things.”
-
Extend this thought experiment to political campaign funding and tech billionaires.
Silicon Valley bought an election win for a set of GOP crooks because "innovation" has been redefined as:
1. Successful scams & frauds
2. Tax evasion
3. Corporate welfare & subsidies
4. Monopolies
5. Regulatory capture
6. Pollution & climate denial
7. Deregulation
Silicon Valley does not want saleable products that generate revenue.
They want Saudi cash. They want Russian oligarchs...
1/2
-
2/2
...money laundering through their VCs and hedge funds.
They want their cut of Chinese IP theft & ubiquitous surveillance capabilities.
They want to preserve patriarchy & white supremacy, plus the wealth it generates for them.
https://whatever.scalzi.com/2024/02/21/the-big-idea-cory-doctorow-4/
-
@inthehands crying that this is the shithole world we’ve created for ourselves and we can’t make it stop.
-
@[email protected] @[email protected] you know we really could make it stop... it's just the rich fucks with infinitely more resources than every other human kinda don't want to stop it right now smh
-
@inthehands off topic to this thread, but damn Paul, you've been posting some amazing and on point thoughts and stories the past few days. Thanks for sharing!
-
🔏 Matthias Wiesmann replied to Paul Cantrell
@inthehands well Goodhart’s law applies, even if it was a crappy metric to start with.
-
Óscar Morales Vivó replied to Paul Cantrell
@inthehands or as I usually put it, the bullshit machine looks awful nice to the folks that have made it with bullshit.