I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own.
-
@inthehands Jesus Christ, Paul.
BANG!
*thud*
-
Plsik (born in 320 ppm) replied to Paul Cantrell
@inthehands I have to say that the level of my pessimism about our civilization has risen again after reading this thread. And that's a good thing. I'm almost never pessimistic enough when I look back in time. Good job, thanks.
-
@inthehands That's an almost impressive perversion of a perfectly good argument.
Somebody had interviewed the CEO of a software development company (can't remember who or which; doesn't matter). In a time when "you must fix all bugs before release" was popular dogma, they asked whether he was comfortable with shipping releases that he knew contained bugs.
"Absolutely!" he said with a big smile.
Then he explained. Their customers got more value from having the software as it was, even with those bugs, than from not having it at all. Perfect as the enemy of the good, the value of "now" vs. "eventually, when it's perfect," and all that.
I think he was also enthusiastic about fixing those bugs to improve the quality over time. I'm taking that approach, in any case.
-
Very interesting! @pluralistic said that AI works for ”low-stakes low-value tasks” like ”political disinformation, spam, fraud, academic cheating, nonconsensual porn, dialog for video-game NPCs” but that ”none of them seem likely to generate enough revenue (…) to justify the billions spent (…) nor the trillions in valuation (…)” and that there are probably no ”low-stakes, high-value tasks”
[https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle]
But maybe you’re right and most business is indeed low-stakes, high-value.
-
@inthehands
Two other factors in why products sucked back then, which I suspect may recall grim memories: the executives who just wanted "it all computerising" with no idea of what "it" was at either the "as is" or the "to be" stage, and a furious impatience with any enquiry about it; and those (sometimes the same people) who required full backwards compatibility with pen and paper, because they'd be running both in parallel while laying off people on the strength of computerised efficiency.
-
@inthehands you are absolutely right, this happens in tech, and in all business. It just might not be what happens with genAI, though. GenAI might actually be the hoax us cynics think it is, and be a total wipeout of hundreds of billions of dollars (and emissions and wrecked jobs). The church of Altman looks a lot more like a hoax than not. Part of its "sucking less" is like a psychic's act.
-
@inthehands Sounds like chip design/CAD software. I remember, at a startup, my boss installing a new Cadence component, hitting a problem with it, and getting the response from support that they hadn't had anyone get that far before.
-
@inthehands in a sane world, after you realized the whole health score was a hoax, you'd be able (or even required) to report them to some kind of institution that would send inspections and lawsuits their way.
We don't live in a sane world.
-
@inthehands Also, regarding the customer support example:
Yes, it saves time for the company, and it was valuable human time that was being spent on pointless things. But it makes the *customer* waste *more time* doing pointless things.
Moreover, it creates an asymmetry, where the company can spend relatively little time to make the customer waste a lot of time.
So AI is a weapon.
And as such, it should be regulated.
-
@inthehands This thread started out as incredibly deflating and ended up flatly horrifying.
-
@inthehands You're giving me so many flashbacks -- and not the good kind -- to being an internal architect in "Big IT" in the early '00s
-
@Npars01
Yep, this is all at the heart of it. It’s the same thing that brought us the 2008 crash: too much investor money looking for returns that don’t exist. Then, it was just investors wanting more mortgages to invest in than there were mortgages to reasonably offer. But this time it’s broader, and I fear much worse.
-
George Ellenburg (he/him/his) replied to Paul Cantrell
@[email protected] I'm calling it. You either worked for Oracle or IBM.
-
@inthehands thanks Paul, your posts helped set to rest some confusion I had about gen AI: how can people, knowing it is bad for their own businesses when it inevitably and continually fails and implicates them, still want to incorporate it into their products?
Your thread answers this question!
-
@rrdot
Cheers! Screaming into the void may be futile, but I guess it’s nice when somebody enjoys the concert?!
-
Paul Cantrell replied to 🔏 Matthias Wiesmann
@thias
Absolutely. Getting to the place where Goodhart is the relevant problem was my foolish dream.
-
@peter
Yeah, the first example is the only one in the thread that's •not• bullshit. Point is that (1) if you say it’s terrible, you have to ask “As compared to what?”, and the point of the later examples is (2) the true business goal isn’t always what you think it is. If we want to understand the function of LLMs for corps, we have to work through those questions. And given the amount of work that’s •already• BS, I’m not necessarily shorting the LLMs even though they’re BS too.
-
Advanced Persistent Teapot replied to DeManiak 🇿🇦 🐧
@kaasbaas @inthehands like the well-being of staff, for example
-
@stew_sims
Yeah. For example: some of those ancient mainframe systems are just rock solid. Somebody built it right in like 1972 in COBOL or whatever, and it’s far wiser to do hardware maintenance to keep the same code running than to attempt a rewrite!
Point is that in my first example, yes, the bad product really •was• better. “Compared to what?” and “Toward what goal?” are both questions that can have very surprising answers.