TL;DR: induced demand isn’t just for highways
-
@inthehands I think you’re right on all of this. I do wonder if perhaps it would still represent an improvement. After all, one of the biggest mistakes people make (IMNSHO) is they compare a flawed outcome to a utopian possibility, not the way something will actually happen otherwise.
I’m less worried right now about the flawed self-driving solutions than I am about the flawed legal structures around them. This situation in Cali where nobody currently can be fined for their misbehavior is untenable.
-
Paul Cantrell replied to Paul Cantrell last edited by [email protected]
One of the good-and-bad things that happens when we move human activity into software is a •narrowing of the problem space•.
Humans are full of ad hoc decisions. We fudge. We finagle. We mess up, but we also fix up. Humans are the adaptable part of complex systems. Humans are both producers of and defenders against failure. (https://how.complexsystems.fail/)
When you move a task into software, one of the central questions is “What happens to that human flexibility?”
-
@inthehands Will read the article later, but you've nailed my thoughts. I was really hopeful for self-driving cars, figuring the predictable nature would be exactly what we need. I rarely drive these days, but on a short school run I braked hard twice this week to avoid T-boning other cars. That's what I hoped computers would avoid.
What we got is Teslas rolling through stop signs and doing % over the posted limit because that's "what we all do anyway".
If I could never drive again, I would.
-
Usually, at least if we’re doing a good job, the answer is “we split it:”
One part of the problem becomes simpler, less flexible, more consistent. We make up rules: “every item has exactly one price,” or “every item has one price per discount-item combination,” or “every item has N SKUs, each of which has one price per….” The rules evolve, they adapt, they grow — but they remain consistent until we update them.
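That rule evolution can be sketched in code. This is a toy model (all names and prices invented, not from any real system) showing how each successive rule stays perfectly consistent until we replace it with the next:

```python
# v1: every item has exactly one price.
price_v1 = {"widget": 999}  # cents

# v2: every item has one price per discount-item combination.
price_v2 = {("widget", None): 999, ("widget", "clearance"): 650}

# v3: every item has N SKUs, each with one price per discount combination.
price_v3 = {
    ("widget", "SKU-1", None): 999,
    ("widget", "SKU-1", "clearance"): 650,
    ("widget", "SKU-2", None): 1099,
}

def lookup(item, sku=None, discount=None):
    """Resolve a price under whichever rule version applies."""
    if sku is not None:
        return price_v3[(item, sku, discount)]
    if discount is not None or (item, None) in price_v2:
        return price_v2[(item, discount)]
    return price_v1[item]
```

The point of the sketch: within any one version, the machine never wavers. The flexibility lives entirely in the humans who decide when v2 becomes v3.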
The beauty and the peril of software is consistency: •it follows those rules we invent•.
-
Beauty? Because consistency can really pay off.
Peril? Because sometimes we need exceptions.
I said we “split” the problem. Software takes one part of the job, a version of the problem that is simplified so that machine consistency is •possible•. The other part of the job: human intervention. We build software to loop in humans to say, “eh, damaged item, I’m giving you a discount” or whatever. •If• we’re doing it right.
Consistency with a dash of human intervention.
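A minimal sketch of that split (all names invented): the machine applies the consistent rule, and anything outside the rule gets escalated to a person rather than guessed at.

```python
CATALOG = {"widget": 999}  # cents; the consistent part: one price per item

def request_human_override(item, reason):
    # Stand-in for paging a clerk or support agent;
    # here we just record that a human needs to decide.
    return {"item": item, "reason": reason, "needs_human": True}

def price_for(item, condition="ok"):
    if condition == "ok":
        # The machine's half of the job: apply the rule, every time.
        return {"item": item, "price": CATALOG[item], "needs_human": False}
    # "eh, damaged item, I'm giving you a discount" -- not the machine's call.
    return request_human_override(item, condition)
```

The design choice worth noticing: the software doesn't try to price the damaged widget. It knows which cases are inside its rules and hands the rest back to a human.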
-
One classic way this goes wrong is when we forget the “human intervention” part.
You end up with these Kafkaesque nightmares where somebody is stuck in an infinite product return loop or their insurance claim is denied or the state thinks they’re dead or they get a zillion parking tickets because their custom license plate spells “NULL” (https://arstechnica.com/cars/2019/08/wiseguy-changes-license-plate-to-null-gets-12k-in-parking-tickets/)…and a human is stuck in process hell because •the software just does that• and software is hard to change.
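The NULL-plate story is a tidy example of “the software just does that.” A hypothetical sketch of the presumed failure mode (real DMV systems are of course more complicated than this): if citations with no recorded plate store the literal string "NULL", then a real vanity plate spelling NULL matches every plateless record in the database.

```python
citations = [
    {"plate": "NULL",   "fine": 35},  # officer recorded no plate
    {"plate": "ABC123", "fine": 50},
    {"plate": "NULL",   "fine": 35},  # another plateless record
]

def tickets_for(plate):
    # Consistent, rule-following matching -- which is exactly the problem
    # when a sentinel value collides with a legitimate one.
    return [c for c in citations if c["plate"] == plate]

# The NULL-plate owner inherits every orphaned ticket:
owed = sum(c["fine"] for c in tickets_for("NULL"))
```

No individual line of that code is wrong; the system is doing precisely what it was told, which is why the human stuck inside it can't argue their way out.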
-
I thought •that• was where self-driving cars were going to land: narrowed problem space, sometimes they fail, but at least they’re really consistent. Not great, but again, arguably an improvement over human drivers.
But nooooo. Now, thanks to the Glorious Dawn of AI Megahype, we have companies falling over themselves to replace all those annoying expensive humans…with •randomness•.
This is just bonkers to me.
-
I mean, software is…kind of terrible. It’s expensive to build and maintain. It constantly throws our bad assumptions back in our faces. It removes the human flexibility that keeps systems afloat, unless we work hard to prevent that.
But at least it’s consistent.
Whatever it does, it •keeps doing that thing• with a high degree of reliability. It doesn’t forget to write things down, or lose that scrap of paper, or show up to work high. When it fails, 99.9% of the time it’s because humans told it to.
-
That consistency is the whole appeal of computers. Without that, why would any organization ever want to delegate anything to software?!
And now we have executives falling over themselves to replace it with a “random human-imitating chaos machine”?
Really?
Really?!?
I just…Do you even…What do you think…
[the remainder of this thread is incoherent muttering]
-
concept of a display name replied to Paul Cantrell last edited by
@inthehands one of the big assumptions behind AI hype -- the unspoken presupposition -- is that the 99.9% reliability of traditional software will be complemented by the apparent capacities of generative systems and all the exponential possibilities entailed therein
in practice, because the generative systems are making stuff up, they're going to pollute traditional software into uselessness with absolute garbage inputs.
they're fundamentally two different things, and they cannot interface
-
@donw
Making •drivers• liable for accidents they cause, regardless of whether they chose to delegate their driving to a machine, would whip this whole thing into shape real damn fast.
-
Replies from @stfp and @donw highlight the issue of liability and accountability. Spot on; that’s a central question here:
https://h4.io/@stfp/113068916131522497
https://mastodon.coffee/@donw/113068874134659387
-
Re this from @thedansimonson, the phrase “information pollution” has been rattling around in my head a lot lately:
https://lingo.lol/@thedansimonson/113068984297050648
AI-generated nonsense. Google results filling with content-farmed garbage (written by humans and by AI). Steve Bannon’s “flooding the zone with shit.” GIGO.
→ all “information pollution”
-
Cave Cattum replied to concept of a display name last edited by
@thedansimonson @inthehands But the problem there is conflating "generative AI" with all of machine learning, no? It is quite possible to build reliable (safety critical) software systems that solve hard problems using machine learning AND do not "hallucinate" anything. But there is no known way to do it cheaply.
-
Re this from @thatandromeda, I also think that there’s •still• immense promise in automated driver assistance for accident prevention. For example, I’ve driven a couple of cars with radar cruise control that prevents rear-ending people at speed, and found it more helpful than not.
But that sort of thing doesn’t seem to be where the money is flowing.
Andromeda Yelton (@[email protected])
@[email protected] We used to have the same hope (plus an element of "some people cannot drive at all and this would open options"), but then my husband worked on self-driving vehicles and became an extreme self-driving vehicle hype pessimist... There ARE aspects of automation in this space that are super good (e.g. lane divergence alarms) but the lift from there to full self-driving is......big
-
@fgcallari @thedansimonson
There is indeed tremendous unexplored potential in that space. Classifier systems (ML or not) can outperform humans for some problems, and can give an expedited first step for others. When the model turns to human augmentation instead of human replacement, things get a lot more sensible. Maybe we’ll get there on the other side of this hype cycle.
-
concept of a display name replied to Cave Cattum last edited by
@fgcallari @inthehands yes. from a cost perspective, a lot of applications of older techniques are simply ignored. the problem space wasn't exhausted, but few were willing to invest in fully exploring it from a commercial perspective.
-
@inthehands Public mass transit > self-driving cars any day
-
@belehaa
I mean, yes, for sure, that’s my tune too. Also, cars will be with us for a long time, and I’m all for reducing the harm they cause. If self-driving were a route to reducing the number of pedestrians and cyclists killed by cars (and thus making walking and biking more attractive), I’d be all on board.
-
@inthehands Same! It just feels like a mighty big and as-yet-unsupported If