I hate how all AI hype is predicated on "if we can just make this not be broken then it would be an amazing product"
And because AI-produced things look kinda close to the real deal, people buy it: it feels like it just needs a small improvement, even though its flaws are a fundamental part of the technology.
Just don't draw the weird 6th finger. Just don't make up things when you don't have a real answer. Just don't change the environment in an AI-generated game entirely if the player turns around 180 degrees.
These things *feel* like they're small, solvable problems to people who don't know better. We could easily fix those things if humans were doing the work!
But AI can't, and it never will be able to, because the flaws come out of the same process as everything it gets right: not doing those things would mean not being able to do anything else either. Like self-driving cars, the solution to these issues will always be 2 years out.
-
Hugs4friends replied to Eniko | Kitsune Tails out now!
@eniko Sooner or later, they have to give up on the octagonal wheels that keep threatening to fall off. But will it be too late? Is it already too late?
-
caranea replied to Eniko | Kitsune Tails out now!
@eniko
I feel the perfect illustration of how easily language can fool us is a study from last year. A team ran a Turing test pitting a group of volunteers against ChatGPT 3.5 and GPT-4. On a whim they added ELIZA into the mix at the last minute -- it beat ChatGPT 3.5. A program from the mid-1960s...
-
@[email protected] @[email protected] Which also shows you that the "AI requires a giant datacenter with enterprise GPUs and high power consumption" argument is complete nonsense. ELIZA doesn't require a supercomputer or the energy consumption of a small country. You can run it locally on an embedded device.
-
[email protected]replied to Hugs4friends βΎπΊπ¦ π΅πΈπ· last edited by
@[email protected] @[email protected] Why do they have to give up? DRM doesn't work, hasn't worked for decades, and will probably never work, but companies continue to spend fortunes on it because the executives believe it can work.
Why should we expect AI to be any different? No matter how much evidence you have that it doesn't work, executives will keep believing it will, and so they'll keep throwing money at it. They can keep trying to use AI for decades to come, because executives can't learn based on evidence.