Good gods the logical leap from “we got a diffusion model to hallucinate a video of doom” to “no more video game programmers”. https://mastodon.social/@arstechnica/113040739034038174
-
@[email protected] I used to have a lot of respect for Ars, but their "AI" coverage in particular has read like press releases for the last few years and their forums went from "reasonably respectable" to "people called me an idiot for arguing that no, ML models do NOT just learn like a person".
-
@[email protected] "the potential here is absurd" says "app developer". Yeah! No it's not! A fucking 3 second clip that most of the time people still knew was "AI generated" is not potential, it's an exercise in waste. Christ.
-
@[email protected] "we can use computers to make video games" my dude do I have some fucking news for you
-
@aud haha yeah my brain immediately went “that is absurd but not in the way you’re implying!”
-
@[email protected] for real. I feel like their ML coverage used to be better but I have yet to read a single article by this guy that isn't just a fluffy press release. I don't blame him, necessarily (although he certainly could be at fault); I assume editorial is the problem here. blehhhhh...
-
@aud they are certainly churning out fluff rapidly
-
@cthos @aud Even the otherwise excellent 404 is writing fluff about that DOOM nonsense. I don't get what's happening.
https://www.404media.co/this-is-doom-running-on-a-diffusion-model/
-
@[email protected] @[email protected] article: “diffusion model running doom!”
reality: “sometimes people couldn’t tell whether a 3 second video clip was the real DOOM or ML generated. They usually could, though.”
-
@[email protected] @[email protected] does any article discuss what hardware the 20 FPS FAUXOOM is running on? Ie, how expensive it is?
(and obviously I want to talk about how you already have to have made DOOM to run a shitty diffusive model that sort of replicates frames from it but) -
@cthos The diffusion model Doom ran interactively, so it's not *just* a video of Doom. The videos were captured by actually "playing" it.
(But apparently, it had trouble with things like dead monsters staying dead when you're not looking at them, the rate you lose energy when standing in the green radioactive water, etc.)
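For concreteness, the loop is something like this minimal sketch (everything here, including the predict_next_frame stand-in and all its numbers, is made up for illustration; the real system runs a full diffusion model per frame, which this emphatically is not):

```python
# Hypothetical "interactive video" loop: each frame is predicted from the
# last few frames plus the player's input, then fed back in as context.
# Nothing simulates game state -- a dead monster stays dead only if the
# model happens to keep predicting it dead.
import numpy as np

H, W = 240, 320      # frame size (illustrative)
CONTEXT = 4          # how many past frames condition each prediction

def predict_next_frame(past_frames, action, rng):
    """Stand-in for the generative model: mean of the context plus
    action-dependent noise. A real diffusion model would run many
    denoising steps conditioned on the same inputs."""
    base = np.mean(past_frames, axis=0)
    noise = rng.normal(scale=5.0, size=base.shape)
    return np.clip(base + 2.0 * action + noise, 0.0, 255.0)

rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 255.0, (H, W, 3)) for _ in range(CONTEXT)]

for step in range(10):                 # "play" for ten frames
    action = step % 3                  # fake input: 0=idle, 1=turn, 2=fire
    frames.append(predict_next_frame(frames[-CONTEXT:], action, rng))
```

Note what's absent: any game state. That's exactly why persistence (dead monsters, health loss) comes out flaky.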
-
@datarama you can see hits not reducing health either. But yeah, point taken.
-
@datarama that is kinda a video though; there are no deterministic game mechanics it’s interacting with, it’s just generating the next most likely frame. Maybe “interactive video”?
-
@cthos Or, well, a really shitty game.
A couple years ago, NVidia trained a GAN to implement Pac-Man, in much the same way as this. It had some of the same weird glitchy behaviour (obviously) ... but, as one of my friends remarked, the only reason this worked at all was that *someone had already made Pac-Man*.
-
@datarama yuuup. Not arguing, just processing “aloud”.
-
@cthos Also: I can't even begin to imagine how you would use anything even resembling this to re-implement, say, Stellaris. Or Stardew Valley, for that matter.
-
@[email protected] @[email protected] Among other things, "thinking every pixel in real time" is insanely inefficient. The whole point of doing, say, 3D modeling is to compress an otherwise enormous amount of information into a comparatively small representation, one that's also naturally aligned with the transformations you intend to apply to it later. AI definitely does not solve the general problem of finding the best representation for arbitrary situations like this.
There's an asymptotic argument that this can't be done in general: in the same way that, if someone handed you the positions and momenta of every subatomic particle making up a rabbit, you wouldn't even be able to tell you were looking at a rabbit, let alone determine whether it was about to eat a carrot. And even if you somehow could, it would take an astronomical number of orders of magnitude more work than just looking at the dang rabbit.
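As a back-of-envelope illustration of that gap (every number below is an assumption for the sake of arithmetic, not a measurement of any real system):

```python
# Compare storing an explicit 3D representation once vs. synthesizing
# every pixel of every frame. All figures are illustrative assumptions.
TRIANGLES = 10_000                    # a modest game-ready mesh
BYTES_PER_VERTEX = 3 * 4              # x, y, z as 32-bit floats
mesh_bytes = TRIANGLES * 3 * BYTES_PER_VERTEX     # ~0.36 MB, reused forever

W, H, FPS, SECONDS = 1280, 720, 20, 60
frame_bytes = W * H * 3                           # 24-bit RGB, one frame
raw_bytes = frame_bytes * FPS * SECONDS           # regenerated from scratch

print(f"mesh, stored once: {mesh_bytes / 1e6:.2f} MB")
print(f"raw frames, per minute of play: {raw_bytes / 1e9:.2f} GB")
```

The mesh is a compact, reusable description; pixel-by-pixel synthesis pays the full cost over again every single frame.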
-
@abucci @cthos Although I'm doubtful it works as more than a gimmick for the immediate future (and, in keeping with GenAI, for stoking fear and misery in exploited people), in a sense this would just continue along the trajectory the software industry has regrettably already established: we gleefully accept and even celebrate that insane inefficiency so we can de-skill developers a little bit more.
-sigh-
2024 tech news reliably makes me want to move into a log cabin somewhere in a forest.
-