Good gods the logical leap from “we got a diffusion model to hallucinate a video of doom” to “no more video game programmers”. https://mastodon.social/@arstechnica/113040739034038174
-
datarama replied to cthos 🐱 last edited by [email protected]
@cthos @abucci Star Trek imagined a future that treasured human effort. Silicon Valley imagines a future that devalues and destroys it.
The idle tech bro speculation I've seen: "We train it on ALL THE GAMES, and then it can interpolate new games out of its latent space!". I'm not sure this will work, but I am sure that the concept of fully synthetic games feels terribly depressing. I *like* interacting with worlds someone else made up, and seeing authorial intent and decisions shine through.
-
@datarama @abucci I mean, same. I want a creative vision, not regurgitated soup.
But I don’t think they’re right. It’s magical thinking that just feeding a diffusion model more and more data will eventually make it spit out new games. They keep tripping over themselves making the image models “better” but that progress seems to have already plateaued.
I’m not a data scientist though so
-
@[email protected] @[email protected] I'm not a data scientist, but I am a computer scientist, and there are lots of good reasons to believe that LLMs/genAI will not be able to fulfill the promises their advocates seem to be making.
Prompt engineering is another "basis set" for describing (a subset of) what you are already able to express with other methods. Current techniques like game engines and modeling tools and whatnot give you a certain grasp on the space of possible games; they make certain games easier to make than others. Whatever kind of genAI gobbledygook ends up being applied to game creation is simply another way of grasping the space of possible games. It too will make a possibly different set of games easier to make than others. That's a basic observation from algorithmic information theory. There's no magic here.
Since most of the genAI models I'm aware of ultimately ground out in "small world" representations, there's a strong case to be made that they will always fall well short of what human beings are capable of. By "small world", I mean they tend to have finite-width bottlenecks. For instance, according to Stephen Wolfram's account, GPT's core LLM outputs a probability distribution over approximately 50,000 tokens. While an impressive amount of English text can be built from combinations of these 50,000 units, not all of it can. Further, humans readily invent new words and patterns and give them meaning; often whenever we see a constraint (like 50,000 tokens) one of the first things we do is blast through it and create patterns outside the constraint. Arguably, this latter process is closer to what language actually IS than the token-emission processes boosters tend to fixate on. Similar comments apply to image-generating AI.
My belief is that if "game" generating AI ever becomes common, as soon as its limits are perceived people will start building tools that transcend the limits, to make games that can't be made with the generative AI. This is what matters, and this is something generative AI may never be able to do.
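The "probability distribution over approximately 50,000 tokens" point above can be sketched in miniature (a toy four-entry vocabulary, not GPT's actual one): the output head of such a model is a softmax over a fixed list of entries, so anything outside that list gets probability zero by construction.

```python
import math

# Toy sketch of a fixed-vocabulary output head. Real models use a
# vocabulary of roughly 50,000 entries; the mechanism is the same.
VOCAB = ["the", "cat", "sat", "<unk>"]  # hypothetical tiny vocabulary

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, -1.0]  # one made-up logit per vocabulary entry
probs = softmax(logits)

# The distribution covers exactly len(VOCAB) outcomes and nothing else:
# a token outside VOCAB can never be emitted, no matter the input.
assert len(probs) == len(VOCAB)
assert abs(sum(probs) - 1.0) < 1e-9
```

However large the vocabulary, the same structural fact holds: the set of emittable tokens is fixed at training time.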
-
@abucci @cthos I'm not sure I entirely understand the point about the tokenization. You can express every English text using just 128 of them if you tokenize on individual characters and just use ASCII (though that's going to be very expensive - but I'm sure it's something some of those data centers will be put to work doing).
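The 128-symbol point can be made concrete (a toy sketch, not any particular model's tokenizer): character-level tokenization over ASCII covers all such text, at the cost of much longer sequences.

```python
text = "All English text fits in a 128-symbol alphabet."

# Character-level "tokenization": one token id per character, and every
# pure-ASCII text uses only ids 0..127, so 128 tokens suffice.
ids = list(text.encode("ascii"))
assert max(ids) < 128

# The cost: the sequence is far longer than a word-level split of the
# same text, which is why character-level training is more expensive.
words = text.split()
assert len(ids) > len(words)
```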
-
@[email protected] @[email protected] Think "word" for "token" (*). There are more than 50,000 words in English. But beyond that, no matter how many tokens it has, if the number is fixed at N, it will never suffice to capture how human beings use language. We invent new words, symbols, etc. fluently as we communicate, which GPT cannot do and may never be able to do.
(*) Really it's more like words, word fragments, and punctuation, but that doesn't matter for the point.
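A toy greedy tokenizer (hypothetical vocabulary, nothing like GPT's actual one) illustrates what a fixed vocabulary does with a freshly coined word: it either shatters it into known fragments or falls back to an unknown token, and the vocabulary itself never grows.

```python
def tokenize(word, vocab, unk="<unk>"):
    """Greedy longest-match split; unmatched characters become <unk>."""
    tokens, i = [], 0
    while i < len(word):
        match = None
        for j in range(len(word), i, -1):  # try longest substring first
            if word[i:j] in vocab:
                match = word[i:j]
                break
        if match:
            tokens.append(match)
            i += len(match)
        else:
            tokens.append(unk)  # no vocabulary entry covers this character
            i += 1
    return tokens

vocab = {"un", "believ", "able", "flux"}  # hypothetical tiny vocabulary

print(tokenize("unbelievable", vocab))  # known word -> known fragments
print(tokenize("fluxzorble", vocab))    # coined word -> fragments + <unk>
```

Real subword schemes are cleverer than this sketch, but they share its defining property: every input is forced through a fixed set of pieces chosen before training.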
-
@abucci @cthos I know what a token is (I've implemented tons of lexers *and* lexer generators). But I think it was Andrej Karpathy who - sometime in 2023 - pointed out that training LLMs on Unicode code point sequences *without* a tokenizer would alleviate some of their problems - but that this would also make training much more expensive.
(I completely agree that predicting the next token or character is not at all what humans do when we communicate. It's not even what parrots do.)
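Tokenizer-free training on code points, as described above, can be sketched like this (an illustration of the encoding only, not of any specific training setup): each character maps directly to its Unicode code point, so no learned vocabulary is needed, but sequences are one id per character and the id space is the whole code point range.

```python
text = "naïve café 🙂"

# Tokenizer-free encoding: each character is represented directly by its
# Unicode code point. No learned vocabulary, but the id space spans the
# full code point range (up to 0x10FFFF) and sequences get even longer.
code_points = [ord(c) for c in text]
assert all(cp <= 0x10FFFF for cp in code_points)

# The encoding is trivially lossless: decoding recovers the exact text.
decoded = "".join(chr(cp) for cp in code_points)
assert decoded == text
```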
-
@[email protected] Training on Unicode code point sequences would not address the problem I raised. It's still a "small world", not a "large world" / open-ended possibility space. That's the key point. @[email protected]
-