@lana @dcm crossed again!
-
-
@UlrikeHahn @dcm Yes, I'm not saying they're hard-coded. I'm responding to your comment about how you feel it would be weird if our assessment of whether there is agency or not were independent of the features of the game. My argument is that it *is* entirely independent of those features, and my example showing this independence between our assessment and game features is the case of a hard-coded system.
-
-
@lana @dcm I think we’re slightly speaking at cross purposes: I agree we can make an assessment of the degree of independence in game play (hard-coded vs. learned, for example), but I was addressing replies that seem to suggest things like ‘they don’t do things for fun’ or ‘they don’t have rich beliefs or desires’ by pointing out that those additional things have no functional role in the game. Playing an Atari game requires only game-relevant ‘beliefs’ or ‘actions’ etc. 1/2
-
@lana @dcm this is one of those Mastodon conversations where some replies won’t be visible to all, so:
I think the questions ‘do LLMs communicate?’ and ‘does a Deep RL agent make game-play decisions?’ seem very different to me, and each has its own empirical and conceptual grounds.
and I still don’t see how one could even describe the Atari game play without invoking notions of planning and decision-making, and I don’t see that any suggestion for how to do so has come up in this overall thread
-
Dimitri Coelho Mollo replied to Ulrike Hahn
@UlrikeHahn @lana I think the answer to this more specific question about game-world beliefs, actions, etc, depends on the details of each system. E.g., a system that outputs a move purely on the basis of dataset probability distributions may not be reasons-responsive in the relevant thick sense. Classical systems using tree-search might be better candidates for this.
In general, I'm not sure I see a special issue brought up by current systems that wasn't there already for classical systems.
-
Ulrike Hahn replied to Dimitri Coelho Mollo
@dcm @lana almost wholly agree. I chose the Mnih et al. (2015) paper as a focus because to me that was the precise moment deep RL blew traditional symbolic approaches to AI out of the water, and the world hasn’t been the same since
so I think they are different to most of what went before, but I absolutely think questions about ‘decision making’ in genAI are not best treated as something new and different, and that was the other reason I chose that specific example.
-
@dcm @lana as an aside, one of the things I personally find baffling about the current LLM debate is the confidence with which it’s proclaimed that “they will never do X, Y or Z”. The entire history of neural computation is littered with the corpses of such pronouncements: Minsky/Papert, Fodor/Pylyshyn, the inability of recurrent networks to scale, the supposed lack of grounding, etc.
I just don’t understand the confidence in light of that history