@UlrikeHahn @dcm Yes, I'm not saying they're hard coded. I'm responding to your comment that it would be weird if our assessment of whether there is agency or not were independent of the features of the game. My argument is that it *is* entirely independent of those features, and my example showing this independence between our assessment and the game's features is the case of a hard-coded system.
@lana @dcm crossed again!
I’m interested in the intuition that #AI systems *don’t* “make decisions”.

@UlrikeHahn If I can deny these to a thermostat, I can probably take it all the way to Atari; I think the same arguments apply. If not, at which point does it become a decision? When I write the code in Python rather than wiring something directly? When I add the first relu function? The 100th?
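To make that continuum concrete, here is a minimal, purely illustrative Python sketch (the function names and the 20° setpoint are hypothetical, not anything from the thread): the same thermostat rule written first as a bare comparison and then re-expressed with a single relu unit, so the only question left is at which rewrite it would start to count as a "decision".

```python
# Illustrative sketch of the thermostat-to-relu continuum (assumed names/values).

def thermostat_hardwired(temp, setpoint=20.0):
    # The "hard-coded" end: a single comparison. Is this a decision?
    return temp < setpoint  # True -> heater on


def relu(x):
    # Standard rectified linear unit.
    return max(0.0, x)


def thermostat_one_relu(temp, setpoint=20.0):
    # The same behaviour re-expressed with one relu "unit".
    # Nothing about the output changes; only the notation does.
    return relu(setpoint - temp) > 0.0


# Stacking 100 such units (or training them on Atari frames) changes the
# complexity of the mapping, not obviously the kind of thing being done.
print(thermostat_hardwired(18.0), thermostat_one_relu(18.0))  # True True
print(thermostat_hardwired(22.0), thermostat_one_relu(22.0))  # False False
```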
It reminds me of the misunderstanding about LLMs and language: they can convincingly reproduce complex probability distributions, but they have no communicative intent (the most basic function of language).
I’m interested in the intuition that #AI systems *don’t* “make decisions”.

@UlrikeHahn I'm pretty sure it's a question of whether you're granting 'agency' to an artificial system or not, rather than whether the output "looks like" a decision or not. An extreme example would be whether a thermostat is making decisions. The answer depends on your definition of agency.