I’m interested in the intuition that #AI systems *don’t* “make decisions”.
-
@UlrikeHahn I'm pretty sure it's a question of whether you're granting 'agency' to an artificial system or not, rather than whether the output "looks like" a decision or not. An extreme example would be whether a thermostat is making decisions. The answer depends on your definition of agency.
-
@lana thanks! Phrased in those terms, I can’t see what wedge one could drive between ‘it looks like it’s playing an Atari game’ and ‘it is playing an Atari game’ that would let one deny game play or decision making?
-
@lana @dcm has raised elsewhere that one could have thinner (thermostat) or thicker (human) concepts of decision making, the thicker ones including beliefs, desires, intentions, planning capacities, and sensitivity to reasons. Can you meaningfully deny those for the Atari domain, given that the relevant ‘beliefs’, ‘intentions’, and ‘plans’ involve only game states in the first place?
-
@UlrikeHahn If I can deny these to a thermostat, I can probably take it all the way to Atari; I think the same arguments apply. If not, at which point does it become a decision? When I write the code in Python rather than wiring something directly? When I add the first ReLU function? The 100th?
It reminds me of the misunderstanding about LLMs and language: they can convincingly reproduce complex probability distributions, but have no communicative intent (the most basic function of language).
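To make that continuum concrete, here is a minimal Python sketch (purely illustrative; the setpoint, weights, and names are all invented): the same on/off policy written once as a wired-in rule and once as a one-hidden-unit ReLU ‘network’.

```python
# Purely illustrative sketch: the same on/off policy as a wired-in rule
# and as a one-hidden-unit ReLU 'network'. Setpoint, weights, and names
# are invented for the example.
import numpy as np

SETPOINT = 20.0  # degrees Celsius

def thermostat_rule(temp: float) -> bool:
    """The 'wired directly' end of the continuum: one fixed rule."""
    return temp < SETPOINT

def relu(x):
    return np.maximum(0.0, x)

# Weights hand-set so the network computes exactly the same policy.
w1, b1 = -1.0, SETPOINT   # hidden pre-activation: SETPOINT - temp
w2 = 1.0

def thermostat_net(temp: float) -> bool:
    hidden = relu(w1 * temp + b1)   # positive iff temp < SETPOINT
    return (w2 * hidden) > 0.0

for t in (18.0, 20.0, 22.0):
    assert thermostat_rule(t) == thermostat_net(t)
```

Nothing about the input/output behaviour changes between the two versions; only the implementation substrate does, which is what makes the ‘first ReLU’ question hard to answer.
-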
@lana @dcm this question feels different from our recent discussions about language, inasmuch as the point of departure there is a full human exchange, and it seems straightforward to ask how much of that is present in the computational system. But for the Atari domain, the same ‘move’ feels like making the assessment of “decision-making” or “planning” in the game dependent on features wholly irrelevant to the game. That seems weird.
-
@lana we just crossed posts again, and I just explained how and why I think it’s different from the LLM and language case ;-). One key difference vis-à-vis the thermostat, though, is that the deep RL agent is learning from pixel-level input, so it is learning its own representations, and both its input and response spaces are vast (so vast as to be qualitatively different, in my view).
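A minimal sketch of the kind of pixel-to-action mapping at issue (random weights stand in for a trained network and the layer sizes are invented; the frame and action-set dimensions are the standard Atari 2600 ones):

```python
# Sketch of a pixel-to-action mapping. Random weights stand in for a
# trained network; layer sizes are invented, but the frame and
# action-set dimensions are the standard Atari 2600 ones.
import numpy as np

H, W, C = 210, 160, 3      # one raw Atari frame
N_ACTIONS = 18             # the full Atari joystick action set

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(H, W, C)).astype(np.float32) / 255.0

# Pixels -> learned features -> one value per candidate action.
W1 = rng.normal(0.0, 0.01, size=(H * W * C, 64))
W2 = rng.normal(0.0, 0.01, size=(64, N_ACTIONS))

features = np.maximum(0.0, frame.reshape(-1) @ W1)  # ReLU features
action_values = features @ W2
action = int(np.argmax(action_values))              # greedy pick

print(f"{H * W * C} input dimensions -> {N_ACTIONS} actions; picked {action}")
```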
-
@lana @dcm the discontinuity between the problems faced by the thermostat and the Atari player is precisely that one seems generative and the other does not. I don’t think you can play Atari games (let alone Go) by enumeration, and the whole point of these systems is that they no longer go the brute-force search route of Deep Blue?
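A back-of-envelope count makes the enumeration point vivid (screen dimensions and palette size are the published Atari 2600 specs; Shannon’s ~10^120 game-tree estimate for chess is the standard comparison):

```python
# Back-of-envelope: why enumeration is off the table for pixel-level
# play. Screen size and palette are the published Atari 2600 specs;
# the count is an upper bound on distinct screens, but the point
# survives any reasonable correction.
from math import log10

pixels = 210 * 160           # 33,600 pixels per frame
colours = 128                # NTSC palette size
screens_log10 = pixels * log10(colours)
print(f"~10^{screens_log10:.0f} possible raw screens")   # ~10^70,802

# For comparison: Shannon's classic estimate of the chess game tree is
# ~10^120, and even that Deep Blue handled by search, not enumeration.
```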
-
@UlrikeHahn
Rule execution? (no reaction-time delays)
-
Is there a “decision” involved in interpreting the pixel array to determine what rule applies?
-
@icastico @UlrikeHahn If the same rule applies every time and there is no variation in reaction time, probably not…
-
@knutson_brain @icastico I have a number of disorganized thoughts about ‘rule execution’. There’s a space where it coincides with intuition as long as the rule is ‘built in’, inasmuch as it’s then wholly the programmer who has created the choice points. I’m already much less sure if the rule hasn’t itself been programmed by an external source. And while variability is an operational indicator of flexibility that seems relevant, it seems both too weak and too strong. 1/2
-
@UlrikeHahn @icastico …we may have different implicit definitions of ‘rule’… coming from the brain perspective, mine is based on following the output of a symbolically defined formulation (which would likely require an intact hippocampus/DLPFC)…
-
@knutson_brain @icastico and does that make it a ‘decision’ or not?
-
@knutson_brain @icastico and in your first post you spoke of rule ‘execution’ but now speak of ‘rule following’. It’s the latter notion you are invoking in both cases, i.e., there is a representation we would describe as a rule, and that representation plays a causal role in the system? (distinguishing rule following from merely rule-describable behaviour)
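The distinction can be made concrete in a toy sketch (invented names and thresholds; both functions behave identically, but only the first contains a rule token that does causal work):

```python
# Toy sketch of the distinction: both systems behave identically, but
# only the first contains a rule representation that does causal work
# in producing the behaviour. Names and thresholds are invented.

RULES = {"too_cold": "heat_on", "too_hot": "heat_off"}  # explicit rule store

def rule_follower(temp: float) -> str:
    """Consults a stored rule token; edit RULES and behaviour changes."""
    condition = "too_cold" if temp < 20.0 else "too_hot"
    return RULES[condition]

def rule_describable(temp: float) -> str:
    """No rule is represented anywhere; the mapping is simply wired in,
    as a bimetallic strip 'implements' the rule without storing it."""
    return "heat_on" if temp < 20.0 else "heat_off"

assert all(rule_follower(t) == rule_describable(t) for t in (15.0, 25.0))
```

Edit the RULES table and the first system’s behaviour tracks the change; the second can only be redescribed, not re-instructed, which is one way of cashing out ‘the representation plays a causal role’.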
-
Again, is there a decision point involved in determining that the current situation is an example where rule X applies rather than rule Z?
-
@icastico @UlrikeHahn yes, but once the rule is the default, you’re no longer “deciding” (interestingly, this maps onto some risky-choice #fmri data we have)…
-
@knutson_brain @icastico interestingly, none of these distinctions really map onto the things people in JDM (judgment and decision making) talk about. That literature is primarily about deviations from EUT (expected utility theory), and to a lesser extent game theory, with a smattering of work on emotions thrown in. I also don’t read the bulk of the neuroeconomics literature as deeply concerned with this: for at least one branch of it, theories of rational choice are simply an organizing framework.
-
@UlrikeHahn @icastico
Yes, we started from the EV (expected value) framework, but as with other psychological processes (e.g., memory), multiple neural components seemed to be necessary, starting with affective responses. Our best back-of-the-napkin predictive framework for #FMRI is the #AIMframework, sketched in Figure 1 at the link below, but now we need to do a proper review of the evidence...
-
@UlrikeHahn @icastico
Optimistically, from a #neuroeconomics standpoint, I hope a neural decomposition of the decision process will improve existing theories.
-
@UlrikeHahn @icastico
Plus, it would be fun to predict both "rational" AND "irrational" choice...