I’m interested in the intuition that #AI systems *don’t* “make decisions”.
-
@lana @dcm this question feels different from our recent discussions about language, in as much as their point of departure is a full human exchange, and it seems straightforward to ask how much of that is present in the computational system. But in the Atari domain, the same ‘move’ amounts to making the assessment of “decision-making” or “planning” in the game dependent on features wholly irrelevant to the game. That seems weird.
-
@lana we just crossed posts again, and I just explained how and why I think it’s different from the LLM and language case ;-). One key difference vis-à-vis the thermostat, though, is that the deep RL agent is learning from pixel-level input, so it is learning its representations, and both its input and response spaces are vast (so vast as to be qualitatively different, in my view).
-
@lana @dcm the discontinuity in the problem faced by the thermostat and the Atari player is precisely that one seems generative and the other does not. I don’t think you can play Atari games (let alone Go) by enumeration, and the whole point of these systems is that they no longer go the brute-force search route of Deep Blue?
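The scale gap this post gestures at can be made concrete with back-of-the-envelope arithmetic. The Atari 2600 screen figures (210 × 160 pixels, 128-colour palette) are standard; the thermostat numbers are an illustrative idealisation, not from any real device:

```python
# Contrast the thermostat's input space with an Atari agent's pixel-level
# input space, to show why enumeration is a non-starter for the latter.
import math

# Thermostat, idealised: one temperature reading to the nearest 0.1 degree
# over a 50-degree range -- a few hundred inputs, trivially enumerable.
thermostat_inputs = 50 * 10  # 500 discrete readings

# Atari: distinct colourings of a single frame.
pixels = 210 * 160       # 33,600 pixels per frame
palette = 128            # colours per pixel
log10_atari_inputs = pixels * math.log10(palette)  # log10 of 128**33600

print(f"thermostat inputs: {thermostat_inputs}")
print(f"Atari single-frame inputs: ~10^{log10_atari_inputs:.0f}")
# The exponent is ~70,000 -- a number of states with tens of thousands of
# digits, before even considering frame sequences. This is why brute-force
# enumeration over raw inputs is hopeless and the agent must instead learn
# a compressed representation of the input space.
```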
-
@UlrikeHahn
Rule execution? (no reaction time delays)
Is there a “decision” involved in interpreting the pixel array to determine what rule applies?
-
@icastico @UlrikeHahn If the same rule applies every time and there is no variation in reaction time, probably not…
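A minimal sketch of what “the same rule applies every time, with no variation” could look like in the thermostat case; the rule and its threshold are invented for illustration:

```python
# Pure 'rule execution' in the sense discussed above: the rule is fixed,
# hand-written by the programmer, and maps the same input to the same
# output every time, with no internal variability.

def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> str:
    """Hand-coded rule: heat if and only if below the setpoint."""
    return "heat_on" if temperature_c < setpoint_c else "heat_off"

# Determinism: repeated calls with the same reading never vary, which is
# the operational sense in which nothing is being 'decided' here.
assert all(thermostat(18.5) == "heat_on" for _ in range(1000))
assert thermostat(21.0) == "heat_off"
```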
-
@knutson_brain @icastico I have a number of disorganized thoughts about ‘rule execution’. There’s a space where it coincides with intuition so long as the rule is ‘built in’, in as much as it’s then wholly the programmer who has created the choice points. I’m already much less sure if the rule hasn’t itself been programmed by an external source. And while variability is an operational indicator of flexibility that seems relevant, it seems both too weak and too strong. 1/2
-
@UlrikeHahn @icastico …we may have different implicit definitions of ‘rule’… coming from the brain perspective, mine is based on following the output of a symbolically defined formulation (which would likely require an intact hippocampus / DLPFC)…
-
@knutson_brain @icastico that does or does not make it a ‘decision’?
-
@knutson_brain @icastico and in your first post you spoke of rule ‘execution’ but now speak of ‘rule following’ - it’s the latter notion you are invoking in both cases, i.e., there is a representation we would describe as a rule, and that representation plays a causal role in the system? (distinguishing rule following from merely rule-describable behaviour)
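The rule-following vs. merely rule-describable distinction can be sketched with two toy systems that behave identically, where only one carries an explicit, causally load-bearing representation of its rule. All names and the parity rule itself are invented for illustration:

```python
# System A: rule-following. The rule exists as a manipulable
# representation inside the system; editing RULE changes the behaviour,
# so the representation plays a causal role.
RULE = {"predicate": lambda n: n % 2 == 0, "action": "accept"}

def system_a(n: int) -> str:
    return RULE["action"] if RULE["predicate"](n) else "reject"

# System B: merely rule-describable. A precomputed lookup table yields
# the same outputs, but no rule is represented anywhere inside it --
# the rule only figures in OUR description of its behaviour.
TABLE = {n: ("accept" if n % 2 == 0 else "reject") for n in range(100)}

def system_b(n: int) -> str:
    return TABLE[n]

# Behaviourally indistinguishable on this domain...
assert all(system_a(n) == system_b(n) for n in range(100))
# ...yet only system A follows the rule; system B merely conforms to it.
```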
-
Again, is there a decision point involved in determining that the current situation is an example where rule X applies rather than rule Z?
-
@icastico @UlrikeHahn yes, but once the rule is the default, you’re no longer “deciding” (interestingly this maps onto some risky choice #fmri data we have)…
-
@knutson_brain @icastico interestingly, none of these distinctions really map onto things people in JDM talk about. That literature is primarily about deviations from EUT (and, to a lesser extent, game theory), with a smattering of work on emotions thrown in. I also don’t read the bulk of the neuroeconomics literature as deeply concerned with this - for at least one branch of it, theories of rational choice are simply an organizing framework
-
@UlrikeHahn @icastico
Yes, we started from the EV framework, but as with other psychological processes (e.g., memory), multiple neural components seemed to be necessary, starting with affective responses. Our best back-of-the-napkin predictive framework for #FMRI is the #AIMframework, sketched in Figure 1 at the link below, but now we need to do a proper review of the evidence...
-
@UlrikeHahn @icastico
Optimistically, from a #neuroeconomics standpoint, I hope a neural decomposition of the decision process will improve existing theories.
-
@UlrikeHahn @icastico
Plus, it would be fun to predict both "rational" AND "irrational" choice...
-
@knutson_brain @icastico thanks, will read! I guess ‘predicting the irrational along with the rational’ is what most models in JDM would describe themselves as doing, but few are about process, or even mechanism, in any deep sense (prospect theory being the classic example)
-
@knutson_brain @icastico cool paper!
-
@knutson_brain @icastico one thing that is nevertheless surprising to me in all this (now that you’ve brought up rules) is that it’s not as if psych didn’t spend decades obsessed with behavioural tests of rule following; it’s just that none of that ever connected to JDM