Again, is there a decision point involved in determining that the current situation is an example where rule X applies rather than rule Z?
Posts
-
I’m interested in the intuition that #AI systems *don’t* “make decisions”. Is there a “decision” involved in interpreting the pixel array to determine what rule applies?
-
Do #LLMs have mental states? A blurge follows. Apologies.
We, of course, only have objective access to the words produced (just like with people) - but I think there is evidence that the LLM’s responses are not communicatively grounded.
Yes. It gave you a shorter answer based on your explicit instructions, and it was able to incorporate a new vocabulary word that you defined for it. That is literally what LLMs are designed to do. It mapped an association between “blurge” and a semantic network of terms for “too much info”. It is an open question whether that counts as “interpretation” in the sense I was getting at. It was able to represent the meaning of your text.
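That kind of mapping can be sketched without any “interpretation” at all. Here is a toy illustration of a novel word landing in a semantic neighborhood by vector similarity - the vectors and vocabulary are invented for illustration, not how any particular LLM actually stores meaning:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; a real model learns these from text.
vocab = {
    "info-dump":    [0.9, 0.1, 0.0],
    "wall-of-text": [0.7, 0.3, 0.1],
    "concise":      [0.1, 0.9, 0.0],
}

# Representation induced for "blurge" by the user's in-context definition.
blurge = [0.85, 0.15, 0.05]

# The novel word is simply nearest to the "too much info" cluster -
# pure geometry, no understanding required.
nearest = max(vocab, key=lambda w: cosine(blurge, vocab[w]))
print(nearest)
```

The point of the sketch: associating “blurge” with “too much info” terms falls out of pattern statistics alone, which is why it doesn’t settle the interpretation question.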
For the “why” question - it’s a mixed bag. It started with a semantically and contextually appropriate response - again doing what it is designed to do - then it started confabulating and talking about its feelings and motivation. In a human-to-human interaction, those assertions would be appropriate - but in the context of the LLM talking to a human, it seems contextually inappropriate in two ways. First, it ignores your stated preference to avoid blurges; second, it attributes feelings and motivation to itself which it is unlikely to have.
One of the keys to communicative grounding is the “quality”/veracity assumption (from Grice) - I assume you are telling me what you believe to be true. If you are flouting that assumption, I look for a reason why you are telling me an untruth and interpret based on that (e.g., you are saving face, telling a joke, trying to deceive). When the LLM attributes human emotions to itself, it flouts the veracity assumption - but why? If it were communicatively grounded, the reason should be apparent and I should be able to interpret its “actual” meaning (knowing that we both know it isn’t emotional). Maybe it is trying to make me feel comfortable with its alien nature. Maybe it is telling a joke. Whatever interpretation I come up with, it requires that I attribute motivation to the LLM and that I attribute to it the capacity for interpretation of its utterances.
But the other option is to assume it is simply and mindlessly filling in patterns. That seems more likely. And that would not be communicatively grounded.
-
Did it? If so, it seems it would have avoided the second blurge. I don’t see evidence that it interpreted the meaning of “blurge” - hence my skepticism.
-
Incorporating your word, your name, or your input is how the predictive text generation works. It responded to a specific prompt for a more concise response and repeated the terminology you defined back to you. That is one of the patterns it was trained on. That’s impressive. People do that.
But the fact that its explanation included “feelings”-based reasoning - feelings which it doesn’t have - exposes what it’s doing: it is “mindlessly” completing patterns. It is mimicking responses from text produced by people who have communicative grounding - but it isn’t collaboratively working towards mutual understanding the way those people were.
Edit: also, the second blurge - rather than a concise “by design my default response is comprehensive” - shows the lack of communicative grounding related to that novel word.
-
It seems a great example of how the LLM is not collaborating actively - it is simply filling in the blanks using vacuous patterns. That second blurge about why it blurges shows how its understanding of the conversation is particularly lacking. Do you think the LLM “fears” anything? Is it really telling you something about itself to guide you to mutual understanding?
-
It is not internally driven. Kids learn through RLHF too - but they also learn through RLSF (s = self or subjective). They evaluate/interpret their own performance and knowledge. I don’t think LLMs do that. And, to me, that seems essential to claiming a “mind” - I see the paper Dimitri posted puts a lot more meat on these issues.
I find the “communicative grounding” issue particularly interesting, as it gets to the heart of the matter - it seems. LLMs can’t collaboratively engage because they don’t bring their own perspective/intentions/desires to the table - and that shortcoming makes it hard, intuitively, to credit them with their own knowledge/understanding - and hence a mind that could have mental states.
-
Even young children evaluate the result of their intentional acts to see if they were successful and learn from that experience. Their experiences change the algorithm they use the next time based (at least in part) on internal adjustments - not exclusively on external prompts.
-
I don’t think I would take it that far. Kids certainly interpret and know before they know they know. If you know what I mean.
-
This gets back to that term “interpret” - an LLM that makes a mistake doesn’t evaluate and correct based on examination of its own output; it responds to a new prompt indicating an error with the same “fill-in-the-blank” algorithm - perhaps informed by new information (if that was included in the new prompt). Since the LLM doesn’t know what the representation means, it can’t reinterpret its output without guidance from the user. An LLM wouldn’t re-examine its answer and go “oh wait - I forgot to carry the 1” because it doesn’t know what its answer means.
At least that is what I infer from the output I see from these systems.
-
Modern thermostats certainly have a representation of temperature comparable to what an LLM would have. A sensor’s output would be converted to a digital representation of the ambient temperature, and this would be fed to an algorithm for dynamic responses that can also include user preferences and time of day.
-
I am thinking along the lines of Searle - some sense that the LLM “knows” stuff. Representation isn’t equivalent to “knowing” in this framing. Following a complex computational algorithm isn’t the same as knowing.
-
My gut is that you need an epistemological component as part of the “mind” for there to be “interpretation”.
-
An industrial robot does stuff as well. Does it have mental states? The thermostat example above applies too - it changes/reacts/acts based on “perception”/input from the environment.
I don’t know the answer here, but the ambiguity around definitions of “mental states” makes it a tricky topic. If “aboutness” plus “action” is all you need, then maybe LLMs have mental states - but then so do lots of dynamic systems that we wouldn’t generally consider to be part of this conversation.
-
To follow up - I don’t see any evidence that LLMs interpret or understand what they write. There is plenty of evidence that they don’t - based solely on their output.
-
Seems to me that if you are using intentionality as your marker of mental states, you have to determine whether the LLM - which does create a representation of things in the world - has a way to interpret them as representations. Otherwise, you are left with asking whether the text of a book, which represents something in the world, is a mental state.
-
This morning's #workout #music is from Mulatu Astatke live in 2023 in Paris. Astatke's music from the last quarter of the 20th century has been hugely influential in the first quarter of the 21st century #Ethiopiques #EthioJazz #JazzFusion #Jazz #WorldMusic #Vibraphone
-
Things that people on Mastodon have sent me IRL: old laptops. I was born in NM and can affirm that a gift of NM chiles is a sign of respect.