@twsh @dingemansemark @davidschlangen @dcm 8/8 I think no coherent view of language is possible without acknowledging its existence and LLMs can simply drive into that.
Sorry this reply was so long, but I couldn’t really make the argument more concisely than this, given that my previous attempts had failed to convince.
-
@UlrikeHahn @twsh @dingemansemark @dcm Yeah, I don’t disagree. (I think we had an earlier exchange about this.) Of course the sentences that are produced mean something (as do the traces of the ants in the sand that happen to spell something out); the question is whether it’s useful to assume that someone means something with them.
-
David Schlangen replied to David Schlangen
@UlrikeHahn @twsh @dingemansemark @dcm
That’s why in my own work I’m concerned with situated systems that have a limited range of actual non-linguistic actions available. They are then at least weakly “committed” to doing what they should be able to do, in accordance with the language that happened. (They leave the language space, if you will.) And if they don’t do it, we at least know that they’re broken. That’s much harder to do if it’s just question answering, text summaries, etc.
-
@davidschlangen @twsh @dingemansemark @dcm agree! that's also one good way of putting why I think it matters that ChatGPT-4o can segment images and draw! (causal theories of reference and meaning being another)
-
@UlrikeHahn @dingemansemark @davidschlangen @dcm I am very sympathetic to this line of thought. Here is an objection I anticipate.
It's one thing to say that not all uses of language must be X (where X is whatever is supposed to be what an LLM can't do and humans can). It's another to say that not all users of language must be at least capable of uses of language that are X.
I think some people would accept the former, weaker claim and still want to reject the latter, stronger claim.
-
David Schlangen replied to Thomas Hodgson
@twsh @UlrikeHahn @dingemansemark @dcm That’s a great way of putting it!
And that is an issue that I think stands orthogonal to how AI systems are trained and evaluated: e.g., it’s not enough to produce true statements 90% of the time when the remaining 10% are obvious contradictions of, or misapplications of, the concepts used in the 90%.
There’s something holistic about language use that we never had to fully appreciate, because there never were “atomistic” language producers with isolated abilities.
-
@davidschlangen @twsh @dingemansemark @dcm on the “must at least be capable of assertion” point, definitely. How compelling that position is, I guess, will depend on the following (?):
1. what proportion of extant language is, in some sense, def-comm already?
2. how far from full assertion is it, if assertion can come in degrees?
3. have we already accepted assertion-less communicators elsewhere (other machine-generated output, for example)?
-
@davidschlangen @twsh @dingemansemark @dcm one more thing I wonder about is the extent to which listener properties also matter.
the ‘no semantic meaning’ position also feels at odds to me with the fact that semantic processing seems automatic: telling me an LLM output doesn’t have semantic meaning, when I not only feel like I just assigned it a meaning but couldn’t even have stopped myself from doing so, seems weird.
How much like that is assertion?