@twsh @UlrikeHahn @dingemansemark @dcm That’s a great way of putting it!
And that is an issue that I think is orthogonal to how AI systems are trained and evaluated: e.g., it’s not enough to produce true statements 90% of the time if the remaining 10% obviously contradict, or misapply, the very concepts used in the 90%. (A small sketch of this evaluation gap follows below.)
There’s something holistic about language use that we never had to fully appreciate, because there never were “atomistic” language producers with isolated abilities.
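To make the evaluation point concrete, here is a minimal Python sketch, under assumptions that are not in the thread: the statements, the contradiction table, and the scoring are all hypothetical stand-ins (a real setup might use an NLI model for `contradicts`). It shows how per-statement accuracy can look fine while cross-statement coherence fails.

```python
# Minimal sketch: per-statement accuracy vs. cross-statement coherence.
# Everything here (statements, the contradiction table) is a hypothetical
# stand-in; a real evaluation might use an NLI model for `contradicts`.
from itertools import combinations

CONTRADICTIONS = {
    frozenset({"the block is red", "the block is not red"}),
}

def contradicts(a: str, b: str) -> bool:
    return frozenset({a, b}) in CONTRADICTIONS

def evaluate(outputs, truths):
    # Per-statement "truthfulness": fraction matching the reference.
    accuracy = sum(o == t for o, t in zip(outputs, truths)) / len(outputs)
    # Coherence: fraction of output pairs that do not contradict each other.
    pairs = list(combinations(outputs, 2))
    coherence = sum(not contradicts(a, b) for a, b in pairs) / len(pairs)
    return accuracy, coherence

outputs = ["the block is red", "the bowl is blue", "the block is not red"]
truths  = ["the block is red", "the bowl is blue", "the block is red"]
print(evaluate(outputs, truths))  # (0.67, 0.67): mostly "true", yet self-contradictory
```

A checker along these lines penalizes exactly the failure described above: statements that are individually plausible but jointly incoherent.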
-
@twsh @dingemansemark @davidschlangen @dcm 8/8 I think no coherent view of language is possible without acknowledging its existence and LLMs can simply drive into that.
-
@UlrikeHahn @twsh @dingemansemark @dcm
That’s why in my own work I’m concerned with situated systems that have only a limited range of actual non-linguistic actions available. They are then at least weakly “committed” to doing what, according to the language that happened, they should be able to do. (They leave the language space, if you will.) And if they don’t do it, we at least know that they’re broken. That’s much harder to establish if all they do is question answering, text summarization, etc.
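A minimal sketch of what that weak-commitment check could look like, with entirely hypothetical names and a toy utterance-to-action mapping (the thread doesn’t specify any implementation): because the agent’s non-linguistic repertoire is small and known, an utterance that commits it to an action is verifiable against what it actually did.

```python
# Toy sketch of verifying a situated agent's linguistic commitments.
# All names and the utterance-to-action mapping are illustrative assumptions.
from typing import Optional

AVAILABLE_ACTIONS = {"pick_up", "put_down", "point_at"}  # limited repertoire

def commitment_from_utterance(utterance: str) -> Optional[str]:
    """Map an utterance to the non-linguistic action it commits the agent to."""
    if "pick up" in utterance:
        return "pick_up"
    if "put down" in utterance:
        return "put_down"
    return None  # purely linguistic move, nothing to verify

def honored(utterance: str, performed: Optional[str]) -> bool:
    """True iff the agent did what its own language committed it to."""
    committed = commitment_from_utterance(utterance)
    return committed is None or performed == committed

print(honored("I'll pick up the cup", "pick_up"))  # True: commitment met
print(honored("I'll pick up the cup", None))       # False: we know it's broken
```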
-
@UlrikeHahn @twsh @dingemansemark @dcm Yeah, I don’t disagree. (I think we had an earlier exchange about this.) Of course the sentences that are produced mean something (as do the traces of the ants in the sand that happen to spell something out); the question is whether it’s useful to assume that someone means something with them.