A lot of the current hype around LLMs revolves around one core idea, which I blame on Star Trek:
Wouldn't it be cool if we could use natural language to control things?
The problem is that this is, at the fundamental level, a terrible idea.
There's a reason that mathematics doesn't use English. There's a reason that every professional field comes with its own flavour of jargon. There's a reason that contracts are written in legalese, not plain natural language. Natural language is really bad at being unambiguous.
When I was a small child, I thought that a mature civilisation would evolve two languages: a language of poetry that was rich in metaphor and delighted in ambiguity, and a language of science that required more detail and actively avoided ambiguity. The latter would have no homophones, no homonyms, unambiguous grammar, and so on.
Programming languages, including the ad-hoc programming languages that we refer to as 'user interfaces', are all attempts to build languages like the latter. They allow the user to unambiguously express intent so that it can be carried out. Natural languages are not designed and end up being examples of the former.
When I interact with a tool, I want it to do what I tell it. If I am willing to restrict my use of natural language to a clear and unambiguous subset, I have defined a language that is easy for deterministic parsers to understand, with a fraction of the energy requirement of a language model. If I am not, then I am expressing myself ambiguously, and no amount of processing can possibly remove the ambiguity that is intrinsic in the source. The only thing that could is a complete, fully synchronised model of my own mind that knows what I meant (and not what some other person saying the same thing at the same time might have meant).
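To make that concrete, here is a minimal sketch of such a restricted, unambiguous subset, written in C; the grammar, verbs, and function names are invented purely for illustration. Every accepted input has exactly one meaning, and anything outside the grammar is rejected rather than guessed at.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical grammar:  command := "play" <name> | "volume" <0-100>
 * Every valid input has exactly one parse; everything else is an error. */
static int handle_command(const char *input)
{
    char verb[16], arg[64];

    if (sscanf(input, "%15s %63s", verb, arg) != 2) {
        fprintf(stderr, "error: expected '<verb> <argument>'\n");
        return -1;
    }
    if (strcmp(verb, "play") == 0) {
        printf("playing '%s'\n", arg);
        return 0;
    }
    if (strcmp(verb, "volume") == 0) {
        char *end;
        long level = strtol(arg, &end, 10);
        if (*end != '\0' || level < 0 || level > 100) {
            fprintf(stderr, "error: volume must be a number from 0 to 100\n");
            return -1;
        }
        printf("setting volume to %ld\n", level);
        return 0;
    }
    fprintf(stderr, "error: unknown verb '%s'\n", verb);
    return -1;
}

int main(void)
{
    handle_command("play jazz");        /* exactly one meaning         */
    handle_command("volume 30");        /* exactly one meaning         */
    handle_command("something mellow"); /* rejected, never guessed at  */
    return 0;
}
```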
The hard part of programming is not writing things in some language's syntax; it's expressing the problem in a way that lacks ambiguity. LLMs don't help here; they pick an arbitrary, nondeterministic option for the ambiguous cases. In C, compilers do this for undefined behaviour and it is widely regarded as a disaster. LLMs are built entirely out of undefined behaviour.
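A minimal sketch of that undefined-behaviour analogy (a hypothetical function; exactly what any given compiler does will vary): signed integer overflow is undefined in C, so it is the compiler, not the author, that decides what the ambiguous case means.

```c
#include <limits.h>
#include <stdio.h>

/* Asks "would x + 1 overflow?". Because signed overflow is undefined,
 * an optimising compiler may assume it never happens, treat x + 1 > x
 * as always true, and quietly fold this function to 'return 0;'. */
int about_to_wrap(int x)
{
    return x + 1 <= x;
}

int main(void)
{
    /* With optimisation enabled this commonly prints 0 even for INT_MAX:
     * the ambiguity was resolved arbitrarily, behind the author's back. */
    printf("%d\n", about_to_wrap(INT_MAX));
    return 0;
}
```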
There are use cases where getting it wrong is fine. Choosing a radio station or album to listen to while driving, for example. It is far better to sometimes listen to the wrong thing than to take your attention away from the road and interact with a richer UI for ten seconds. In situations where your hands are unavailable (for example, controlling non-critical equipment while performing surgery, or cooking), a natural-language interface is better than no interface. It's rarely, if ever, the best.
-
André Polykanine replied to David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall Although I get your point, you seem to miss a very important thing: there are lots and lots and lots of people who simply *can't* go beyond natural language. I know many people with little to no math knowledge who were exposed to computers and modern technologies in their fifties. Would they be able to formulate proper requests, as you suggest? Maybe, after an extremely steep learning curve, lots of tears, sometimes panic attacks and such. Do we want this, say, for our parents? Probably not. At least, I don't. Being a developer myself, I wish my mother had an interface best suited to her needs.
-
@menelion @david_chisnall
"Language" doesn't have to mean spoken language. Look at old UI designs - most of the key elements carry an obvious meaning, often conveyed through visual means as much as words. This, too, is language. The crucial thing here is that it's unambiguous. Knowing the design elements enables the user to predict what an action is supposed to do and a developer to understand what their users would want to happen. It's become rather fashionable to break that pattern, though...
-
@DL1JPH @menelion @david_chisnall I once had a translation job where it looked like the marketing department got their hands on the UI strings and decided that the thing had to sound jovial and human, so that the users could "relate" to it.
No screenshots, no context, no access to the development version of the program.
Translating that garbage was tough: lots of creative guessing about what they might actually mean.
-
Shannon Clark replied to David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall exactly.
And further, language is deeply contextual, and not just about the previous words: it depends on the context of a conversation, on social status, and on physical/visual and other sensory cues that inform how we interpret words, as does what each party knows about the others.
And even with all those other contextual cues, people misinterpret each other all the time (was that person I met just now hinting that they like me? Is my partner serious just now, or being playful?).
-
clacke: exhausted pixie dream boy replied to David Chisnall (*Now with 50% more sarcasm!*)
-