@inthehands It's also the case that, if I ask someone to do something, at some point any ambiguity in my request gets resolved into unambiguous actions. That's not a machine-human threshold; differences of interpretation are normal in human communication.
What you've called a "tantalizing false promise" in the thread, expecting from LLMs "the same kind of common-sense interpretation we expect of humans", is a *true* promise. I expect, and see, LLMs making human-like coding mistakes all the time.