Still thinking about this post from @tess. (See the threaded follow-up for explanation!)
-
Sheldon Chang 🇺🇸 replied to Paul Cantrell
@inthehands every time I get asked if I'm nervous about some random technology replacing my skills as a developer, I laugh. The tech isn't the roadblock. The humans are. Until humans are better at humaning, I'll always have a job understanding what the humans really mean when they say they want this vs that.
I got into doing tech work in an unusual way. I was a physical therapist before, and I spent my days hearing people's theories on why their knee hurt. It was the weather. They sat in the wrong chair 10 years ago. They took the wrong supplement. It's because they wore shorts on a cold day... and then after all that I'd conclude they actually had a hip or a back problem or needed new shoes.
Figuring out what humans want from technology has actually been a lot easier than understanding humans well enough to help them understand how their own bodies work. I had no certifications then and still have none, but I leaned hard into my interviewing skills to gain an edge as a late starter.
-
An LLM will •randomly• turn a prompt into some unambiguous machine behavior that has some probability of fitting the prompt. And — worse! — it reapplies that randomness afresh every time.
That’s not necessarily a bad thing. Randomness can be useful. Generating random possibilities can be useful.
But I don’t think that’s what people have in mind when they imagine replacing programming with generative AI.
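A toy sketch of that point in C (illustrative only; nothing here models a real LLM): the "prompt" is fixed, but the program draws fresh randomness on every run, so the mapping from input to behavior is resampled each time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Toy illustration, not an LLM: the same fixed "prompt" is mapped
       to a behavior by sampling, and the randomness is drawn afresh on
       every run, so repeated invocations can disagree with each other. */
    int main(void) {
        const char *behaviors[] = {
            "sort ascending", "sort descending",
            "sort by name", "leave the list as-is"
        };
        srand((unsigned)time(NULL));      /* fresh seed each run */
        printf("prompt: \"sort the list\" -> %s\n",
               behaviors[rand() % 4]);
        return 0;
    }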
10/
-
Say what you will about software, but at least it’s consistent: right or wrong, at least it will follow the same instructions in the same unambiguous way.
Say what you will about the law, but at least it’s human: we constantly renegotiate it, reshape it as we live it, hot patch it as we follow it.
LLMs are neither of these things. I don’t think we have good intuitions about what they are. A lot of sloppy thinking is filling in the gaps, and investors are capitalizing on that.
/end
-
𝓓 𝓑 𝓒𝓸𝓸𝓹𝓮𝓻 replied to Paul Cantrell
The “Can’t you just…” mantra is NOT limited to SWE; it applies to all technical jobs.
-
@emenel
Indeed. Related: https://hachyderm.io/@inthehands/113227415258787221
-
@ShadSterling
Loved those books.
-
@RommelRico
You jest, but NullPointerException was a major safety advancement over dereferencing null and getting whatever’s at memory address zero! Why “safety”? Because it made the machine’s rule (“you just can’t dereference null, period”) something that humans could mentally model and predict.
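A minimal C sketch of the contrast (illustrative; checked_read is a hypothetical helper, not from any real API): dereferencing null in C is undefined behavior, and on some historical systems it really did just read address zero, whereas a checked rule fails the same predictable way every time.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper for illustration: enforce the rule "you just
       can't dereference null, period" with one predictable outcome. */
    int checked_read(const int *p) {
        if (p == NULL) {                 /* the rule a human can model */
            fprintf(stderr, "null dereference refused\n");
            exit(EXIT_FAILURE);
        }
        return *p;
    }

    int main(void) {
        int *p = NULL;
        /* In C this line is undefined behavior: historically it could
           silently return whatever happened to live at address zero.
           (Commented out on purpose.) */
        /* int x = *p; */
        return checked_read(p);          /* always fails, always the same way */
    }
-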
Paul Cantrell replied to Sheldon Chang 🇺🇸
-
Paul Cantrell replied to 𝓓 𝓑 𝓒𝓸𝓸𝓹𝓮𝓻
@partnumber2 @tess
Yes, and not just technical either. ALL jobs!

[Quoted post] Paul Cantrell (@[email protected]) on Hachyderm.io (hachyderm.io):
“Can’t you just…” is an idiom that means “I don’t understand the work you do at all”
-
Sophie Schmieg replied to Paul Cantrell
@inthehands this has been my biggest problem with all these AI assisted coding tools that have been shoved down our collective throats. I'm always happy to see and test out new tools and techniques for writing code, but the tools have to be deterministic and "continuous". If I can't reliably trigger the same action twice, I can't use it to write code and would rather turn it off. If I can't use the performance on one task to predict the performance on a slightly different but closely related task, I can't use the tool to help produce code.
I need to know exactly what I am telling the machine, either because I've handwritten the instructions, or because I know what the tool I use to create the instructions does to my input. That's why I love my compilers, but hate spicy autocomplete.
-
Sophie Schmieg replied to Sophie Schmieg
@inthehands on second thought, to expand and refine this point: as a cryptography engineer, I have a love-hate relationship with compilers, precisely because of this reason. Cryptography has the unusual requirement that code needs to be executed in constant time, and compilers do not treat that as an invariant they must preserve. So part of my job is to dig through the assembly the compiler spits out and look for conditional jumps.
And when the compiler gets too unreliable, I have to cut it out of the development process entirely and handwrite the assembly.
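For readers outside cryptography, a minimal sketch of the kind of code in question (illustrative C; ct_select and ct_memeq are example names, not from Sophie's actual codebase). Both functions are written branchlessly so execution time doesn't depend on secret values; the catch is that an optimizing compiler is free to rewrite them with a conditional jump, which is exactly what the assembly audits are looking for.

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time select: returns a when mask is all-ones, b when mask
       is zero. No branch on secret data, at least in the source code;
       the compiler may still introduce one in the emitted assembly. */
    static uint32_t ct_select(uint32_t mask, uint32_t a, uint32_t b) {
        return (mask & a) | (~mask & b);
    }

    /* Constant-time equality: accumulates differences instead of
       returning early, so timing doesn't reveal where two secret
       buffers first diverge. */
    static int ct_memeq(const uint8_t *x, const uint8_t *y, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= x[i] ^ y[i];
        return diff == 0;
    }
-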
Paul Cantrell replied to Sophie Schmieg
@sophieschmieg
Yeah, there’s always a point with any tool where it’s not helping. And I see the hope for AI-authored code — not replacing code with AI, but using AI to write code — as being that maybe
LLM : high-level code :: high-level code : machine code
I don’t buy that either, but it’s a different discussion than my thread.
-
@denny @inthehands
International!
-
Paul Cantrell replied to DeterioratedStucco
@SoftwareTheron @denny
Cosmic!
-
@inthehands It's also the case that, if I ask someone to do something, at some point any ambiguity in my request is turned into unambiguous actions. There's no machine-human threshold there. Differences of interpretation are normal in human communication.
What you've called a "tantalizing false promise" in this thread, expecting from LLMs "the same kind of common-sense interpretation we expect of humans", is a *true* promise. I expect, and see, LLMs making human-like coding mistakes all the time.
-
@inthehands “The life of the law has not been logic; it has been experience.”
-
argv minus one replied to Sophie Schmieg
Why the hell do compilers *still* not have a “this code must execute in constant time, no matter what, and I want a compile error if that's impossible” declaration?
-
@inthehands DAOs were a great object lesson in "what if the laws actually were code‽"
(When looking for the wikipedia link, I was pleasantly surprised to find nothing web3-related in the first page of search results for "dao" - we did it, y'all!)
-
Sophie Schmieg replied to argv minus one
@argv_minus_one @inthehands it turns out, compilers are complicated. Compiler folks have tried, but piping an annotation like that through the entire LLVM stack is nontrivial, and there are very few people writing cryptographic code and very many people writing code that is as fast as possible, so the effort compiler folks could justify putting into it had a bit of an upper bound. So in the end, godbolt.org is the cryptography engineer's best friend.
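One mitigation that shows up in practice is an optimization "value barrier" (a sketch of the technique, GCC/Clang-specific inline asm; value_barrier_u32 is an illustrative name, though similar helpers exist in real cryptography libraries): an empty asm statement makes the optimizer treat a value as opaque, so it can't turn the mask arithmetic back into a conditional jump.

    #include <stdint.h>

    /* Optimization barrier: the empty asm claims to read and write v,
       so the compiler can no longer reason about its value and is much
       less likely to reintroduce a branch. Still worth verifying the
       emitted assembly on godbolt. */
    static inline uint32_t value_barrier_u32(uint32_t v) {
        __asm__("" : "+r"(v));
        return v;
    }

    /* Constant-time select, hardened with the barrier. */
    static inline uint32_t ct_select(uint32_t mask, uint32_t a, uint32_t b) {
        mask = value_barrier_u32(mask);
        return (mask & a) | (~mask & b);
    }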
-
argv minus one replied to Sophie Schmieg
Maybe, but that sounds like a *giant* footgun for timing-attack vulnerabilities. Merely changing the compiler version could create one, never mind compiling for a new ISA.
I didn't say it's easy to implement such a thing, but it is absolutely necessary.