Agreed with everything @kevinriggle wrote here.
-
Yes, this from @sethrichards is what I’m talking about:
https://mas.to/@sethrichards/113642032055823958

I had a great moment with a student the other day dealing with some broken code. The proximate problem was a Java NullPointerException. The proximate solution was “that ivar isn’t initialized yet.”
BUT…
-
…It wasn’t initialized because they weren’t thinking about the lifecycle of that object because they weren’t thinking about when things happen in the UI because they weren’t thinking about the sequence of the user’s interaction with the system because they weren’t thinking about how the software would actually get used or about what they actually •wanted• it to do when it worked.
The technical problem was really a design / lack of clarity problem. This happens •constantly• when writing code.
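A minimal sketch of that proximate problem (class and names are hypothetical, not the student’s actual code): the field is only initialized inside a UI callback, so any code path that runs before that user interaction dereferences null.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: the ivar is only initialized in a UI
// callback, so the object's lifecycle depends on the user's sequence
// of interactions.
public class CartScreen {
    private List<String> items;  // the uninitialized ivar: null until onScreenShown()

    // Runs when the user navigates to the cart screen, but nothing
    // guarantees this happens before addItem() is called.
    public void onScreenShown() {
        items = new ArrayList<>();
    }

    public void addItem(String item) {
        items.add(item);  // NullPointerException: the proximate problem
    }

    public static void main(String[] args) {
        CartScreen screen = new CartScreen();
        screen.addItem("book");  // user adds an item before the screen ever appears
    }
}
```

The one-line fix is initializing the field at its declaration; the real fix is deciding when, in the user’s flow, a cart should exist at all.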
-
A good point from @rgarner, looking at this through the lens of Brooks’s Law:
https://mastodon.social/@rgarner/113642040777621582
-
You are 100% correct...
...right now.
But consider this: the upper limits of AI transcend human performance constraints.
One of the possible paths for AI development is to have multiple agents working in concert.
One is responsible for overall architecture.
One for environmental impact.
Etc., etc.
Thought-provoking post.
Thanks
-
@n_dimension
Please note the hypothetical downthread where I ask what happens if AI transcends human coding performance by a factor of infinity.

For your line of reasoning to work, this thing we call AI has to do something that LLMs simply do not do, do not approach doing, are not on a trajectory toward doing, and are fundamentally architecturally incapable of doing. It’s like saying “eventually if this car goes fast enough, it will be a time machine!”
-
Dr Andrew A. Adams #FBPE 🔶 replied to Paul Cantrell:
@inthehands
Coding is teaching a really, really dumb student how to solve a problem.
Teaching something is the best way to understand it properly.
-
@rgarner @inthehands @RuthMalan Oh, I like this one. Even if it were an actual person, it’s a person who read your code but none of the design docs and hasn’t participated in any of your team’s discussions. They’d have a good chance of coming up with something reasonable, but could also totally bodge it without realizing it.
-
@inthehands Oh, why hello, Amdahl!
-
@inthehands The bet that a lot of these CXOs are making implicitly is that this will be like the transition from assembly to higher-level languages like C (I think most of them are too young and/or too disconnected to make it explicitly). And I'm not 100% sold on it but my 60% hunch is that it's not.
-
@inthehands kickers are only on the field for a few minutes per game; fire them and make AI punt.
-
@kevinriggle
Yeah, I’ve heard that thought too. It’s tantalizing nonsense. I could write about this at length, and maybe one day I will, but the very short version is that automation is not even remotely the same thing as abstraction.
-
@inthehands Yes! Yes. This is it exactly.
One can imagine a version of these systems where all the "source code" is English-language text describing a software system, and the Makefile first runs that through an LLM to generate C or Python or whatever before handing it off to a regular compiler. That would in some sense be more abstraction, but it's like keeping the .o files around and making the programmers debug the assembly with a hex editor.
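To make the shape of that hypothetical pipeline concrete, a sketch; the llm-codegen command, its flags, and the file names are all invented for illustration, not a real tool.

```java
import java.io.IOException;

// Sketch of the hypothetical "English as source" build step described
// above: stage 1 asks an imagined LLM tool to emit C from an English
// spec; stage 2 hands the result to an ordinary compiler.
public class EnglishSourceBuild {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Stage 1: English -> C (placeholder command, not a real tool).
        new ProcessBuilder("llm-codegen", "--target", "c",
                "--in", "spec.en.txt", "--out", "main.c")
                .inheritIO().start().waitFor();

        // Stage 2: C -> binary via a regular compiler. Note that main.c,
        // the generated artifact, is the only thing left to debug.
        new ProcessBuilder("cc", "main.c", "-o", "app")
                .inheritIO().start().waitFor();
    }
}
```

That last comment is the point of the analogy: the generated main.c plays the role of the assembly you’d be stuck poking at with a hex editor.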
-
@OmegaPolice
That’s it: Amdahl’s Law, except optimization actually creates large costs in the other parts of the system!
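For readers who don’t have the formula to hand, Amdahl’s Law: if a fraction p of the work is sped up by a factor s, the overall speedup is

```latex
S(s) = \frac{1}{(1 - p) + \frac{p}{s}}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1 - p}
```

The joke lands because here the unoptimized share (1 − p) isn’t even constant: the “optimization” inflates it.
-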
@kevinriggle
That’s exactly the line of thought, yes. And the thing that makes abstractions useful, if they are useful, is that they make good decisions about what doesn’t matter, what can be standard, and what requires situation-specific thought. Those decisions simultaneously become a productivity boost, a safety mechanism, and a metaphor that is a tool for thought and communication.
-
Paul Cantrell replied to Paul Cantrell:
@kevinriggle
What happens when the semantics of your abstractive model are defined by probabilistic plagiarism, and may change every single time you use it? That might be good for something, I guess??? But it doesn’t remotely resemble what a high-level language does for assembly.
-
@inthehands Another bull-case argument about LLMs is that they are a form of autonomation (autonomy + automation), in the sense that the Toyota Production System uses it, the classic example being the automated loom which has a tension sensor and will stop if one of the warp yarns break. But we already have many such systems in software, made out of normal non-LLM parts, and also that's ... not really what's going on here, at least the way they're currently being used.
-
@kevinriggle
Yeah, one of the troubles with these systems is that basically every metaphor we can think of for what they are is misleading.
-
@inthehands One could imagine using a fixed set of model weights and not retraining, using a fixed random seed, and keeping the model temperature relatively low. I'm imagining, on some level, basically the programming-language-generating version of Nvidia's DLSS tech here. But that's not what people are doing, and I'm not convinced it would be useful if we did.
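For concreteness, the knobs being described, as a hypothetical config; every field is a stand-in, since real inference APIs differ.

```java
// Hypothetical configuration for the deterministic setup described
// above: frozen weights, fixed seed, low temperature. Illustrative only.
public record PinnedGeneratorConfig(
        String weightsSnapshot,  // a fixed, never-retrained model version
        long randomSeed,         // fixed seed so sampling is reproducible
        double temperature       // kept low to damp output variance
) {
    public static PinnedGeneratorConfig example() {
        return new PinnedGeneratorConfig("model-v1-frozen", 42L, 0.1);
    }
}
```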
-
Paul Cantrell replied to Kevin Riggle:
@kevinriggle
Even if that gave semantically stable answers, which I’m not convinced it would, it still skips that all-important step where there’s communication and reflection and consensus building.
I suppose there’s some help in approaches where an LLM generates plausible answers and then some semantically reliable verification checks that the results aren’t nonsense. But we’re really stretching it.
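A sketch of that generate-and-verify loop; both function arguments are stand-ins (the generator would be an LLM call, the verifier something deterministic like a compiler run or a test suite).

```java
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of the generate-then-verify pattern mentioned above: keep
// asking a nondeterministic generator for candidates, and accept only
// those that pass a semantically reliable check.
public class GenerateThenVerify {
    static Optional<String> firstVerified(
            Function<String, String> generate,  // prompt -> candidate (the LLM)
            Predicate<String> verify,           // deterministic check
            String prompt,
            int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            String candidate = generate.apply(prompt);
            if (verify.test(candidate)) {
                return Optional.of(candidate);  // passed the check
            }
        }
        return Optional.empty();                // every attempt was nonsense
    }

    public static void main(String[] args) {
        // Toy demo: "generator" trims, "verifier" requires non-blank output.
        System.out.println(firstVerified(p -> p.trim(), s -> !s.isEmpty(), " hello ", 3));
    }
}
```

Even when a candidate passes, the check only rejects nonsense; it doesn’t recover the communication and consensus-building that got skipped.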
-
@sethrichards @inthehands Importantly, when more people lean into solo-AI coding, they're leaning away from building shared context and understanding of the problem domains with their team. Then those teams struggle to solve basic problems later because they can't coordinate on the basics of teamwork and shared decision-making.
AI is just another in a long list of tools trying to brute-force a solution to a people problem.
Building shared learning and knowledge takes time but pays off hugely for teams and individuals who invest in it.