Agreed with everything @kevinriggle wrote here.
-
You are 100% correct...
...right now.
But consider this: the upper limits of AI transcend human performance constraints.
One of the possible paths for AI development is to have multiple agents working in concert, like an orchestra.
One is responsible for overall architecture.
One for environmental impact.
Etc., etc.
Thought-provoking post.
Thanks -
@n_dimension
Please note the hypothetical downthread where I ask what happens if AI transcends human coding performance by a factor of infinity.
For your line of reasoning to work, this thing we call AI has to do something that LLMs simply do not do, do not approach doing, are not on a trajectory toward doing, and are fundamentally architecturally incapable of doing. It’s like saying “eventually if this car goes fast enough, it will be a time machine!”
-
Dr Andrew A. Adams #FBPE 🔶 replied to Paul Cantrell last edited by
@inthehands
Coding is teaching a really, really dumb student how to solve a problem.
Teaching something is the best way to understand it properly. -
@rgarner @inthehands @RuthMalan Oh, I like this one. Even if it were an actual person, it’s a person who read your code but none of the design docs and hasn’t participated in any of your team’s discussions. They’d have a good chance of coming up with something reasonable, but could also totally bodge it without realizing it.
-
@inthehands Oh, why hello, Amdahl!
-
@inthehands The bet that a lot of these CXOs are making implicitly is that this will be like the transition from assembly to higher-level languages like C (I think most of them are too young and/or too disconnected to make it explicitly). And I'm not 100% sold on it but my 60% hunch is that it's not.
-
@inthehands kickers are only on the field for a few minutes per game, fire them and make AI punt.
-
@kevinriggle
Yeah, I’ve heard that thought too. It’s tantalizing nonsense. I could write about this at length, and maybe one day I will, but the very short version is that automation is not even remotely the same thing as abstraction. -
@inthehands Yes! Yes. This is it exactly.
One can imagine a version of these systems where all the "source code" is English-language text describing a software system, and the Makefile first runs that through an LLM to generate C or Python or whatever before handing it off to a regular compiler, which would in some sense be more abstraction, but this is like keeping the .o files around and making the programmers debug the assembly with a hex editor.
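For concreteness, a rough sketch of what that hypothetical build step might look like; the spec filename, the llm_generate stub, and the cc invocation are all invented for illustration, not anything anyone actually ships:

```python
# Hypothetical "English as source code" build step: an LLM turns a prose spec
# into C, then an ordinary compiler takes over. Everything here is a placeholder.
import subprocess
from pathlib import Path

def llm_generate(prompt: str) -> str:
    """Stand-in for a call to whatever code-generating model you'd use."""
    raise NotImplementedError("wire up a model here")

def build(spec_path: str = "spec.en.md") -> None:
    spec = Path(spec_path).read_text()
    c_source = llm_generate(f"Translate this specification into C:\n\n{spec}")

    out = Path("build/generated.c")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(c_source)

    # Hand the generated C to a regular compiler, as a Makefile rule would.
    subprocess.run(["cc", str(out), "-o", "build/app"], check=True)

if __name__ == "__main__":
    build()
```

The point of the analogy still holds: in this setup the generated C is an intermediate artifact, and debugging at that layer is exactly the "hex editor on the .o files" experience.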
-
@OmegaPolice
That’s it: Amdahl’s Law, except the optimization actually creates large costs in the other parts of the system!
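For reference, Amdahl's law bounds the overall speedup when a fraction p of the work is accelerated by a factor s; the joke above is that the "acceleration" can also inflate the remaining work. A rough way to write that down, with k as a made-up overhead factor on the unoptimized part:

```latex
% Classic Amdahl: speed up fraction p of the work by a factor s
S(s) = \frac{1}{(1 - p) + \frac{p}{s}}

% The variant being joked about: the same change also multiplies the
% other (1 - p) of the work by an overhead k > 1, so the net "speedup"
% can fall below 1 no matter how large s gets.
S_k(s) = \frac{1}{k\,(1 - p) + \frac{p}{s}}
```
-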
@kevinriggle
That’s exactly the line of thought, yes. And the thing that makes abstractions useful, if they are useful, is that they make good decisions about what doesn’t matter, what can be standard, and what requires situation-specific thought. Those decisions simultaneously become a productivity boost, a source of safety, and a metaphor that is a tool for thought and communication. -
Paul Cantrell replied to Paul Cantrell last edited by [email protected]
@kevinriggle
What happens when the semantics of your abstractive model are defined by probabilistic plagiarism, and may change every single time you use it? That might be good for something, I guess??? But it doesn’t remotely resemble what a high-level language does for assembly. -
@inthehands Another bull-case argument about LLMs is that they are a form of autonomation (autonomy + automation), in the sense that the Toyota Production System uses it, the classic example being the automated loom which has a tension sensor and will stop if one of the warp yarns breaks. But we already have many such systems in software, made out of normal non-LLM parts, and also that's ... not really what's going on here, at least the way they're currently being used.
-
@kevinriggle
Yeah, one of the troubles with the systems is that basically every metaphor we can think of for what they are is misleading. -
@inthehands One could imagine using a fixed set of model weights and not retraining, using a fixed random seed, and keeping the model temperature relatively low. I'm imagining on some level basically the programming-language-generating version of Nvidia's DLSS tech here. But that's not what people are doing, and I'm not convinced it would be useful if we did.
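A minimal sketch of that "pin everything down" setup, assuming a Hugging Face-style stack; the checkpoint name is invented, and even with fixed seeds there's no guarantee of bitwise-identical output across hardware or library versions:

```python
# Hypothetical pinned-model code generation: frozen checkpoint, fixed seed,
# low temperature. The model name and prompt are placeholders.
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "some-org/frozen-code-model"  # pinned revision, never retrained

def generate_code(prompt: str, seed: int = 1234) -> str:
    # Fix every source of randomness we control.
    random.seed(seed)
    torch.manual_seed(seed)

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)

    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.1,     # "relatively low": mostly greedy, a little slack
        max_new_tokens=512,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```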
-
Paul Cantrell replied to Kevin Riggle last edited by [email protected]
@kevinriggle
Even if that gave semantically stable answers, which I’m not convinced it would, it still skips that all-important step where there’s communication and reflection and consensus building.
I suppose there’s some help in approaches where an LLM generates plausible answers and then some semantically reliable verification checks that the results aren’t nonsense. But we’re really stretching it.
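Roughly the shape of that generate-then-verify loop, for what it's worth; the slugify task, the TESTS string, and the llm_generate stub are all made up, and running untrusted generated code should really happen in a sandbox:

```python
# Sketch: the model proposes code, and something deterministic (an ordinary
# pytest run against hand-written tests) decides whether to accept it.
import subprocess
import tempfile
from pathlib import Path

TESTS = """\
from candidate import slugify  # the function we asked the model to write

def test_basic():
    assert slugify("Hello, World!") == "hello-world"
"""

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for whatever model you'd call")

def generate_with_verification(prompt: str, attempts: int = 3) -> str:
    for _ in range(attempts):
        candidate = llm_generate(prompt)
        with tempfile.TemporaryDirectory() as tmp:
            Path(tmp, "candidate.py").write_text(candidate)
            Path(tmp, "test_candidate.py").write_text(TESTS)
            # The "semantically reliable" part is the plain test run, not the model.
            result = subprocess.run(["python", "-m", "pytest", tmp], capture_output=True)
        if result.returncode == 0:
            return candidate
    raise RuntimeError("no candidate passed verification")
```

Of course the tests only check what someone thought to write down, which is exactly the stretching being pointed at.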
-
@sethrichards @inthehands Importantly, when more people lean into solo-AI coding they're leaning away from building shared context and understanding of the problem domains with their team. Then those teams struggle to solve basic problems later because they can't coordinate on even the most basic teamwork and shared decision-making.
AI is just another in a long list of tools trying to brute-force a solution to a people problem.
Building shared learning and knowledge takes time but pays off hugely for teams and individuals who invest in it.
-
@gregdosh @sethrichards
Strong agree. Of course the problems you describe can happen without AI too, when siloed processes, bad team relationships, high turnover, careless outsourcing, etc. prevent communication and relationship-building during development. But just as you say, the AI productivity myth brings a whole new layer of “human problem” pitfalls. -
@kevinriggle
The primary job of a development team is the creation and maintenance of a shared mental model of what the software does and how it does it. Periodically, they change the code to implement changes to the mental model that have been agreed upon, or to correct places where the code does not match the model.
An LLM cannot reason and does not have a theory of mind, and as such cannot participate in the model process or meaningfully access that model — written documentation is at best a memory aid for the model — and thus cannot actually do anything that matters in the process.
The executive class would prefer that other people in the org not be permitted to think, let alone paid for it, and therefore willfully confuses the output with the job.
@inthehands -
@jrose @rgarner @inthehands @RuthMalan I’m totally on your side in this fight, but very soon the AI will have been in your team’s discussions. Or at least the other AI, the one that’s always listening to Slack and Zoom, will have written a summary of those discussions that’s in the coding AI’s context. Design docs too.
Fully-remote teams will have an advantage. At least until RTO includes wearing a lapel mic at all times…