Agreed with everything @kevinriggle wrote here.
-
@inthehands kickers are only on the field for a few minutes per game, fire them and make AI punt.
-
@kevinriggle
Yeah, I’ve heard that thought too. It’s tantalizing nonsense. I could write about this at length, and maybe one day I will, but the very short version is that automation is not even remotely the same thing as abstraction. -
@inthehands Yes! Yes. This is it exactly.
One can imagine a version of these systems where all the "source code" is English-language text describing a software system, and the Makefile first runs that through an LLM to generate C or Python or whatever before handing it off to a regular compiler, which would in some sense be more abstraction, but this is like keeping the .o files around and making the programmers debug the assembly with a hex editor.
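(A minimal sketch of that hypothetical pipeline, purely for illustration: `generate_c_from_spec`, `spec.md`, and the filenames are made-up names, and the LLM call is left as a stub.)

```python
# Sketch of the hypothetical "English as source code" build step described above.
# Everything here is illustrative: generate_c_from_spec is a stub for whatever
# LLM client you'd wire in, and the filenames are arbitrary.
import subprocess
from pathlib import Path

def generate_c_from_spec(spec_text: str) -> str:
    """Stub: send the English-language spec to an LLM, return generated C source."""
    raise NotImplementedError("plug in an LLM client here")

def build(spec_path: str = "spec.md", binary: str = "app") -> None:
    spec = Path(spec_path).read_text()
    c_source = generate_c_from_spec(spec)   # the nondeterministic "front end"
    Path("app.c").write_text(c_source)
    # From here on it's an ordinary toolchain -- but the generated C, not the
    # English spec, is what a programmer would actually end up debugging.
    subprocess.run(["cc", "app.c", "-o", binary], check=True)
```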
-
@OmegaPolice
That’s it: Amdahl’s Law, except optimization actually creates large costs in the other parts of the system! -
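(For reference, Amdahl’s Law bounds the overall speedup when a fraction p of the work is sped up by a factor s; the joke above is that the “optimized” part also taxes everything else, which we might caricature with an extra penalty term c on the unoptimized portion. The c term is an illustrative addition, not part of the law.)

$$
S = \frac{1}{(1-p) + \frac{p}{s}}
\qquad\text{vs.}\qquad
S' = \frac{1}{(1-p)(1+c) + \frac{p}{s}}, \quad c > 0
$$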
@kevinriggle
That’s exactly the line of thought, yes. And the thing that makes abstractions useful, if they are useful, is that they make good decisions about what doesn’t matter, what can be standard, and what requires situation-specific thought. Those decisions simultaneously become a productivity boost, a form of safety, and a metaphor that is a tool for thought and communication. -
Paul Cantrell replied to Paul Cantrell
@kevinriggle
What happens when the semantics of your abstractive model are defined by probabilistic plagiarism, and may change every single time you use it? That might be good for something, I guess??? But it doesn’t remotely resemble what a high-level language does for assembly. -
@inthehands Another bull-case argument about LLMs is that they are a form of autonomation (autonomy + automation), in the sense that the Toyota Production System uses it, the classic example being the automated loom which has a tension sensor and will stop if one of the warp yarns breaks. But we already have many such systems in software, made out of normal non-LLM parts, and also that's ... not really what's going on here, at least the way they're currently being used.
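(The software analogue alluded to here is the ordinary, non-LLM "stop the line" check; a toy sketch, with made-up names.)

```python
# Toy analogue of the loom's tension sensor: an ordinary, deterministic check
# that halts processing the moment an invariant breaks. Names are illustrative.
class LineStopped(Exception):
    """Raised to stop the line, like the loom stopping on a broken warp yarn."""

def process_batch(records, handle, is_healthy):
    for record in records:
        if not is_healthy(record):        # the "tension sensor"
            raise LineStopped(f"invariant violated: {record!r}")
        handle(record)
```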
-
@kevinriggle
Yeah, one of the troubles with these systems is that basically every metaphor we can think of for what they are is misleading. -
@inthehands One could imagine using a fixed set of model weights and not retraining, using a fixed random seed, and keeping the model temperature relatively low. I'm imagining on some level basically the programming-language-generating version of Nvidia's DLSS tech here. But that's not what people are doing, and I'm not convinced it would be useful even if we did that.
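(A sketch of what "pin everything down" might look like in code; the client object and its parameter names are hypothetical stand-ins, not a real API.)

```python
# Hypothetical sketch of the "frozen everything" setup described above:
# a pinned model snapshot, a fixed seed, and low temperature. The client
# and its generate() signature are placeholders, not a real library.
def generate_code(prompt: str, client) -> str:
    return client.generate(
        model="codegen-2024-01-frozen",  # pinned snapshot, never retrained
        prompt=prompt,
        seed=42,                         # fixed seed for repeatable sampling
        temperature=0.1,                 # near-greedy decoding
    )
```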
-
Paul Cantrell replied to Kevin Riggle
@kevinriggle
Even if that gave semantically stable answers, which I’m not convinced it would, it still skips that all-important step where there’s communication and reflection and consensus building. I suppose there’s some help in approaches where an LLM generates plausible answers and then some semantically reliable verification checks that the results aren’t nonsense. But we’re really stretching it.
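(The generate-then-verify shape mentioned above, as a sketch; `propose` and `verify` are placeholders, and `verify` is assumed to be a conventional deterministic check such as a test suite or type checker, not another LLM.)

```python
# Sketch of "LLM proposes, a semantically reliable checker disposes."
# propose() and verify() are placeholders; verify() should be an ordinary
# deterministic check (tests, types, a proof checker), not another model.
from typing import Callable, Optional

def generate_and_check(
    prompt: str,
    propose: Callable[[str], str],
    verify: Callable[[str], bool],
    attempts: int = 5,
) -> Optional[str]:
    for _ in range(attempts):
        candidate = propose(prompt)
        if verify(candidate):   # accept only what the checker confirms
            return candidate
    return None                 # nothing verified; back to the humans
```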
-
@sethrichards @inthehands Importantly, when more people lean into solo-AI coding they're leaning away from building shared context and understanding of the problem domains with their team. Then those teams struggle to solve basic problems later because they can't coordinate on even the basics of teamwork and shared decision-making.
AI is just another in a long line of tools trying to brute-force a solution to a people problem.
Building shared learning and knowledge takes time but pays off hugely for teams and individuals who invest in it.
-
@gregdosh @sethrichards
Strong agree. Of course the problems you describe can happen without AI too, when siloed processes, bad team relationships, high turnover, careless outsourcing, etc. prevent communication and relationship-building during development. But just as you say, the AI productivity myth brings a whole new layer of “human problem” pitfalls. -
@kevinriggle
The primary job of a development team is the creation and maintenance of a shared mental model of what the software does and how it does it. Periodically, they change the code to implement changes to the mental model that have been agreed upon, or to correct places where the code does not match the model.
An LLM cannot reason and does not have a theory of mind, and as such cannot participate in the modeling process or meaningfully access that model (written documentation is at best a memory aid for the model), and thus cannot actually do anything that matters in the process. The executive class would prefer that other people in the org not be permitted to think, let alone paid for it, and therefore willfully confuses the output with the job.
@inthehands -
@jrose @rgarner @inthehands @RuthMalan I’m totally on your side in this fight, but very soon the AI will have been in your team’s discussions. Or at least the other AI, the one that’s always listening to Slack and Zoom, will have written a summary of those discussions that’s in the coding AI’s context. Design docs too.
Fully-remote teams will have an advantage. At least until RTO includes wearing a lapel mic at all times…
-
@jamiemccarthy @jrose @inthehands @RuthMalan so far, my experience is: they may have seen the discussion, but they don't "remember" it, and they certainly have no idea which are the salient points. Sometimes even when you ram said points down their throats.
In short, I'm fine asking them to show me a depth-first search, but I would trust them with architecture and logical design decisions about as far as I could comfortably spit a rat.
-
@gregdosh @sethrichards @inthehands
Wow. I have never thought about solo-AI coding => forced knowledge silo aggregation.
Thank you for sharing your thoughts. -
@rgarner @jrose @inthehands @RuthMalan 100% agree with the overall thrust of what you’re saying
-
That’s all well said, and gets to what @jenniferplusplus was talking about here: https://jenniferplusplus.com/losing-the-imitation-game/
-
@jamiemccarthy @rgarner @jrose @RuthMalan
Yeah, I'm with Russell here: the whole “soon the AI will think” line simply isn’t justified by either theory or evidence. It’s akin to thinking that if you make cars go fast enough, eventually they’ll travel backwards in time. Re summarization specifically…
-
@jamiemccarthy @rgarner @jrose @RuthMalan
…there was a recent paper (lost link, sorry) that systematically reviewed LLM-generated summaries. They found in the lab what people have observed anecdotally: LLMs suck at it because they don’t know what the point is. They’re great at reducing word count in a grammatically well-formed way! But they often miss the key finding, highlight the wrong thing, etc.