AI generation when writing software is a false economy. You are replacing writing code with code review. Code review is harder and requires you to already have an understanding of the domain, which often means that you would've been able to write it yourself…
-
@mary @KeithAmmann @xkummerer To see the difference, try feeding some awful Bash to Shellcheck on one hand, and to an LLM on the other.
The LLM isn't (usually) going to find even half of the *really* dangerous crap that Shellcheck does. But it can (sometimes) tell you if you've screwed up your documentation, or written code that's likely to be harder for a human to understand than it has to be. They're not *reliable* in the same way a linter is, and probably can't ever be.
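To make that concrete, here's a contrived snippet of the kind of "awful Bash" I mean (an illustrative example, not from the thread):

```bash
#!/bin/bash
# Clean up a build directory passed as the first argument.
target=$1
cd $target        # unquoted, and not checked for failure
rm -rf *          # if cd failed, this wipes whatever directory you were in
for f in $(ls *.log); do   # iterating over ls output breaks on odd filenames
  rm $f
done
```

Shellcheck reliably flags the unquoted `$target` (SC2086), the unchecked `cd` (SC2164), and the `for f in $(ls ...)` pattern (SC2045): exactly the dangerous stuff. An LLM might catch some of these on a given run, or might not.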
-
datarama replied to Jan :rust: :ferris:
I've said elsewhere that I think a lot of this particular discussion is actually two discussions hiding inside the contours of each other. One is "are LLMs useful for programming?", and the other is "will LLMs replace programmers?". I fall into this trap myself, probably because this is a very bad time to be a programmer who's dealing with anxiety.
At any rate: I make lots of little scripts that are literally one-off, as in I don't bother keeping them after use.
-
@janriemer @Schouten_B @mary (Quick data conversions and migrations of various kinds, mostly - where there's nothing left for the script to do after I've run it.
But yes, "load-bearing prototypes" are definitely something that unfortunately regularly happens, and something our profession ought to be a lot more mindful about.)
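A hypothetical sketch of that kind of throwaway script (the filenames and format are made up): convert a tab-separated export to CSV once, then delete the script.

```bash
#!/bin/bash
# One-off migration helper: turn a TSV export into a naively quoted CSV.
# Quoting is deliberately simplistic; good enough for a single known dataset.
awk 'BEGIN { FS = "\t"; OFS = "," }
     { for (i = 1; i <= NF; i++) $i = "\"" $i "\""; print }' \
    export.tsv > import.csv
```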
-
The few times I asked it to review code, it did so very poorly: it suggested changes that made no sense and broke the code.
It's useless. I still don't get how people talk about it seriously after trying it for ten minutes.
-
@bloodykneelers @xkummerer @mary I've gotten both good and bad results out of several of them, and I think the most frustrating aspect is that it is hard to develop an intuition of what they're good at and what they're bad at.
-
Pangolin Gerasim replied to datarama
@datarama @mary @KeithAmmann @xkummerer so, at best, marginal utility from adding an LLM into the dev workflow?
You've now got two things to review: the correctness of the code, and the assertions made about that code by a code-reviewing LLM.
Add to that the horrendous costs of running an LLM (and we don't yet reliably know what that'll be when the VCs stop pumping in vast amounts of cash to pay for all that compute) and we're well into negative territory.
-
datarama replied to Pangolin Gerasim
@fluidlogic @mary @KeithAmmann @xkummerer I'd say that having an LLM pre-reviewer is far *less* likely to land you in negative territory than LLM code generation is. Assuming you've just written the code, it's still fresh in your brain, and you can quickly see if its suggestions actually make a valid point.
It's also the sort of thing you can do using a smaller, locally-hosted LLM, so the compute cost at inference time is "two seconds of your GPU's time, or five of your CPU's."
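As a sketch of what that pre-review step could look like (assuming a locally-hosted model served via ollama; the model name and prompt are illustrative, not from the thread):

```bash
#!/bin/bash
# Pre-review the staged diff with a small local model before a human looks at it.
# Treat the output as prompts for your own judgment, not as authoritative review.
ollama run llama3.2 "Review this diff for unclear names, missing docs, and likely bugs:
$(git diff --staged)"
```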
-
vriesk (Jan Srz) replied to datarama
@datarama @abucci @xkummerer @mary Yeah, if done well, I wouldn't mind this kind of AI-driven linter (_in addition_ to static linter rules).