Agreed with everything @kevinriggle wrote here.
-
@jrose @rgarner @inthehands @RuthMalan I’m totally on your side in this fight, but very soon the AI will have been in your team’s discussions. Or at least the other AI, the one that’s always listening to Slack and Zoom, will have written a summary of those discussions that’s in the coding AI’s context. Design docs too.
Fully-remote teams will have an advantage. At least until RTO includes wearing a lapel mic at all times…
-
@jamiemccarthy @jrose @inthehands @RuthMalan so far, my experience is: they may have seen the discussion, but they don't "remember" it, and they certainly have no idea which are the salient points. Sometimes even when you ram said points down their throats.
In short, I'm fine asking them to show me a depth-first search, but I would trust them with architecture and logical design decisions about as far as I could comfortably spit a rat.
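(To make the contrast concrete: the rote, well-specified kind of task meant here is something like the following depth-first search. This is a minimal sketch, not code from the thread; the graph shape and names are illustrative only.)

```python
# Minimal iterative depth-first search over an adjacency-list graph.
# The sort of textbook exercise an LLM can usually produce on demand.

def dfs(graph, start):
    """Return nodes reachable from `start`, in depth-first visit order."""
    visited = []
    stack = [start]
    seen = {start}
    while stack:
        node = stack.pop()
        visited.append(node)
        # Push unvisited neighbors; reversed() keeps a stable left-to-right order.
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return visited

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(dfs(graph, "a"))  # ['a', 'b', 'd', 'c']
```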
-
@gregdosh @sethrichards @inthehands
Wow. I have never thought about solo-AI coding => forced knowledge silo aggregation.
Thank you for sharing your thoughts.
-
@rgarner @jrose @inthehands @RuthMalan 100% agree with the overall thrust of what you’re saying
-
That’s all well said, and gets to what @jenniferplusplus was talking about here: https://jenniferplusplus.com/losing-the-imitation-game/
-
@jamiemccarthy @rgarner @jrose @RuthMalan
Yeah, I'm with Russell here: the whole “soon the AI will think” line simply isn’t justified by either theory or evidence. It’s akin to thinking that if you make cars go fast enough, eventually they’ll travel backwards in time. Re summarization specifically…
-
@jamiemccarthy @rgarner @jrose @RuthMalan
…there was a recent paper (lost link, sorry) that systematically reviewed LLM-generated summaries. They found in the lab what people have observed anecdotally: LLMs suck at it because they don’t know what the point is. They’re great at reducing word count in a grammatically well-formed way! But they often miss the key finding, highlight the wrong thing, etc.
-
@muhanga @gregdosh @sethrichards
Another version of this I've heard is that AI-generated code reduces the “bus factor” of your team to zero the moment the code is written.
-
What I am seeing from studies is pretty troubling, e.g. https://hachyderm.io/@shafik/113391494588227091
Folks feel more productive, but actual objective measures say otherwise.
It is never about lines of code produced; that is a silly measure by itself.
It is really about how many errors make it into production. If using AI code generators means a lot more bad code makes it into production, you are making a very poor tradeoff.
-
@inthehands My 30+ years as a dev disagree, on semantics, with the statement "coding a solution is a deep way of understanding a problem". Analyzing a solution is a deep way of understanding a problem. I've found that too many devs are in a rush to code and give short shrift to the analysis.
-
Paul Cantrell replied to Old Fucking Punk:
@lwriemen This is an overly pedantic quibble, though I agree with the underlying sentiment that people rush into coding too fast without thinking.
Doing the work of filling in what I left between the lines for the reader to infer above: coding something •while thinking• — assessing the results, letting the ideas talk back and surprise you, treating design and implementation problems as prompts to think about goals and context — is a deep way of understanding a problem.