Agreed with everything @kevinriggle wrote here.
-
All the above is also true (though perhaps in different proportions) of humans writing code! But here’s the big difference:
When humans write the code, those humans are •thinking• about the problem the whole time: understanding where those flaws might be hiding, playing out the implications of business assumptions, studying the problem up close.
When AI writes the code, none of that happens. It’s a tradeoff: faster code generation at the cost of reduced understanding.
2/
-
The effect of AI is to reduce the cost of •generating code• by a factor of X while increasing the cost of •thinking about the problem• by a factor of Y.
And yes, Y>1. A thing non-developers do not understand about code is that coding a solution is a deep way of understanding a problem — and conversely, using code that’s dropped in your lap greatly increases the amount of problem that must be understood.
3/
-
Reduce the cost of generating code by a factor of X; increase the cost of understanding by a factor of Y. How much bigger must X be than Y for that to pay off?
Check that OP again: if software engineers spend on average 1 hr/day writing code, and assuming (optimistically!) that they work only 8-hour days, then a napkin sketch of your AI-assisted cost of coding is:
1 / X + 7 * Y
That means even if X = ∞ (and it doesn’t, but even if!!), then Y cannot exceed ~1.14.
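The napkin math above can be sketched in a few lines. (The function names here are mine, purely illustrative; the model is just the 1-hour/7-hour split from the OP.)

```python
# Napkin-sketch cost model: a dev's 8-hour day splits into 1 hour
# generating code and 7 hours thinking about the problem. AI divides
# the generating cost by X and multiplies the thinking cost by Y,
# so an AI-assisted day costs 1/X + 7*Y hours of equivalent work.

def ai_assisted_cost(x: float, y: float) -> float:
    """Hours of equivalent work per 8-hour day with AI assistance."""
    return 1 / x + 7 * y

def breakeven_y(x: float) -> float:
    """Largest Y for which AI assistance still pays off (cost <= 8)."""
    return (8 - 1 / x) / 7

# Even with infinitely fast code generation (X = ∞), thinking can get
# at most ~14% more expensive before AI assistance is a net loss:
print(breakeven_y(float("inf")))  # 8/7 ≈ 1.1428...
```

Same arithmetic gives the 3-wasted-hours variant mentioned later in the thread: a 5-hour effective day (1 coding, 4 thinking) only loosens the threshold to Y < 5/4 = 1.25.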
Hey CXO, you want that bet?
4/
-
This is a silly thumbnail-sized model, and it’s full of all kinds of holes.
Maybe devs straight-up waste 3 hours a day, so then the payoff threshold is Y < 1.25 instead! Maybe the effects are complex and nonlinear! Maybe this whole quantification effort is doomed!
Don’t take my math too seriously. I’m not actually setting up a useful predictive model here; I’m making a point.
5/
-
Though my model is quantitatively silly, it does get at the heart of something all too real:
If you see the OP and think it means software development is on the cusp of being automatable because devs only spend ≤1/8 of their time actually typing code, you’d damn well better understand how they spend the other ≥7/8 of their time — and how your executive decisions, necessarily made from a position of ignorance* if you are an executive, impact that 7/8.
/end (with footnote)
-
* Yes, executive decisions are •necessarily• made from a position of ignorance. The point of having these high-level roles isn’t (or at least should not be) amassing power, but rather having people looking at things at different zoom levels. Summary is at the heart of good management, along with the humility to know that you are seeing things in summary. If you know all the details, you’re insufficiently zoomed out. If you’re zoomed out, you have to remember how many details you don’t know.
-
@inthehands @RuthMalan every dev wants a greenfield project. LLMs shade even greenfield projects brown.
But then it's not the devs that are asking for this* so much as a managerial class looking for the sort of silver bullet that brings down both pay and the amount of time dealing with a type of worker they find difficult.
*not the ones who are any good, anyway
-
@rgarner @RuthMalan
Yup. All that. And Brooks’s maxim that there is no silver bullet still stands undefeated.
-
@inthehands I agree with all of this, and I'd add: When I'm writing code, I'm *learning* about the problem as well through the process. When I fix a bug in my code, I (hopefully) learn not to make the same mistake again. When I help someone on the team fix a bug in their code, we both learn something. If we write documentation or a unit test to make sure the bug doesn't happen again, the organization "learns" something too.
It's unclear to me whether AI is even capable of learning in this way.
-
Jeff Miller (orange hatband) replied to Paul Cantrell:
@inthehands I appreciate how your line of argument chimes with Fred Brooks' "No Silver Bullet", along the lines of essential complexity of the problem and the solution matching up, except the comparison here being the embedded understanding (hopefully) in code written specifically for the problem, versus the embedded unchecked assumptions in generated code.
Novel software libraries have a similar problem: easy to adopt, not necessarily easy to evaluate.
-
Jeff Miller (orange hatband) replied to Paul Cantrell:
@inthehands NSB
-
@inthehands @RuthMalan and that thing about adding people to projects. An LLM isn't a person quite so much as the average of some.
-
Yes, this from @sethrichards is what I’m talking about:
https://mas.to/@sethrichards/113642032055823958
I had a great moment with a student the other day dealing with some broken code. The proximate problem was a Java NullPointerException. The proximate solution was “that ivar isn’t initialized yet.”
BUT…
-
…It wasn’t initialized because they weren’t thinking about the lifecycle of that object because they weren’t thinking about when things happen in the UI because they weren’t thinking about the sequence of the user’s interaction with the system because they weren’t thinking about how the software would actually get used or about what they actually •wanted• it to do when it worked.
The technical problem was really a design / lack of clarity problem. This happens •constantly• when writing code.
-
A good point from @rgarner, looking at this through the lens of Brooks’s Law:
https://mastodon.social/@rgarner/113642040777621582
-
You are 100% correct...
...right now.
But consider this: the upper limits of AI transcend human performance constraints.
One of the possible paths for AI development is to have all these multiple agents working in concert.
One is responsible for overall architecture.
One for environmental impact.
Etc., etc.
Thought-provoking post. Thanks!
-
@n_dimension
Please note the hypothetical downthread where I ask what happens if AI transcends human coding performance by a factor of infinity.
For your line of reasoning to work, this thing we call AI has to do something that LLMs simply do not do, do not approach doing, are not on a trajectory toward doing, and are fundamentally architecturally incapable of doing. It’s like saying “eventually, if this car goes fast enough, it will be a time machine!”
-
Dr Andrew A. Adams #FBPE 🔶 replied to Paul Cantrell:
@inthehands
Coding is teaching a really, really dumb student how to solve a problem.
Teaching something is the best way to understand it properly.
-
@rgarner @inthehands @RuthMalan Oh, I like this one. Even if it were an actual person, it’s a person who read your code but none of the design docs and hasn’t participated in any of your team’s discussions. They’d have a good chance of coming up with something reasonable, but could also totally bodge it without realizing it.
-
@inthehands Oh, why hello, Amdahl!