@simon quotes Sinofsky: “But if you think functional AI helping to code will make humans dumber or isn’t real programming just consider that’s been the argument against every generation of programming tools going back to Fortran.”
-
@simon @timbray Right, but if I check in generated code, that implies that I want to maintain it
And if I want to maintain it, I have to ask: am I using genAI because it is easy, or because the code produced by genAI is simple and easy to maintain?
If it is both, great! But if it is easy and not simple, then that artifact's prospects over a 6-month-plus time horizon are very bad
-
Simon Willison replied to Bill Phillips
@billjings @timbray I’ve been finding that the code I get out of generative AI is fantastic to maintain in the future - but that’s because I iterate on it with the tool a bunch to get it into the right shape before I use it, effectively a weird kind of pair programming
-
Simon Willison replied to Ignacio Torres
@billjings @timbray I think I am - I’m working on an article about that at the moment (clickbait title: “LLM assistance makes me a better programmer”)
I’m able to learn new libraries and languages faster which means I can apply a wider range of tools to problems - and I can generally work faster, which means I can take on more ambitious projects
-
Bill Phillips replied to Simon Willison
@simon @timbray Maybe! But the central point is: we have to evaluate the benefit, or lack thereof, of these tools by their artifacts, not by how easy those artifacts are to create.
The most compelling use cases I hear for genAI are around domains where those artifacts seem more disposable. E.g. writing visualization code for data analysis.
But even there, "disposable" is super fuzzy. Things transition to load bearing swiftly and unexpectedly.
-
Simon Willison replied to Bill Phillips
@billjings @timbray my rule for quality code is that it has automated tests, is accurately documented, is easy to understand and can be productively modified in the future
Most of the code I’ve been writing in collaboration with LLMs has ticked all of those boxes for me so far
-
@simon @billjings Hey Simon, do you get it to generate unit tests too? Any repository I manage disallows code check-in unless there is excellent coverage…
-
@timbray @billjings I try to include tests that prove a change works in every commit: https://simonwillison.net/2022/Oct/29/the-perfect-commit/
These days those are often written with LLM assistance - it’s great at frustrating details like configuring Python’s somewhat obtuse mocking library
Some older (pre-ChatGPT) notes about using LLMs to help with tests here: https://til.simonwillison.net/gpt3/writing-test-with-copilot
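[Editor's note: as an illustration of the `unittest.mock` boilerplate being described above, here is a minimal sketch. The `fetch_status` function and the test are hypothetical examples, not code from the thread; the point is the amount of mock configuration needed even for a trivial network-free test.]

```python
from unittest import mock
import urllib.request

def fetch_status(url):
    """Return the HTTP status code for url (hypothetical example function)."""
    with urllib.request.urlopen(url) as resp:
        return resp.status

def test_fetch_status():
    # Configure a mock response so the test makes no real network call.
    # urlopen is used as a context manager, so __enter__ must return
    # an object with a .status attribute - this is the kind of fiddly
    # setup the mocking library requires.
    fake_resp = mock.MagicMock()
    fake_resp.status = 200
    fake_resp.__enter__.return_value = fake_resp
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        assert fetch_status("https://example.com/") == 200

test_fetch_status()
```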