Forrest Brazeal:
@AlSweigart I think there’s been a low-key trend in Silicon Valley of reducing QA teams generally over the past 5-10 years - I wonder if that might reverse now that more code is being written by LLMs driven by non-experts
-
@simon …as long as nobody invents any new languages or techniques
Even if they keep training new models, will they be able to overcome their own poisoning of the well with AI slop?
It feels to me like we’re in a temporary awakening before the world’s greatest corpus of language is ruined
-
@graham_knapp @matt pick a language or stack you don’t know at all, fire up ChatGPT or Claude to help you along and see how far you can get with it on a low-stakes project
I tried building a Pomodoro desktop app in https://tauri.app the other day, was a fun little side quest
-
@llimllib I don’t believe in the “model collapse” idea personally, AI models have been deliberately training on “synthetic data” for the last 12 months with increasingly impressive results
How quickly models can pick up new tech is definitely an interesting question - I’ve been pasting dozens of pages of documentation directly into them with good results, eg this example https://gist.github.com/simonw/97e29b86540fcc627da4984daf5b7f9f
-
João S. O. Bueno replied to Simon Willison
@simon maybe. But maybe there are levels of specialization particular to each language that A.I. simply can't delve into. I am such a specialist for Python, and I can't imagine trusting an LLM to do some of the more subtle things. Adding extra methods to a namedtuple? Show me AI code -deciding- to do that instead of just using a dataclass (which would imply extra conversion steps I don't want)
-
Simon Willison replied to Simon Willison
@llimllib the idea of “model collapse” is almost irresistible, because it’s a story of LLMs being brought down by their dual sins of polluting the web and then training on unverified and unlicensed scraped data
If AI labs continued to train indiscriminately it might be a problem, but those researchers are smarter than that: their whole game is about sourcing (and often deliberately generating) high quality training data
-
Simon Willison replied to João S. O. Bueno
@gwidion that’s exactly how I work with LLMs - a lot of my time is spent saying things like “rewrite that to use a namedtuple, not a dataclass”
Kind of like working with an infinitely patient intern who never gets frustrated at constant demands for tweaks and changes
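For readers outside Python, a minimal stdlib-only sketch of the distinction being debated here (the Point class is a made-up example, not code from the thread): subclassing a NamedTuple adds behaviour while keeping tuple semantics, where a dataclass is not a tuple at all:

```python
from typing import NamedTuple
from dataclasses import dataclass


class Point(NamedTuple):
    """Immutable and tuple-compatible: indexing and unpacking keep working."""
    x: float
    y: float

    def scaled(self, factor: float) -> "Point":
        # An extra method on a namedtuple: no conversion step needed,
        # the result is still a plain tuple under the hood.
        return Point(self.x * factor, self.y * factor)


@dataclass
class PointDC:
    """Mutable by default, and not a tuple: tuple-style code needs conversion."""
    x: float
    y: float


p = Point(1.0, 2.0)
x, y = p                            # tuple unpacking still works
assert p.scaled(2.0) == (2.0, 4.0)  # compares equal to a plain tuple
```

That "compares equal to a plain tuple" line is the subtlety: choosing the namedtuple avoids the extra conversion steps the dataclass would force on tuple-consuming code.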
-
Hynek Schlawack replied to Simon Willison
@simon There’s kinda a difference between tinkering, where ambition is good, and writing production software, no? What that article predicts is a wave of janky, poorly-understood, and unidiomatic code that will eventually collapse under its own weight. I like LLMs as an assistant to learning, but man, a world where people “learn” .NET in an afternoon and start churning out “production” code is positively dystopian to me
-
Simon Willison replied to Hynek Schlawack
@hynek I thought that too, but the more work I get done with LLMs myself the less worried I am about that
I have a Go project I wrote from scratch in production now, despite not being remotely fluent in Go. It has comprehensive test coverage and even implements continuous integration and continuous deployment, which is why I’m confident it’s not a spectacularly bad idea
Would other people YOLO something like that to production without tests? Maybe, and that would definitely be a bad idea!
-
Hynek Schlawack replied to Simon Willison
@simon Yeah but that’s exactly gonna happen once you work under economic constraints and middle managers pining for promotions. My point is exactly what you’re accidentally implying: they’re amazing for tinkering but a time bomb in prod envs.
-
Simon Willison replied to Hynek Schlawack
@hynek I certainly won’t deny that there are an incredible new array of footguns now available to anyone who wants them
-
@simon QA has always had the problem of people treating it as "dumber" than engineering. If management replaces specialized developers with AI, it'd be hard to argue they shouldn't do the same with QA (let alone *increase* the QA department).
-
@AlSweigart maybe QA will finally get the respect it deserves?
I can dream!
-
@simon I don’t buy that at all. It’s true that many technical developer skills transfer well to other programming languages, but I’ve also seen and experienced for decades now that truly understanding an ecosystem (like learning to speak a foreign language at a near-native level, including its social, political, and cultural aspects) is a long and slow process. AI might help, but it won’t make you proficient overnight.
-
@ujay68 there’s fluent, but there’s a level below that where you can build and ship small (not large) projects with confidence despite not knowing the language inside out - that’s where I am now with AI-assisted development for Go and jq and Bash and Dockerfile and AppleScript
-
In my day job, I deal daily with professional developers, unassisted by AI, who manage to ship products that people use, that the company makes money off of - and that can and often do have security holes you can drive a truck through. It's my job to understand the environment, the players, and our own developers enough to sort out the gaps and force corrections that our experienced-in-that-environment developers still missed.
One big, very common failing is "It worked when I tried it - ship it!" as opposed to "this is correct and secure - ship it."
Iterating with an AI gets you "it works!" code, not "it's correct" code. Running without errors is no guarantee the output is correct. Getting correct output once won't guarantee it's consistently so. And secure/compliant? That's a whole other thing. You eschew experts at your own peril.
The hidden cost of not hiring experienced IT folks is you get what you pay for - and will pay the difference in other ways. Fair warning.
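As a hypothetical illustration of that "it works!" vs "it's correct" gap (not code from the thread), here is the kind of thing that passes a quick manual try and still ships a security hole:

```python
import sqlite3

# Hypothetical setup so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # "It worked when I tried it": fine for name="alice"...
    # ...but f-string interpolation into SQL is a classic injection hole.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchone()

print(find_user(conn, "alice"))         # ('alice', 'admin') - looks right, ship it!
print(find_user(conn, "x' OR '1'='1"))  # still returns a row: injected

# The correct-and-secure version parameterizes the query instead:
# conn.execute("SELECT * FROM users WHERE name = ?", (name,))
```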