There's an utterly ridiculous "study" out from Stanford about "ghost engineers", reportedly engineers who do nothing at their companies.
-
Oh, it also goes on to claim, with these absolutely terrible "metrics", that people who work from home are less productive than people who work from offices... gotta love utterly flawed studies that may actually affect people's lives.
-
@thisismissem oh my God ........... This is like catnip to me rn not gonna lie
-
@samir yeah, exactly. I've been fired before for not writing code fast enough (and then subsequently left tech for a year). That was on a Mythical Man-Month project with little to no design or feature documentation, which I was expected to completely rewrite because the other engineers couldn't make heads or tails of it. I got 80% of the way towards a working application, too.
-
@thisismissem what a nightmare.
-
@thisismissem the summaries I see (assuming I'm looking at the right one, but pretty sure I am) say this is based on private repos?? In what world is someone getting what they claim is 50k-plus engineers to opt into this? I have a lot of questions.
-
@thisismissem @jaredwhite So an AI simulated panel of experts is being used to criticize the work of actual experts.
We are so f**ked.
-
@grimalkina yeah, it sounds dubious at best on that alone: allowing an outsider and a panel of ten "experts" to review all the code by 50k engineers? It sounds incredibly unlikely, given NDAs.
-
@thisismissem did they simulate this panel with an LLM? I feel like I know the answer.
-
@mjf_pro @jaredwhite it doesn't quite specify what "simulate a panel of ten experts" actually means, but I guess, yes, it could mean this.
-
@kissane based on the author's previous paper, I'm gunna guess yes: https://www.gsb.stanford.edu/faculty-research/working-papers/predicting-expert-evaluations-software-code-reviews
-
@thisismissem gooooo it's all goo
-
Ha, surprise surprise, this isn't actually a "pre-print" at all, but uses data from another pre-print by the same author(s). And even in that pre-print, Predicting Expert Evaluations in Software Code Reviews, the data seems woefully flawed.
-
Rocky Lhotka replied to Emelia:
@thisismissem Useless then. These days, for better or worse, a whole lot of my code is written by my IDE via automation or #Copilot. Sure, I *start* typing a line, and the rest is done for me.
The *vast* majority of my time is spent interacting with users, developers, thinking and discussing new features, bug fixes, and the like.
Coding is a small but important part of being a developer. If all someone measures is code output, though, that's useless.
-
James Smith replied to Emelia:
@thisismissem "simulated" as in... they got a bunch of LLMs to read it and make value judgements about a thing that's a bad metric in the first place?
-
Emelia replied to James Smith:
@Floppy probably. I can't actually find the claimed pre-print, but they have another pre-print which conspicuously uses the same sample sizes for its data, data which wasn't collected properly from what I can tell.
-
@thisismissem @jaredwhite I think it means βfigure out a way to blame the engineers for everything that goes wrong, as cheaply as possible!β
-
@thisismissem A colleague of mine told us how their software engineering manager measured productivity by counting semicolons written per week back in the 90s.
It was only a matter of time until the engineers started using semicolons to frame their comments.
-
@thisismissem Where can I apply to be a ghost engineer?
-
@Jeff @drahardja probably at: https://careers.ghost.org
(Joking aside on that term, the Ghost team do some fantastic work)