There's an utterly ridiculous "study" out from Stanford about "ghost engineers" which are reportedly engineers who do nothing at companies.
-
@thisismissem oh my God ........... This is like catnip to me rn not gonna lie
-
@samir yeah, exactly. I've been fired before for not writing code fast enough (and then subsequently left tech for a year). That was on a mythical-man-month project with little to no design or feature documentation, which I was expected to completely rewrite because the other engineers couldn't make heads or tails of it. I got 80% of the way towards a working application, too.
-
Christin White replied to Emelia
@thisismissem what a nightmare.
-
@thisismissem the summaries I see (assuming I'm looking at the right one but pretty sure I am) say this is based on private repos?? In what world is someone getting what they claim is 50k plus engineers to opt into this, I have a lot of questions
-
Michael Fisher replied to Emelia
@thisismissem @jaredwhite So an AI simulated panel of experts is being used to criticize the work of actual experts.
We are so f**ked.
-
@grimalkina yeah, it sounds dubious at best on that alone, allowing an outsider and a panel of ten "experts" to review all the code by 50k engineers? It sounds incredibly unlikely, given NDAs
-
@thisismissem did they simulate this panel with an LLM? I feel like I know the answer.
-
Emelia replied to Michael Fisher
@mjf_pro @jaredwhite it doesn't quite specify what "simulate a panel of ten experts" actually means, but I guess, yes, it could mean this.
-
@kissane based on the author's previous paper, I'm gunna guess yes: https://www.gsb.stanford.edu/faculty-research/working-papers/predicting-expert-evaluations-software-code-reviews
-
@thisismissem gooooo it's all goo
-
Emelia replied to Emelia
Ha, surprise surprise: this isn't actually a "pre-print" at all. It reuses data from another pre-print by the same author(s), "Predicting Expert Evaluations in Software Code Reviews", and even in that pre-print the data seems woefully flawed.
-
Rocky Lhotka replied to Emelia
@thisismissem Useless then. These days, for better or worse, a whole lot of my code is written by my IDE via automation or #Copilot. Sure, I *start* typing a line, and the rest is done for me.
The *vast* majority of my time is spent interacting with users, developers, thinking and discussing new features, bug fixes, and the like.
Coding is a small but important part of being a developer. If all someone measures is code output, though, that's useless.
-
James Smith replied to Emelia
@thisismissem "simulated" as in... they got a bunch of LLMs to read it and make value judgements about a thing that's a bad metric in the first place?
-
Emelia replied to James Smith
@Floppy probably. I can't actually find the claimed pre-print, but they have another pre-print which conspicuously uses the same sample sizes, and that data wasn't collected properly, from what I can tell.
-
Michael Fisher replied to Emelia
@thisismissem @jaredwhite I think it means "figure out a way to blame the engineers for everything that goes wrong, as cheaply as possible!"
-
@thisismissem A colleague of mine told us how their software engineering manager measured productivity by counting semicolons written per week back in the 90s.
It was only a matter of time until they started using semicolons to frame their comments.
-
@thisismissem Where can I apply to be a ghost engineer?
-
@Jeff @drahardja probably at: https://careers.ghost.org
(Joking aside on that term, the Ghost team do some fantastic work)
-
I contracted a helpdesk gig where I made myself the report-back liaison. The client didn't renew me, but the contractor wouldn't leave me alone for a year, and I still talk to them every so often about hospital/clinic placements.