AI needs to stop
-
Yiff in hell furf-
Wait, what
-
I would counter that non-optimal washing, where the machine simply does what I ask via primitive buttons and dials, is perfectly acceptable, and actually preferable
-
Art seems like a side hustle or a hobby, not a main job. I can't think of a faster way to hate your own passion.
I wanted to work as a programmer, but getting a degree taught me I'm too poor to do it as a job: I'd need six more papers, and to have known the language for longer than it has existed, to even get an interview. Having fun building a stupid side project to bother my friends, though.
-
[email protected] replied to [email protected] last edited by
But the poor shareholders!
-
[email protected] replied to [email protected] last edited by
Please tell this to my simulation group members. I told them, but they won't listen.
-
Exactly. I can code and make a simple game app. If it gets some downloads, maybe pulls in a little money, I'm happy. But I'm not gonna produce endless mtx and ad-infested shovelware to make shareholders and investors happy. I also own a 3D printer. I've done a few projects with it and I was happy to do them, I've even taken commissions to model and print some things, but it's not my main job as there's no way I could afford to sit at home and just print things out all month.
-
AutistoMephisto replied to [email protected] last edited by
I mean, they have Alexa-connected refrigerators with a camera inside that sees what you put in and how much, so it can either let you know when you're running low on something or offer to order more of that item before you run out.
-
[email protected] replied to [email protected] last edited by
NFTs didn't fail; it was just the idiotic selling of jpgs for obscene amounts that crashed (most of that was likely money laundering anyway). The tech still has a use.
Wouldn't exactly call crypto a failure, either, when we're in the midst of another bull run.
-
My only side-hustle-worthy skill is fixing computers, and I'd rather swallow a hot soldering iron than meet a stranger and get money involved.
-
[email protected] replied to [email protected] last edited by
It's the fucking touch screens again.
-
[email protected] replied to [email protected] last edited by
It’s probably just a sticker they put over the word “smart”
-
That's a different argument entirely from "no possible benefit", though
-
[email protected] replied to [email protected] last edited by
Ah yes, please fire up the washing machine at 3am and scare the fuck out of everybody. And then let the clothes sit in there wet so that when you wake up, they smell like mildew
-
TʜᴇʀᴀᴘʏGⒶʀʏ⁽ᵗʰᵉʸ‘ᵗʰᵉᵐ⁾ replied to [email protected] last edited by
I get what you mean, but people might read this and think Perplexity is an ethical company.
https://opendatascience.com/perplexity-ai-ceo-offers-to-step-in-amidst-nyt-tech-workers-strike/
-
I was referring mostly to security conferences. Last year almost every vendor was selling API security products; now it's all AI-infused products.
-
Did you go to any appsec conferences last year? It was all API security. This year it was all AI-leveraged security products.
-
[email protected] replied to [email protected] last edited by
At this point, I'm fully ready to make "thou shalt not make a machine in the likeness of a human mind" international law and a religious commandment. At least that way, we can burn all AI grifters as witches!
-
[email protected] replied to [email protected] last edited by
Get into self hosting.
Smart everything without subscriptions. And with you in control.
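For example, one common starting point is running Home Assistant in a container (sketch assuming Docker is installed; the config path and timezone are placeholders you'd swap for your own):

```shell
# Self-hosted smart home hub: Home Assistant in Docker.
# /PATH_TO_YOUR_CONFIG is a placeholder for a local directory
# where the configuration will be stored.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  -e TZ=Etc/UTC \
  -v /PATH_TO_YOUR_CONFIG:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```

After that, the web UI is served locally on port 8123, with no cloud account or subscription involved.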
-
[email protected] replied to [email protected] last edited by
Relevant ad.
-
[email protected] replied to [email protected] last edited by
Yes, you're absolutely right. The first StarCoder model demonstrated that it is in fact possible to train a useful LLM exclusively on permissively licensed material, contrary to OpenAI's claims.

Unfortunately, the main concerns of the leading voices in AI ethics at the time this stuff began to really heat up were (a) "alignment" with human values / takeover of super-intelligent AI, and (b) bias against certain groups of humans (which I characterize as differential alignment, i.e. with some humans but not others). The latter group has since published some work criticizing genAI from a copyright and data-dignity standpoint, but their absolute position against the technology in general leaves no room for revisiting the premise that use of non-permissively licensed work is inevitable.

(Incidentally, they also hate classification AI as a whole, thus smearing AI detection technology which could help on all fronts of this battle. Here again it's obviously a matter of responsible deployment: the kind of classification AI that UHC deployed to reject valid health insurance claims, or the target-selection AI that the IDF has used, are examples of obviously unethical applications in which copyright infringement would be irrelevant.)