About as open source as a binary blob without the training data
-
[email protected] replied to [email protected] last edited by
I mean, if you both think this is overhyped nonsense, then by all means short some Nvidia stock. If you know something the hedge fund teams don't, why not sell your insider knowledge and become rich?
Or maybe you guys don’t understand it as well as you think. Could be either, I guess.
-
I don't care what Facebook likes or doesn't like. The OSS community is us.
-
[email protected] replied to [email protected] last edited by
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Investment goes up.
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
-
Isn't all software just data plus algorithms?
-
[email protected] replied to [email protected] last edited by
I have spent a very considerable amount of time tinkering with AI models of all sorts.
Personally, I don't know shit. I learned about... zero-entropy loss functions (?) the other day. That was interesting. I don't know a lick of calculus and was still able to grok what was going on thanks to a very excellent YouTube video. Anyway, I guess my point is that suddenly everyone is an expert.
Like, I've spent hundreds or possibly thousands of hours learning as much as I can about AI of all sorts (as a hobby), and I still don't know shit.
It's a cool state to be in, cuz there's so much out there to learn about.
I'm not entirely sure what my point is here, beyond the fact that most people I've seen grandstanding about this stuff online tend to get schooled by an actual expert.
I love it when that happens.
-
[email protected] replied to [email protected] last edited by
I didn't say it's all overhyped nonsense; my only point is that I agree with the opinion stated in the meme, and I don't think people who disagree really understand AI models or what "open source" means.
-
Well, yes, but usually the code is the main deal and the part that's open, while the data is what you do with it. Here, the trained weights seem to be "it", so to speak.
-
[email protected] replied to [email protected] last edited by
My career is AI. It is overhyped, and what the tech bros say is nonsense. AI models are not source; they are artifacts that source code consumes to run inference. Anyone who says they are source doesn't know what code is.
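For what it's worth, here's a minimal sketch of that split (my illustration, nothing from any real model; the file name and shapes are made up): the weights are inert data on disk, and only the inference routine that consumes them is source code.

```python
import numpy as np

# Hypothetical weights file: the "model" is just an artifact, a bag
# of numbers. Real checkpoints (.safetensors, .bin) are the same idea
# at billions of parameters.
np.savez("model_weights.npz", W=np.random.randn(4, 8), b=np.zeros(4))
weights = np.load("model_weights.npz")
W, b = weights["W"], weights["b"]

def infer(x):
    # This function is source code. W and b are not source; they are
    # data the code happens to consume.
    return np.maximum(W @ x + b, 0.0)  # one ReLU layer

print(infer(np.ones(8)))
```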
-
[email protected] replied to [email protected] last edited by
Because overhyped nonsense is what the stock market craves... That's how this works. That's how all of this works.
-
[email protected] replied to [email protected] last edited by
Would you accept a Smalltalk image as Open Source?
-
[email protected] replied to [email protected] last edited by
Ok. How does that apply to DeepSeek?
Your anti-AI talking points are so entangled with anti-Big Tech arguments that now you can't pivot when it's a publicly available, communist-developed, energy-efficient AI.
-
[email protected] replied to [email protected] last edited by
That... doesn't align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, so sorry if my response is a bit scattered): throwing more data at a problem always improves it more than changing the method does, and the method can only be simplified by adding more data. Outside of some neat tricks that modern deep learning has decided are hogwash and "classical", at least, but most of those don't scale to the sizes being looked at.
Also, datasets inherently impose bias on networks, and it's easier to create adversarial examples that fool two networks trained on the same data than to fool the same network freshly trained twice on different data.
Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that's more like the silver standard, simply because most modern state-of-the-art models differ so minutely from each other in performance these days.
Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and it should be the standard again once they leave us alone.
-
... Statistical engines are older than personal computers; the first statistical package was developed in 1957. And AI professionals would have called these trained models. The interpreter is code; the weights are not. We have had terms for these things for ages.
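A toy illustration of that old distinction (my sketch, not tied to any particular package): the fitting and prediction routines are code, while the fitted coefficients are a trained model, i.e. data.

```python
import numpy as np

# Fitting routine: this is source code, the "statistical engine".
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.2])
A = np.vstack([x, np.ones_like(x)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# The fitted coefficients are the "trained model": plain numbers,
# inert without an interpreter to apply them.
np.save("fit_params.npy", np.array([slope, intercept]))

# Applying the model is again code; the parameters stay data.
m, c = np.load("fit_params.npy")
print(m * 4.0 + c)  # predict at x = 4
```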
-
[email protected] replied to [email protected] last edited by
Or about as open source as a human without all the previous people's work we learned from without paying them, a.k.a. normal life.
-
[email protected] replied to [email protected] last edited by
Actually, no. As someone who prefers academic work, I very heavily prefer DeepSeek to OpenAI. But neither is open. They have open weights and open-source interpreters, but datasets need to be documented too. If it's not reproducible, it's not open source, at least in my eyes. And without the training data, or details on how to collect it, it isn't reproducible.
You're right, I don't like big tech. I want to do research without being accused of trying to destroy the world again.
And how is DeepSeek overhyped? It's an LLM. LLMs cannot reason, but they're very good at statistically likely language generation; they can sound enough like their training data to gaslight, but they can't actually develop anything new. They're great tools, but the application is wrong. Multi-domain systems that use expert systems with LLM front ends to produce easy-to-interpret results are a much better way to do things, and DeepSeek may help people who build expert systems (AI-based or not) make better front ends. That is in fact huge. But it's not the silver bullet the tech bros and pop-sci mags think it is.
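A rough sketch of the architecture being described (entirely hypothetical; `llm_complete` is a stand-in for whatever LLM API you'd actually call): the expert system does the auditable reasoning, and the LLM only turns its verdict into readable prose.

```python
def expert_system(symptoms: set) -> str:
    # Deterministic, auditable domain rules do the actual reasoning.
    if {"fever", "cough"} <= symptoms:
        return "likely respiratory infection"
    if "rash" in symptoms:
        return "possible allergic reaction"
    return "no rule matched; refer to a specialist"

def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here purely to
    # rephrase the structured verdict, not to reason about it.
    return f"[LLM-phrased] {prompt}"

verdict = expert_system({"fever", "cough"})
print(llm_complete(f"Explain to the user: {verdict}"))
```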
-
[email protected] replied to [email protected] last edited by
But also, you were talking about Nvidia in the comment I responded to, not DeepSeek, so your rebuttal is a non sequitur...
-
[email protected] replied to [email protected] last edited by
Yes please, let's use this term, and reserve "open source" for its existing definition in the academic ML setting: weights, methods, and training data. These models don't readily fit existing terminology for structural and logistical reasons, but when someone says "it's got open weights", I know exactly what set of licenses and implications it may have without further explanation.
-
[email protected] replied to [email protected] last edited by
LoL. Love it when bots can't follow the conversation and accidentally out themselves.
-
[email protected] replied to [email protected] last edited by
Weights available?
-
[email protected] replied to [email protected] last edited by
China's new and cheaper magic beans shock America's unprepared magic bean salesmen
American magic bean companies like Beanco, The Boston Bean Company, and Nvidia have already shed hundreds of billions of dollars in stock value.
The Beaverton (www.thebeaverton.com)