About as open source as a binary blob without the training data
-
magic_lobster_party replied to Fushuan [he/him]
There’s usually randomness involved in the initial weights and in the order the data is processed.
-
Fushuan [he/him] replied to magic_lobster_party
Not enough to make the results diverge. Randomness is added to avoid getting stuck in local minima during optimization; you should still end up at the same global minimum. Models usually run until their optimization converges.
As stated, if the randomness is big enough that multiple reruns end up with different weights, i.e. optimized toward different minima, the randomization is trash. Anything worth its salt won't use randomization that big.
So, going back to my initial point, we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm used to evaluate weights during training is then used to evaluate the trained weights after training), and the performance should be identical up to a very small rounding error if a rerun uses the same data and parameters.
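A minimal sketch of that rerun check, assuming a PyTorch-style setup (the toy model, data, and seed here are purely illustrative, not from any real release):

```python
import torch
from torch import nn

def train(seed: int) -> nn.Module:
    # Fix every source of randomness: weight init, data generation, ordering.
    torch.manual_seed(seed)
    model = nn.Linear(4, 1)                # toy stand-in for a real model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    data = torch.randn(64, 4)              # toy stand-in for the training set
    target = data.sum(dim=1, keepdim=True)
    for _ in range(200):                   # run until (near) convergence
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), target)
        loss.backward()
        opt.step()
    return model

# Two reruns with the same data, parameters, and seed should agree
# up to a very small rounding error.
m1, m2 = train(seed=42), train(seed=42)
for p1, p2 in zip(m1.parameters(), m2.parameters()):
    assert torch.allclose(p1, p2, atol=1e-6)
```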
-
[email protected] replied to [email protected]
Nah, just a 21st century Luddite.
-
[email protected] replied to [email protected]
It's not like you need specific knowledge of Transformer models and whatnot to argue against LLM bandwagon simps. A basic knowledge of machine learning is fine.
-
[email protected] replied to [email protected]
It's even crazier that Sam Altman and other ML devs said years ago that they'd reached the peak of what current machine learning models are capable of.
But that doesn't mean shit to the marketing departments
-
[email protected] replied to [email protected]
Sorry, that was a PR move from the get-go. Sam Altman doesn't have an altruistic cell in his whole body.
-
[email protected] replied to [email protected]
And you believe you’re portraying that level of competence in these comments?
-
Meta's "open source AI" ad campaign is so frustrating.
-
[email protected] replied to [email protected]
I at least do.
-
[email protected] replied to [email protected]
I mean, if you both think this is overhyped nonsense, then by all means short some Nvidia stock. If you know something the hedge fund teams don’t, why not sell your insider knowledge and become rich?
Or maybe you guys don’t understand it as well as you think. Could be either, I guess.
-
I don't care what Facebook likes or doesn't like. The OSS community is us.
-
[email protected] replied to [email protected]
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Investment goes up.
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
-
Isn't all software just data plus algorithms?
-
[email protected] replied to [email protected]
I have spent a considerable amount of time tinkering with AI models of all sorts.
Personally, I don't know shit. I learned about... cross-entropy loss functions (?) the other day. That was interesting. I don't know a lick of calculus and was still able to grok what was going on thanks to an excellent YouTube video. Anyway, I guess my point is that suddenly everyone is an expert.
Like. I've spent hundreds or possibly thousands of hours learning as much as I can about AI of all sorts (as a hobby) and I still don't know shit.
It's a cool state to be in cuz there's so much out there to learn about.
I'm not entirely sure what my point is here beyond the fact that most people I've seen grandstanding about this stuff online tend to get schooled by an actual expert.
I love it when that happens.
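For anyone curious, here's a tiny illustrative sketch of the cross-entropy loss mentioned above (the probabilities are made up for the example):

```python
import math

def cross_entropy(predicted: list[float], true_class: int) -> float:
    # Cross-entropy penalizes the model by -log(probability assigned to
    # the correct class): confident right answers cost ~0, confident
    # wrong answers cost a lot.
    return -math.log(predicted[true_class])

probs = [0.7, 0.2, 0.1]         # a model's predicted class probabilities
print(cross_entropy(probs, 0))  # ~0.357: most mass on the correct class
print(cross_entropy(probs, 2))  # ~2.303: little mass on the correct class
```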
-
[email protected] replied to [email protected]
I didn't say it's all overhyped nonsense. My only point is that I agree with the opinion stated in the meme, and I don't think people who disagree really understand AI models or what "open source" means.
-
Well, yes, but usually the code is the main deal and the part that's open, and the data is what you do with it. Here, the trained weights seem to be "it", so to speak.
-
[email protected] replied to [email protected]
My career is AI. It is overhyped, and what the tech bros say is nonsense. AI models are not source; they are artifacts, which can be loaded by other source code to run inference, but they themselves are not source, and anyone who says they are doesn't know what code is.
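A minimal sketch of that source-vs-artifact distinction, assuming a PyTorch-style setup (the file name and tiny model are illustrative only):

```python
import torch
from torch import nn

# The *source*: human-readable code that defines and runs the model.
model = nn.Linear(4, 1)

# The *artifact*: an opaque blob of learned weights loaded from disk.
# "model_weights.pt" is an illustrative file name, not a real release.
model.load_state_dict(torch.load("model_weights.pt", map_location="cpu"))

# Inference: source code consuming the artifact.
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 4)))
```

You can read, modify, and audit the code; the weights file you can only run.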
-
[email protected] replied to [email protected]
Because over-hyped nonsense is what the stock market craves... That's how this works. That's how all of this works.
-
[email protected] replied to [email protected]
Would you accept a Smalltalk image as Open Source?
-
[email protected] replied to [email protected]
Ok. How does that apply to DeepSeek?
Your anti-AI talking points are so entangled with anti-Big Tech arguments that now you can’t pivot when it’s a publicly available, communist-developed, energy-efficient AI.