About as open source as a binary blob without the training data
-
[email protected] replied to [email protected]
It's not just the weights though, is it? You can download the training data they used and run your own instance of the model, completely separate from their servers.
-
The runner is open source; the model is not.
The service uses both, so calling their service open source gives a false impression to 99.99% of users who don't know better.
-
[email protected] replied to [email protected]
Did "they" publish the training data? And the hyperparameters?
-
[email protected] replied to KillingTimeItself
Is it common? Many fields have standard, open datasets. That's not the case here, and this data is the most important part of training an LLM.
-
Fushuan [he/him] replied to [email protected]
The training data is NOT right there. If I can't reproduce the results with the given data, the model is NOT open source.
-
magic_lobster_party replied to Fushuan [he/him]
As far as I know, the model is open, even for commercial use. This is in stark contrast with Meta’s models, which have (or had?) a bespoke community license restricting commercial use.
Or is there anything that can’t be done with the DeepSeek model that I’m unaware of?
-
Fushuan [he/him] replied to magic_lobster_party
The model is open, but it's not open source!
How is it so hard to understand? The complete source of the model is not open. It's not a hard concept.
Sorry if I'm coming off as rude, but I'm getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.
-
[email protected] replied to [email protected]
There are lots of problems with the new lingo. We need to come up with new words.
How about “Open Weightings”?
-
[email protected] replied to [email protected]
Open source will eventually surpass all closed-source software some day, no matter how many billions of dollars are invested in the latter.
-
[email protected] replied to [email protected]
I mean, I downloaded it from the repo.
-
[email protected] replied to [email protected]
Just look at Blender vs. Maya, for example.
-
[email protected] replied to [email protected]
Never have I used open source software that has achieved that, or even come close. Usually it's opinionated (you need to do it this way, in this exact order, because that's how we coded it; no, you cannot do the same thing but select from the back), lacks features, and breaks. Especially CAD: compare SolidWorks to FreeCAD, for instance, where in FreeCAD any change to previous operations just breaks everything. Modelling software too: Blender can't do half the things 3ds Max can.
-
[email protected] replied to [email protected]
The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyrighted material in it.
They published the weights AND their training methods, which is about as open as it gets.
-
[email protected] replied to [email protected]
They could disclose how they sourced the training data, what it is, and how you could source it yourself. Also, did they publish their hyperparameters?
They could just not call it open source if they can't open source it.
-
[email protected] replied to [email protected]
You downloaded the weights. That's something different.
-
There were efforts. Facebook didn't like those.
-
That sounds like a segment on “My 600lb Life”
-
[email protected] replied to [email protected]
I reckon C++ > Delphi
-
[email protected] replied to [email protected]
You can do sneaky things with weights that are virtually undetectable.
-
magic_lobster_party replied to Fushuan [he/him]
OK, I understand now why people are upset. There's a disagreement over terminology.
The source code for the model is open source. It's defined in PyTorch, and that code is available under the MIT license. Anyone can download it and do whatever they want with it.
The weights for the model are open, but they're not open source, as they're not source code (or an executable binary, for that matter). No one is arguing that the model weights are open source, but there does seem to be an argument against calling the model itself open source.
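To make the distinction concrete, here's a minimal sketch in PyTorch (the toy model and file name are made up for illustration, not taken from any actual DeepSeek code): the architecture is source code you can read and modify, while the weights are just a blob of tensors you load into it.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # Stand-in for an open-source model definition: this part IS source code.
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.proj = nn.Linear(dim, vocab)

    def forward(self, tokens):
        return self.proj(self.embed(tokens))

# Pretend this file came from a model hub: it's just tensors, not code.
torch.save(TinyLM().state_dict(), "weights.bin")

model = TinyLM()                                  # open source: readable, modifiable
model.load_state_dict(torch.load("weights.bin"))  # open weights: you can run them, but the data
                                                  # and process that produced them stay hidden
```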
And even if they provided the source code for the training script (and all its data), it's unlikely anyone would reproduce the same model weights due to the randomness involved. Training model weights is not like compiling an executable: you'll get different results every time.
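As a toy illustration of that last point (made-up data and a tiny network, purely to show the effect): two runs that share the same data and the same code, differing only in the random seed used for initialization, still end up with different weight tensors.

```python
import torch
import torch.nn as nn

# Fixed toy "training data", shared by both runs.
torch.manual_seed(42)
data = torch.randn(64, 4)
target = data.sum(dim=1, keepdim=True)

def train_once(seed):
    torch.manual_seed(seed)  # only the random initialization differs between runs
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        loss = ((model(data) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model[0].weight.detach()

print(train_once(0)[0])
print(train_once(1)[0])  # same data, same code, different final weights
```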