About as open source as a binary blob without the training data
-
Just wanted to thank you both for this discourse! As somebody who's interested in AI but totally ignorant of how the hell it works, I found this conversation very helpful! I would say you both have good points. Happy days to you both!
-
[email protected] replied to KillingTimeItself
But it is factually inaccurate. We don't call binaries open-source, we don't even call visible-source open-source. An AI model is an artifact just like a binary is.
-
[email protected] replied to [email protected]
I don't understand your objections. Even if the amount of data is rather big, it doesn't change that this data is part of the source, and leaving it out makes the whole project non-open-source.
> Under that standard of scrutiny not only could there never be an LLM that would qualify, but projects that are considered open source would not be. Thus making the distinction meaningless.
What? No? Open-source projects literally do meet this standard.
-
[email protected] replied to [email protected]
On the contrary. What they open sourced was just a small part of the project. What they did not open source is what makes the AI tick. Having less than one percent of a project open sourced does not make it an "Open Source" project.
-
[email protected] replied to KillingTimeItself
That "specific block of data" is more than 99% of such a project. Hardly insignificant.
-
Fushuan [he/him] replied to [email protected]
The engine is open source, the model is not.
The emulator is open source, the games it can run are not.
I don't see how it's so hard to understand.
They are saying that the model the engine runs is open source because they released the model. That's like saying that a game is open source because I released an emulator and the executable file. It's just not true.
-
Fushuan [he/him] replied to [email protected]
What most people understand as DeepSeek is the app that uses their trained model, not the running or training engines.
This post mentions open source, not open source code, which is a big distinction. The source of a trained model is partly the training engine and, to a much larger extent, the input data. We only have access to a fraction of that "source". So the service isn't open source.
Just to make clear, no LLM service is open source currently.
-
Fushuan [he/him] replied to KillingTimeItself
The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.
When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.
As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching field LLMs belong to) that I have read in the past. Both code and training data are provided.
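To make that concrete, here is a rough sketch of what a reproducible open-source ML pipeline looks like: the exact public dataset, the training code, and the hyperparameters are all published, so anyone can rerun the training and audit the result. The dataset and model names below are generic placeholders for illustration, not DeepSeek's actual pipeline.

```python
# Sketch of a reproducible open-source training setup (placeholder names,
# not DeepSeek's pipeline): dataset, code, and hyperparameters are all
# published, so a third party can rerun training and audit the result.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# A public, versioned corpus anyone can download -- this is the piece
# that is missing from the DeepSeek release.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
# trainer.train()  # left commented out: actually running this takes real compute
```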
-
Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.
-
Fushuan [he/him] replied to [email protected]
The source OP is referring to is the training data they used to compute those weights. Meaning, petabytes of text. Without that we don't know which content they used to train the model.
The running/training engines might be open source, the pretrained model isn't and claiming otherwise is wrong.
Nothing wrong with it being this way, most commercial models operate the same way obviously. Just don't claim that the model itself is open source, because a big part of open source is that people can reproduce your training to verify that there's no foul play in the input data. We literally can't. That's it.
-
[email protected] replied to [email protected]
It's not just the weights though, is it? You can download the training data they used, and run your own instance of the model completely separate from their servers.
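For the "run your own instance" part, a minimal local-inference sketch would look something like this. The Hugging Face repo id is an assumption here (swap in whichever released checkpoint you mean), and note that what this downloads are the weights, not the training corpus:

```python
# Minimal local-inference sketch: pulls the released weights and runs them
# entirely on your own hardware, with no DeepSeek servers involved.
# The repo id is an assumption -- substitute the checkpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain the difference between open weights and open source."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```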
-
The runner is open source, the model is not
The service uses both, so calling their service open source gives a false impression to the 99.99% of users who don't know better.
-
[email protected] replied to [email protected]
Did "they" publish the training data? And the hyperparameters?
-
[email protected] replied to KillingTimeItself
Is it common? Many fields have standard, open datasets. That's not the case here, and this data is the most important part of training an LLM.
-
Fushuan [he/him] replied to [email protected]
The training data is NOT right there. If I can't reproduce the results with the given data, the model is NOT open source.
-
magic_lobster_party replied to Fushuan [he/him]
The model is as far as I know open, even for commercial use. This is in stark contrast with Meta’s models, which have (or had?) a bespoke community license restricting commercial use.
Or is there anything that can’t be done with the DeepSeek model that I’m unaware of?
-
Fushuan [he/him] replied to magic_lobster_party
The model is open, it's not open source!
How is it so hard to understand? The complete source of the model is not open. It's not a hard concept.
Sorry if I'm coming off as rude, but I'm getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.
-
[email protected] replied to [email protected]
There are lots of problems with the new lingo. We need to come up with new words.
How about “Open Weightings”?
-
[email protected] replied to [email protected]
Open source will eventually surpass all closed-source software someday, no matter how many billions of dollars are invested in it.
-
[email protected] replied to [email protected]
I mean, I downloaded it from the repo.