About as open source as a binary blob without the training data
-
Someone designs a compiler, makes it open source. Make an open runtime for it. 'Obtain' some source code with unclear license. Compiles it with the compiler and releases the compiled byte code that can run with the runtime on free OS. Do you call the program open source? Definitely it is more open than something that requires proprietary inside use only compiler and closed runtine and sometimes you can't access even the binary; it runs on their servers. It depends on perspective.
ps: the compiler takes ages to run and costs millions in hardware.
edit: typo
-
[email protected] replied to [email protected] last edited by
Thank you for taking the time to write this. Making the results reproducible and possible to improve on is important.
-
[email protected] replied to [email protected] last edited by
Thank you for the explanation. I didn’t know about the ‘preferred format’ definition or how AI models are changed at all.
-
You'd be wrong. Open source has a commonly accepted definition and a CC licensed PNG does not fall under it. It's copyleft, yes, but not open source.
I do agree that model weights are data and can be given a license, including CC0. There might be some argument about how one can assign a license to weights derived from copyrighted works, but I won't get into that right now. I wouldn't call even the most liberally licensed model weights open-source though.
-
magic_lobster_party replied to [email protected] last edited by
I think a more appropriate analogy is if you make an open source game. With the game you have made textures, because what is a game without textured surfaces? You include the binary jpeg images along with the source code.
You’ve made the textures with Photoshop, which is a closed source application. The textures also feature elements of stock photos. You don’t provide the original stock photos.
Anyone playing the game is free to replace the textures with their own. The game will have a different feel, but it’s still a playable game. Anyone is also free to modify the existing textures.
Would you consider this game closed source?
-
[email protected] replied to [email protected] last edited by
Eh, it seems like it fits to me. We casually refer to all manner of data as "open source" even if we lack the ability to specifically recreate it. It might be technically more accurate to say "open data" but we usually don't, so I can't be too mad at these folks for also not.
There are huge swaths of USGS data shared as open data that I absolutely cannot ever replicate.
If we're specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source, since it distinctly lacks any form of executable content.
-
[email protected] replied to [email protected] last edited by
So, where's the source, then?
-
[email protected] replied to [email protected] last edited by
It's worth noting that OpenR1 have themselves said that DeepSeek didn't release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn't be able to replicate the models without re-discovering what they did.
OSI specifically makes a carve-out that allows models to be considered "open source" under their open source AI definition without providing the training data, so when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and details about training data curation so that a comparable dataset can be compiled for replicating the results.
-
Let's transfer your bullshirt take to the kernel, shall we?
The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.
-
[email protected] replied to [email protected] last edited by
Seems kinda reductive about what makes it different from most other LLMs
The other LLMs aren't open source, either.
isn’t that just trained from the other AI?
Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.
And the way it uses that data, afaik, is open and editable, and the license to use it is open.
From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.
-
[email protected] replied to [email protected] last edited by
You could train it yourself too.
How, without information on the dataset and the training code?
-
[email protected] replied to [email protected] last edited by
If we're specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source
lol, are you claiming data isn't reproducible? XD
-
They published the source code needed to run the model.
Yeah, but not to train it
anyone can download the model, run it locally, and further build on it.
Yeah, it's about as open source as binary blobs.
Training from scratch costs millions.
So what? You can still glean something if you know the dataset on which the model was trained.
If software is hard to compile, can you keep the source code closed and still call software "open source"?
-
[email protected] replied to magic_lobster_party last edited by
I'm going to take your point to the extreme.
It's only open source if the camera that took the picture that is used in the stock image that was used to create the texture is open source.
You used a fully mechanical camera and chemical flash powder? Better publish that design patent and include the chemistry of the flash powder!
-
[email protected] replied to [email protected] last edited by
-
Well that’s the argument.
-
AI condensing AI is what is being talked about here. From my understanding, DeepSeek is two parts: they start with known datasets in use, and the two parts bounce ideas against each other and calculate fitness. So degrading recursive results are being directly tackled here. But training sets are tokenized gathered data. The gathering of data sets is a rights issue, but that is not part of the conversation here.
-
It could be that I don’t have a complete concept of what open source is, but from looking into it, all the boxes are checked. The data set is not what is different; it’s just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.
-
[email protected] replied to [email protected] last edited by
The point of open source is access to reproducibility. The weights are the end product (like a binary blob); you need to supply the way the end product is created for it to be open source.
-
[email protected] replied to [email protected] last edited by
So I am learning as much as I can here, so bear with me. But it accepts tokenized data and structures it via a transformer as a JSON file or some such. The weights are a separate binary file that is used to, well, modify the tokenized data to generate outcomes. As long as you used a compatible tokenization structure and weights structure, you could create a new training set. But that can be done with any LLM. You can’t pull the data from this, just as you can’t make wheat from dissecting bread. But they provide the tools to set your own data, and the way the LLM handles that data is novel, due to being hamstrung by US sanctions. “Necessity is the mother of invention” and all that. Running comparable AIs on inferior hardware and a much smaller budget is what makes this one stand out, not the training data.
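The split described above (tokenizer on one side, a separate weights file on the other) can be sketched in a few lines. This is a toy illustration only, not DeepSeek's actual format; the vocabulary, dimensions, and names are all invented for the example:

```python
import numpy as np

# Toy illustration: a tokenizer maps text to integer IDs, and a separate
# weights array (standing in for a released checkpoint, the "binary blob")
# maps those IDs to vectors. The two artifacts ship independently: swap in
# different weights and the same tokenizer pipeline produces different outputs.
vocab = {"open": 0, "source": 1, "model": 2, "<unk>": 3}

def tokenize(text):
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(vocab), 4))  # pretend released checkpoint

ids = tokenize("open source model")
embeddings = weights[ids]   # the weights applied to the tokenized data
print(ids)                  # [0, 1, 2]
print(embeddings.shape)     # (3, 4)
```

The point the sketch makes is that nothing in the pipeline above reveals the data the weights were trained on, which is the bread-from-wheat asymmetry described in the comment.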
-
Training code created by the community always pops up shortly after release. It has happened for every major model so far. Additionally, you have never needed the original training dataset to continue training a model.
-
So, Ocarina of Time is considered open source now, since it's been decompiled by the community, or what?
Community effort and the ability to build on top of stuff doesn't make anything open source.
Also: initial training data is important.
-
[email protected] replied to [email protected] last edited by
So it’s not how it tokenized the data you are looking for, it’s not how the weights are applied you want, and it’s not how it functions to structure the output you want, because these are all open… it’s the entirety of the bulk unfiltered data you want. Which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn’t touch at all on how this LLM is different from other LLMs? This would be, as I understand it, like saying that an open source game emulator can’t be open source because Nintendo games are proprietary? I don’t consider the training data to be the LLM. I consider the system that manipulates that data to be the LLM. Is that where the difference in opinion is?