There’s a clear and obvious difference between models like Qwen, DeepSeek, and Llama on the one hand, and models like ChatGPT, Claude, and Gemini on the other; and the well-established, widely understood phrase for this difference is “open source”, contrasted with “proprietary” or “closed source”, just as it is for hardware, fonts,[1] and military intelligence. If you like, think of it as a kind of fossilization of the phrase, where the “source” part has ceased to be more than an etymological curiosity; you can certainly dislike this phenomenon, but – and I say this with regret, since I’m far more prescriptivist than the next guy – trying to change it is probably pissing upwind.
The restrictions on usage are a better argument: among the models with weights available, some are clearly more “open source”[2] than others, and I’d even agree that Llama’s restriction on licensees with more than 700 million monthly active users means that, while for most practical purposes it’s open source, it’s technically only “source-available”.
[1] It’s not obvious to what extent fonts count as programs, and their “source code” is usually nothing more than the glyphs, which can be read out from proprietary fonts trivially. Maybe there’s a bit of obfuscation one could perform on the feature file?
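To make the “read out trivially” part concrete, here’s a minimal sketch using the fontTools library (pip install fonttools); the filename SomeFont.ttf is a hypothetical stand-in for any compiled font, proprietary or otherwise:

```python
# Dump every glyph outline from a compiled font as plain drawing
# commands (moveTo/lineTo/curveTo plus control points). No special
# access is needed beyond the font file itself.
from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

font = TTFont("SomeFont.ttf")  # hypothetical path to any .ttf/.otf
glyph_set = font.getGlyphSet()

for name in font.getGlyphOrder()[:10]:  # first ten glyphs, for brevity
    pen = RecordingPen()
    glyph_set[name].draw(pen)  # replays the outline into the pen
    print(name, pen.value)     # list of (operator, points) tuples
```

In other words, the shipped binary already contains essentially everything a “source” release of the glyphs would, which is why the open/closed distinction for fonts is mostly about licensing rather than withheld source.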
[2] I agree it’s useful vocabulary, and reducing it to a binary makes it less so.
The usage I’m objecting to started, as far as I can tell, about two years ago with Llama 2. The term “open weights”, which is often used interchangeably, is a much better fit.