Maybe you have some information that I don’t have about the labs and the buy-in? You think this applies to OpenAI and not just Anthropic?
But as far as open source goes, I’m not sure. DeepSeek? Meta? Mistral? xAI? Some big labs are just producing open-source models. DeepSeek is maybe only 6 months behind. Is that enough headway?
It seems to me that the tipping point for many people (I don’t know about you) on open source is whether open-source models are better than closed-source ones, so it’s a relative tipping point in terms of capabilities. But I think we should be thinking about absolute capabilities. For example, what about bioterrorism? At some point, that capability is going to be widely accessible. Maybe the community only cares about x-risks, but personally I don’t want to die either.
Is there a good explanation online of why I shouldn’t be afraid of open source?
As far as open source goes, the quick argument is that once AI becomes sufficiently powerful, it’s unlikely that the incentives (including government incentives) will favor open sourcing it. This isn’t totally obvious though, and it doesn’t rule out catastrophic bioterrorism (more like COVID scale than extinction scale) prior to AI powerful enough to substantially accelerate R&D across many sectors (including bio). It also doesn’t rule out powerful AI being open sourced years after it is first created (though the world might be radically transformed by that point anyway). I don’t have much of an inside view on this, but reasonable people I talk to are skeptical that open source is a very big deal (in >20% of worlds), at least from an x-risk perspective. (This seems very sensitive to questions about government response, how much is driven by ideology, and the extent to which people end up being compelled (rightly or not) by “commoditize your complement” (and ecosystem) economic arguments.)
Open source seems good on current margins, at least to the extent it doesn’t leak algorithmic advances / similar.
I would be happy to discuss this in a dialogue. It seems like an important topic, and I’m really unsure about many of the parameters here.