Coming from a very technical field, though without an AI or AI-safety background, I’ll say: so much of this AI-safety work and research seems like self-serving nonsense. It just so happens that all the leading AI companies, and the many employees with huge equity stakes in them, agree that open-source AI == doom and death on a massive scale?
The internet also helps bioterrorists communicate and learn how to do bad things. Imagine if, 50 years ago, the largest internet companies of the day had pushed to make internet protocols closed and walled off because of terrorism. (Well, they did push for walled gardens, for different reasons, and today that push looks like anticompetitive nonsense.)
Encryption and encrypted messaging apps also help bad actors massively: you can communicate over long distances with no risk of spying or interception. And governments, the US government in particular, tried really hard to suppress encryption algorithms, classifying them as “export of arms and munitions.” Luckily that failed; the war on encryption mostly continues, but we plebs do have access to Signal and PGP.
Now it just so happens that AI needs to be closed source, walled off, and controlled by a small cartel, for our safety. Haven’t we heard this before, with pretty much every technological breakthrough? I haven’t fallen for it… yet, at least.
Anthropic CEO:
>”AI will lead to the unemployment of 20% of workers and civil-unrest/war-level poverty for a major portion of our economy”
>”Oh, and also, have you seen our new funding round? It’s the biggest yet! Let’s speed this up!”
OpenAI:
>”We can’t release open weights for our most powerful models, as that would lead to bioterrorism” (even though the latest, uh, bio event (COVID) came out of government labs, which do or will have access to uncensored AI anyway)
>They won’t even release their years-old GPT-3 model, which barely strings together coherent sentences (I wonder why; surely it ain’t terrorism)
Did you notice, a few months ago, when Grok 3 was released and people found it could produce chemical-weapons recipes, assassination plans, and so on? The xAI team had to scramble to fix its behavior. If it had been open source, that fix would not even have been an option; the model would just be out there now, boosting any psychopath or gang who got hold of it toward criminal-mastermind status.