Nuclear weapons [...] Why is there none of that for AI?
The world has actually seen what nuclear weapons can do. The idea that AI could be just as lethal remains hypothetical. Meanwhile, the proof that AI can be helpful or entertaining is immediately available to anyone with an Internet connection, and the elites of the world see AI as a ticket to power and profit.
Beyond that… Hopefully you don’t need convincing that human collective behavior, including activism and politics in general, is capable of being irrational or dysfunctional (though maybe you underestimate just how bad it can get, or already is?). But we can certainly ask where a particular dysfunction or omission comes from, and in this case the main question might be: if AI is so dangerous, why is there so little organized concern from the people who should know it best, academics and people in the industry?
I would say the main factors are money—working on AI is a job, and you can even get rich doing it—and the paradigm of AI safety—the hope that the builders of AI can deal with all the risks of AI themselves, as part of making a good product. But I’m not in academia or the AI industry, so I’m open to hearing a different explanation.
I thought lesswrong is supposed to be an educational website.
It functions more as a discussion forum now. The Sequences are the most clearly didactic part, but they come from another era.