Yes, nothing forces these to be mutually exclusive. I see it roughly like this:
AI is the most immediate extinction risk, because maybe humanity is too stupid to deserve to live (so 1 and 4 are correct).
WW3 might happen first (or even because of AI) and kill a large proportion of the population and possibly collapse civilization, but won’t in itself make humanity extinct (so 2 could be mostly correct).
Global warming is certainly a serious problem on a 200-year timescale (so 3 is also correct), but there are also much more immediate and severe threats to the future of civilization, including some not mentioned here.
It seems to me that the correct response is to mitigate the risk of all of these, and others.
Let’s say AI is the greatest risk.
When nuclear weapons gained that status, it became illegal in many countries to possess radioactive materials, facilities were obliged to follow certain rules, and special oversight bodies were created.
Why is there none of that for AI? To an outsider (if they somehow manage even to follow the AI conversations) it looks absurd: researchers complain about how risky it all is, but instead of protesting in the streets to push governments to create oversight bodies, they just publish more papers about improvements in AI.
I understand that those who complain and those who train the models can be different groups, but I still do not see anyone protesting in the streets. Yet you can find the occasional global warming protest.
A bubble does not only mean that information is not getting in; it also means that information is not getting out.
One additional reason the information is not getting out is the extremely complicated language of LessWrong. A simple thought on the main page can be turned into a 30-minute read (I believe that is a subconscious attempt to “match the serious aesthetics of the site”), while some hard topics lack explanations of the notions they use. I just cannot send this to my friends or colleagues. I thought LessWrong was supposed to be an educational website.
Nuclear weapons [...] Why is there none of that for AI?
The world has actually seen what nuclear weapons can do. The idea that AI could be just as lethal remains hypothetical. Meanwhile, the proof that AI can be helpful or entertaining is immediately available to anyone with an Internet connection, and the elites of the world see AI as a ticket to power and profit.
Beyond that… Hopefully you don’t need convincing that human collective behavior, including activism and politics in general, is capable of being irrational or dysfunctional (though maybe you underestimate just how bad it can get, or already is?). But we can certainly ask where a particular dysfunction or omission comes from, and in this case the main question might be: if AI is so dangerous, why is there so little organized concern from the people who should know it best, academics and people in the industry?
I would say the main factors are money—working on AI is a job, and you can even get rich doing it—and the paradigm of AI safety—the hope that the builders of AI can deal with all the risks of AI themselves, as part of making a good product. But I’m not in academia or the AI industry, so I’m open to hearing a different explanation.
I thought LessWrong was supposed to be an educational website.
It functions more as a discussion forum now. The Sequences are the most clearly didactic part, but they come from another era.