First: Most people haven’t encountered the idea (note: watching Terminator does not constitute encountering the idea). Most of those who have encountered it have only a very hazy notion of it and haven’t given it serious thought.
Second: Suppose you decide that both pension savings and intelligence explosion have a real chance of making a difference to your future life. Which can you do more about? Well, you can adjust your future wealth considerably by changing how much you spend and how much you save, and the tradeoff between present and future is reasonably clear. What can you do to make it more likely that a future intelligence explosion will improve your life and less likely that it’ll make it worse? Personally, I can’t think of anything I can do that seems likely to have non-negligible impact, nor can I think of anything I can do whose impact I am confident would even have the right sign.
(Go and work for Google and hope to get on a team working on AI? Probably unachievable, not clear I could actually help, and who knows whether anything they produce will be friendly? Donate to MIRI? There’s awfully little evidence that anything they’re doing is actually going to be of any use, and if at some point they decide they should actually start building AI systems to experiment with their ideas, those systems might themselves be dangerous. Lobby for government-imposed AI safety regulations? Unlikely to succeed, and if it did it might turn out to impede carefully done AI research more than it impedes actually dangerous AI research, not least because AI research can be done in more than one of the world’s countries. Try to build a friendly AI myself? Ha ha ha. Assassinate AI researchers? Aside from being illegal and immoral and dangerous, that’s probably just as likely to stop someone having a crucial insight needed for friendly AI as to stop someone making something that will kill us all. Try to persuade other people to worry about unfriendly AI? OK, but they don’t have any more useful things to do about it than I do. Etc.)
Incidentally, do many people actually spend much time worrying about their retirement plans? (Note: this is not the same question as “do people worry about their retirement plans?” or “are people worried about their retirement plans?”.)
People could vote for government officials who have Friendly AI (FAI) research on their agenda, but currently, I think few if any politicians even know what FAI is. Why is that?