I have been thinking about nano/AI skepticism somewhat. I feel that most people have nothing to gain from knowing the truth, and I admit I sometimes wish I could un-know it. Really, the implications of rationality for futurism are just plain unpleasant. Sometimes I even look at the good community and favorable gender ratio among religious people and wonder whether being religious is a better deal.
Motivated cognition surely doesn’t cause people to pursue beliefs chosen at random; rather, it seems to do some limited inference about whether a belief would cause pleasant emotions. Perhaps this fires in the case of AI and nano, and people’s motivated-cognition module asks:
“Would I feel better if I thought that this honking great disaster was going to befall my children’s generation?”
Now obviously “too good to be true” isn’t strictly a good argument, but it can be a useful first-order rule of thumb, can’t it?
By using it you recognize (a) that people trying to sell you something fishy usually make it sound like a panacea, and (b) that if you really like an idea, you should be all the more wary of it.
“It’s too good to be true” seems a more common reaction for AI and nano.
Not to Drexlerian nanotechnology—“you and your people have scared our children”
“Why oh why didn’t I take the BLUE pill?”—The Matrix.