As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you’ll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself, and we currently have no information about how it arose (did the first self-replicating molecule lead to all life as we know it? Or were there many competing forms of life, one of which eventually won?)
What is meant by ‘known risk’ though? Do you mean ‘knowledge that AI is possible’, or ‘knowledge about what it will entail’? I agree with you completely that we have no information about the latter.
As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you’ll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself [...]
What, a new thinking technology? You can’t be serious.
As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you’ll excuse the pun) event in history. [...]
History shows a variety of singular events. But precisely because they are singular, you can’t quantify their risk. So there is a contradiction between saying uFAI is a definite, known risk and saying it is an unprecedented singularity.
What is meant by ‘known risk’ though? Do you mean ‘knowledge that AI is possible’, or ‘knowledge about what it will entail’? I agree with you completely that we have no information about the latter.
The latter.