Good post! I disagree with your conclusions in general, but I like your writing and your choice of subject.
“I believe that logical implications of sound arguments should not reach out indefinitely and thereby outweigh other risks whose implications are fortified by empirical evidence.”
Evidence is evidence, plausible reasoning is plausible reasoning. I don’t accept the distinction that you are trying to make.
If you think that some particular inference is doubtful, then that’s fine. Perhaps you attach a probability of only 0.05 to the idea that a smarter-than-human AI built by humans would recursively self-improve. But then you should phrase your disagreement with proponents of the intelligence explosion scenario in those terms (your probability estimate differs from theirs), not by referring to an ostensible problem in arguing under uncertainty in general.
Note that there is no rigorous distinction between implications grounded in empirical evidence and “speculation”. There is uncertainty attached to everything; it varies only quantitatively.
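To make that concrete, here is a minimal sketch in Python (with entirely made-up step probabilities of my own; none of these numbers are yours or anyone else’s actual estimates) of how a multi-step argument reduces to a single number on the same scale as any empirically grounded estimate:

```python
# A chain of inferences is just a product of conditional probabilities, so its
# conclusion lands on the same 0-to-1 scale as any empirically measured estimate.
# All step probabilities below are invented purely for illustration.

steps = [
    ("smarter-than-human AI gets built", 0.50),
    ("given that, it can usefully modify its own design", 0.60),
    ("given that, self-modification becomes rapid and recursive", 0.40),
]

p = 1.0
for claim, p_step in steps:
    p *= p_step
    print(f"{claim}: {p_step:.2f} -> running estimate: {p:.3f}")

# Disputing the 0.60 (or any other step) is a quantitative disagreement about a
# specific inference, not a sign that arguing under uncertainty is itself broken.
print(f"Overall estimate under these invented numbers: {p:.3f}")
```

If your 0.05 comes from doubting one of these links, then the productive move is to name the link and state your number for it.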
“A lot of people do not doubt the possibility of risks from AI but are simply not sure if they should really concentrate their efforts on such vague possibilities.”
There’s no such thing as a “vague possibility”. I hope I’m not unfairly picking on your English (which is rather good for a non-native speaker), but the phrasing seems to tie in with your earlier statements: a probability is just a probability.
If I were to guess what you are grasping towards, it is this: “For someone who is not an intellect of the highest calibre, working through the arguments in favour of relatively speculative future scenarios like the intelligence explosion could be massively time-consuming. And I might find myself simply unable to understand the concepts involved, even given time. Therefore I reserve the right to attach a low probability to these scenarios without providing specific objections to the inferences involved.”
I’m not sure what I would think if someone were to make that claim. On one hand, bounded rationality seems to apply. On the other, the idea of recursively self-improving intelligence doesn’t seem all that complex to me, and the claim itself looks like it could serve as a fully general excuse.
I would probably criticise such a claim to the extent that the person making it is intelligent, that the argument in question is simple, and that I believe his prior assigns generally high credibility to the non-mainstream beliefs of theorists like Yudkowsky.