I am reasonably sympathetic to this argument, and I agree that the difference between EY’s p(doom) > 50% and my p(doom) of perhaps 5% to 10% doesn’t obviously cash out into major policy differences.
I of course fully agree with EY/Bostrom/others that AI is the dominant risk, that we should be appropriately cautious, etc. This is more about why I find EY's specific classic doom argument uncompelling.
My own doom scenario is somewhat different and more subtle, but mostly beyond the scope of this (fairly quick) summary essay.
You mention here that “of course” you agree that AI is the dominant risk, and that you rate p(doom) somewhere in the 5-10% range.
But that wasn’t at all clear to me from reading the opening to the article.
Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. …
I have evaluated this model in detail and found it substantially incorrect...
As written, that opener suggests to me that you think the overall model of doom being likely is substantially incorrect (not just the details, which I've elided, of doom being the default outcome).
I feel it would be very helpful to ground the article for the reader by placing the note you've made here somewhere near the start: i.e., that your argument is with EY's specific doom case, and that you retain a significant p(doom) yourself, but one based on different reasoning.
I agree, and believe it would have been useful if Jacob (post author) had made this clear in the opening paragraph of the post. I see no point in reading the post if it does not measurably impact my foom/doom timeline probability distribution.
I am interested in his doom scenario, however.