I think the book’s thesis is basically right — that if anyone builds superintelligent AI in the next decade or two, it’ll have a terrifyingly high (15%+) chance of causing everyone to die in short order
I think this is an absurd statement of the book’s thesis: the book is plainly saying something much stronger than that. How did you pick 15% rather than, for example, 80% or 95%?
I am basically on the fence about the statement you have made. For reasons described here, I think P(human extinction|AI takeover) is like 30%, and I separately think P(AI takeover) is like 45%.
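Combining those two estimates multiplicatively (a rough back-of-the-envelope sketch; this assumes the extinction risk in question runs entirely through AI takeover) gives:

$$P(\text{extinction via AI takeover}) \approx P(\text{extinction} \mid \text{takeover}) \times P(\text{takeover}) \approx 0.30 \times 0.45 \approx 13.5\%,$$

which lands just under the 15% figure in the first bullet, hence the fence-sitting.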
I think the world where the book becomes an extremely popular bestseller is much better in expectation than the world where it doesn’t
I think this is probably right, but it’s unclear.
I generally respect MIRI’s work and consider it underreported and underrated
It depends on what you mean by “work”. I think their work making AI risk arguments at the level of detail represented by the book is massively underreported and underrated (and Eliezer’s work building a rationalist/AI safety community is very underrated). I think their technical work is overrated by LessWrong readers.
Yeah, there’s always going to be a gray area when everyone is being asked whether their complex belief-state maps onto one side of a binary question like endorsing a statement of support.
I’ve updated the first bullet to just use the book’s phrasing. It now says:
I think the book’s thesis is likely, or at least all-too-plausibly right: That building an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, will cause human extinction.