Yeah, there’s always going to be a gray area when people are asked whether their complex belief-state maps onto one side of a binary question like endorsing a statement of support.
I’ve updated the first bullet to just use the book’s phrasing. It now says:
I think the book’s thesis is likely, or at least all too plausibly, right: that building an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, will cause human extinction.