Use the agree/disagree react buttons on this poll-comment to build mutual knowledge about what LWers believe!
I support IABIED: I think the book’s thesis is likely right, or at least all-too-plausibly right: that building an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, will cause human extinction.
If you don’t know whether “all-too-plausibly right” or “not all-too-plausibly right” better describes your view of the book’s thesis, you can just not count yourself as a supporter of IABIED (or flip a coin, or sit out the aggregation effort). The mutual knowledge I’m hoping to build is among people who don’t see this as a gray-area question, because I think that’s already a pretty high fraction (maybe a majority) of LWers.
I don’t know what “all-too-plausibly” means. Depending on the probabilities this implies, I may agree or disagree.