Statement of Support for “If Anyone Builds It, Everyone Dies”
Mutual-Knowledgeposting
The purpose of this post is to build mutual knowledge that many (most?) of us on LessWrong support If Anyone Builds It, Everyone Dies.
Within LW, not every user is a long-timer who has already seen consistent signals of support for these kinds of claims. A post like this could make the difference between strengthening and weakening the perception of how much everyone knows that everyone knows (...) that everyone supports the book.
Externally, people who wonder how seriously the book is being taken may check LessWrong and look for an indicator of how much support the book has from the community that Eliezer Yudkowsky originally founded.
The LessWrong frontpage, where voting is generally based on “whether users want to see more of a kind of content”, wouldn’t by default translate a large amount of internal support for IABIED into a frontpage that signals that support. What emerges from the current system instead is an active discussion of various aspects of the book, including well-written criticisms, disagreements, and nitpicks.
Statement of Support
I support If Anyone Builds It, Everyone Dies.

That is:
I think the book’s thesis is likely right, or at least all-too-plausibly right: that building an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, will cause human extinction.
I think the world where the book becomes an extremely popular bestseller is much better in expectation than the world where it doesn’t.
I generally respect MIRI’s work and consider it underreported and underrated.
Similarity to the CAIS Statement on AI Risk
The famous 2023 Center for AI Safety Statement on AI risk reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I’m extremely happy that this statement exists and has so many prominent signatories. While many people considered it too obvious and trivial to need stating, many others who weren’t following the situation closely (or were motivated to think otherwise) had assumed there wasn’t this level of consensus across academia and industry on the statement’s content.
Notably, the statement wasn’t a total consensus: not everyone signed it, and not everyone who signed agreed with it passionately. Yet it still documented a meaningfully widespread consensus, and was a hugely valuable exercise. I think LW might benefit from a similar kind of mutual-knowledge-building Statement on this occasion.