Richard—I think you’re just factually wrong that ‘people are split on whether AGI/ASI is an existential threat’.
Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc.
There are a few exceptions, such as Yann LeCun. And there are a few AI CEOs, such as Sam Altman, who had previously acknowledged the existential risks, but now downplay them.
But if all the leading figures in the industry—including Altman, Amodei, Hassabis, etc—have publicly and repeatedly acknowledged the existential risks, why would you claim ‘people are split’?
You just mentioned LeCun and “a few AI CEOs, such as Sam Altman” as exceptions, so it isn’t by any means “all the leading figures”. I would also name Mark Zuckerberg, who has started “Superintelligence Labs” with the aim of “personal superintelligence for everyone”, with nary a mention of how if anyone builds it, everyone dies. Presumably all the talent he’s bought are on board with that.
I also see various figures (no names to hand) pooh-poohing the very idea of ASI at all, or of ASI as existential threat. They may be driven by the bias of dismissing the possibility of anything so disastrous as to make them have to Do Something and miss lunch, but right or wrong, that’s what they say.
And however many there are on each side, I stand by my judgement of the futility of screaming shame at the other side, and of the self-gratifying fantasy about how “they” will react.