But if all the leading figures in the industry—including Altman, Amodei, Hassabis, etc—have publicly and repeatedly acknowledged the existential risks, why would you claim ‘people are split’?
You just mentioned LeCun and “a few AI CEOs, such as Sam Altman” as exceptions, so it isn’t by any means “all the leading figures”. I would also name Mark Zuckerberg, who has started “Superintelligence Labs” with the aim of “personal superintelligence for everyone”, with nary a mention of how if anyone builds it, everyone dies. Presumably all the talent he’s bought are on board with that.
I also see various figures (no names to hand) pooh-poohing the very idea of ASI at all, or of ASI as an existential threat. They may be driven by the bias of dismissing any possibility so disastrous that it would force them to Do Something and miss lunch, but right or wrong, that is what they say.
And however many there are on each side, I stand by my judgement of the futility of screaming shame at the other side, and of the self-gratifying fantasy about how “they” will react.