We have to denounce them as the Bad Guys. As traitors to our species. And then, later, once they’ve experienced the most intense moral shame they’ve ever felt, [etc. contd. p.94]
This is self-indulgent, impotent fantasy. Everyone agrees that people hurting children is bad. People are split on whether AGI/ASI is an existential threat.[1] There is no “we” beyond “people who agree with you”. “They” are not going to have anything like the reaction you’re imagining. Your strategy of screaming and screaming and screaming and screaming and screaming and screaming and screaming and screaming is not an effective way of changing anyone’s mind.
Richard—I think you’re just factually wrong that ‘people are split on whether AGI/ASI is an existential threat’.
Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc.
There are a few exceptions, such as Yann LeCun. And there are a few AI CEOs, such as Sam Altman, who previously acknowledged the existential risks but now downplay them.
But if all the leading figures in the industry—including Altman, Amodei, Hassabis, etc—have publicly and repeatedly acknowledged the existential risks, why would you claim ‘people are split’?
You just mentioned LeCun and “a few AI CEOs, such as Sam Altman” as exceptions, so it isn’t by any means “all the leading figures”. I would also name Mark Zuckerberg, who has started “Superintelligence Labs” with the aim of “personal superintelligence for everyone”, with nary a mention of how if anyone builds it, everyone dies. Presumably all the talent he’s bought are on board with that.
I also see various figures (no names to hand) pooh-poohing the very idea of ASI at all, or of ASI as an existential threat. They may be driven by the bias of dismissing the possibility of anything so disastrous as to make them have to Do Something and miss lunch, but right or wrong, that’s what they say.
And however many there are on each side, I stand by my judgement of the futility of screaming shame at the other side, and of the self-gratifying fantasy about how “they” will react.
[1] Anyone responding “but it IS an existential threat!!” is missing the point.