P(Permanent world dictatorship by winner by 2030) = 25% * 40% = 10%
Imagine that you received such a letter. Then asking you to quit the race would reduce P(your dictatorship) from 3.3% to 0. But how does it reduce P(extinction)? Maybe a better argument is that alignment is hard, AND that alignment to anyone's dictatorship or another dystopian future (e.g. North Korea letting most of its population starve) is WAY harder?
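(One way these numbers could hang together; the per-lab split is my assumption, not something stated above:)
P(permanent dictatorship by whichever lab wins, by 2030) = 25% * 40% = 10%
P(your dictatorship) ≈ 10% / 3 ≈ 3.3%, assuming roughly three front-runner labs with equal odds of winning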
An AI lab head could do a lot to prevent extinction if they did not run an AI lab. For starters, they could make it their (and their org's) full-time job to convince their entire network that extinction is coming. Then they could try convincing the public and run for US office to get an AI pause.
But yes, I haven't spelled out a detailed alternative plan for what to do if you're a well-networked billionaire trying to fix AI risk, and it would be worth doing so.
This might have relevant stuff: Support the movement against AI xrisk