At a moderate P(doom), say under 25%, accelerating AI can make sense from a selfish perspective: it raises your chance of getting to live forever, even though it also raises your risk of dying.
If you’re not elderly or otherwise at risk of irreversible harms in the near future, then pausing for a decade (say) to reduce the chance of AI ruin by even just a few percentage points still seems good. So the crux is still “can we do better by pausing?” (This assumes pauses on the order of 2-20 years; the argument changes for longer pauses.)
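The tradeoff above can be made concrete with a toy expected-value comparison. All the numbers below are hypothetical assumptions for illustration, not figures from the argument itself:

```python
# Toy model: selfish probability of surviving to a life-extending AI outcome,
# under "accelerate" vs. "pause a decade." All inputs are hypothetical.

def p_reach_longevity(p_doom, p_die_while_waiting):
    """Chance of avoiding both AI ruin and dying of other causes first."""
    return (1 - p_doom) * (1 - p_die_while_waiting)

# Accelerate: P(doom) = 25%, and only a ~1% chance (assumed, for a healthy
# non-elderly person) of dying before transformative AI arrives.
accelerate = p_reach_longevity(p_doom=0.25, p_die_while_waiting=0.01)

# Pause: the pause shaves P(doom) by a few points to 20%, at the cost of a
# ~5% chance (assumed) of dying of ordinary causes during the decade.
pause = p_reach_longevity(p_doom=0.20, p_die_while_waiting=0.05)

print(f"accelerate: {accelerate:.4f}")  # 0.7425
print(f"pause:      {pause:.4f}")       # 0.7600
```

On these illustrative numbers, pausing comes out ahead even selfishly, because a few points of doom reduction outweigh a healthy person's decade of background mortality; the comparison flips for someone whose `p_die_while_waiting` is high, which is exactly the "elderly or otherwise at risk" caveat above.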
Maybe people think the background level of existential risk is higher now than it was in past decades because the world situation seems to be deteriorating. But IMO this makes pushing AI forward even more selfish: if you’re that desperate for a deus ex machina, surely you also have to think there’s a good chance things get worse when you push technology forward.
(Lastly, I also want to note that for people who care less about living forever and more about near-term achievable goals like “enjoy life with loved ones,” the selfish thing would be to delay AI indefinitely, because rolling the dice for a longer future is then less obviously worth it.)