From a longtermist perspective, AI existential risks, especially extinction risks, are now substantially overfunded relative to better futures work. Longtermism, properly interpreted, agrees with the common view among the general public that sub-existential catastrophes that collapse civilization are at least as important as risks that kill everyone, and in practice more important to prevent than extinction risks.
One major upshot of this is that bio-threats, wars that could collapse civilization entirely, and other threats that kill off a large fraction of the population without causing extinction, especially those stemming from AI, are considerably more important to prevent than classical AI risk scenarios, and probably deserve more funding than current AI safety receives.
Relatedly, the maxipok heuristic is a bad guide to action: the expected distribution of futures (and quite likely the actual distribution) is nowhere near as dichotomous as some assume, and because the probability of AGI this century is quite high, the effects of non-existential interventions are quite likely to persist.
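To make the contrast concrete, here is a minimal symbolic sketch (the notation is mine, not from the article): maxipok effectively treats the future as two-valued, so it only rewards shifting probability away from existential catastrophe, whereas a multi-outcome expectation also rewards improving non-existential outcomes.

$$\text{maxipok: } \mathbb{E}[V] \approx p_{\text{ok}} \cdot V_{\text{ok}}, \qquad \text{multi-outcome: } \mathbb{E}[V] = \sum_i p_i \, v_i.$$

Under the second view, an intervention that moves probability mass from a civilizational-collapse outcome $v_{\text{collapse}}$ to a flourishing outcome $v_{\text{flourish}}$ can matter as much as, or more than, one that only reduces extinction probability, even if the extinction term is left untouched.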
A better heuristic is to focus instead on a wider portfolio of grand challenges, defined in the article as decisions that could affect the value of the future by at least 0.1%. A further heuristic, relating to the long-term alignment of ASI, is to scrap the Coherent Extrapolated Volition target and instead have ASIs execute optimal moral trades.