Just as there is in principle time to prepare much better before risking AGI, there is also in principle time to work against other, non-AI bad outcomes before risking AGI. The only reason there won't be enough time to solve these problems is that humanity isn't giving itself that time. "Status quo" is a bit ambiguous between what we currently have and what would happen if we remain passive and fatalistic.
(One of the points underlying this post is that different things feel like "doom" to different people. You might have some sense of what counts as "doom", but other people will have a different sense. Under an illusion of transparency about the distribution of outcomes you expect, discussing "doom" without further clarification becomes noise or even misleading, rather than communicating anything in particular.)