I think this is where P(Doom) can lead people astray.
A 5% P(Doom) from AI shouldn't be judged in isolation; it has to be weighed against the expected utility lost in a world without AI.
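To make that comparison concrete, here is a minimal sketch of the expected-utility argument. Apart from the 5% figure from the text, every number below is a hypothetical placeholder chosen only to show the structure of the calculation, not an actual estimate.

```python
# Toy expected-utility comparison: "deploy AI" vs. "no AI".
# All utilities are hypothetical placeholders, not real estimates.

P_DOOM_AI = 0.05      # 5% P(Doom) from AI, as in the text
P_DOOM_NO_AI = 1.00   # ~100% "mundane doom" over ~a century (text's premise)

U_GOOD_AI = 100.0     # utility if AI goes well (hypothetical)
U_STATUS_QUO = 10.0   # utility of a surviving non-AI world (hypothetical)
U_DOOM = 0.0          # utility of doom in either branch

# Expected utility of each branch: P(doom) * U(doom) + P(ok) * U(ok)
eu_ai = P_DOOM_AI * U_DOOM + (1 - P_DOOM_AI) * U_GOOD_AI
eu_no_ai = P_DOOM_NO_AI * U_DOOM + (1 - P_DOOM_NO_AI) * U_STATUS_QUO

print(f"EU(AI)    = {eu_ai:.1f}")    # 95.0
print(f"EU(no AI) = {eu_no_ai:.1f}") # 0.0
```

Under the premise that mundane P(Doom) approaches 100% over a century, the AI branch dominates even with a 5% catastrophe risk; the conclusion is, of course, only as good as the placeholder utilities and probabilities you feed in.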
I think people are generally very bad at that comparison, because we've built up psychological coping mechanisms around familiar risks: death by aging, and societal change through wars, economics, mass migration, and cultural evolution.
P(Doom) without AI is probably closer to 100% on a roughly century-long timeline, if you measure Doom by the things people actually care about: themselves, their loved ones, their culture. Essentially everyone alive today will be dead within that window, and their culture transformed beyond recognition.
I think the AI risk discussion is in danger of prioritizing AI catastrophes that are significantly less probable than mundane catastrophes, precisely because the mundane ones aren't salient or exciting.