Have you incorporated Government Notkilleveryoneism into your model? Rational people interested in not dying ought to invest in AI safety in proportion to the likelihood that they expect to be killed by AI. Rational governments ought to invest in AI safety in proportion to the likelihood they expect to be killed by AI. But, as we see from this article, what kills governments is not what kills people.
The government of Acheampong is in more danger from a miscalculated letter than from an AI-caused catastrophe that kills 10% of people or more, so long as important people like Acheampong are not among the 10%. The government of Acheampong, as a memetic agent, should care less about alignment than about having a military; governments maintain standing armies for protection against threats from within and beyond their borders.
In any case, you do not have to count on a government irrationally investing in AI alignment. Private investment in AI in 2024 was $130 billion; there is plenty of private money that could dramatically increase funding for AI safety. Yet it has not happened. Here are some potential explanations.
1) People do not think there is AI risk because other people do not think there is AI risk, which is the issue you mentioned.
2) When people must put their money where their mouth is, their revealed preference is that they see no AI risk.
3) People think there is AI risk but see no way to invest in mitigating it, so they would rather invest in flood insurance.
If option 3 is the case, Mr. Lee, you should create AI catastrophe insurance. You can charge premiums comparable to high-risk life insurance. You can invest part of the revenue in assets you expect to survive an AI catastrophe, to be distributed to policyholders or their next of kin if the catastrophe happens, and invest the rest in AI safety. If there is an AI catastrophe, you will be physically prepared. If there is not, you will profit handsomely from the success of your AI safety investments and from the service you would have provided in the counterfactual world. You said yourself that “nobody is doing anything about it.” This is your chance to do something. Good luck. I’m excited to hear how it goes.
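To make the arithmetic of such a scheme concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the premium level, the split between survivable assets and AI safety investments, the assumed return, and the catastrophe probability are placeholders for illustration, not actuarial estimates.

```python
# A toy back-of-the-envelope sketch of the proposed AI catastrophe insurance.
# Every parameter is a hypothetical illustration, not an actuarial estimate.

def sketch_ai_catastrophe_insurance(
    policyholders: int = 10_000,
    annual_premium: float = 5_000.0,    # priced like high-risk life insurance
    hard_asset_fraction: float = 0.6,   # share of premiums parked in assets expected to survive a catastrophe
    safety_return: float = 0.15,        # assumed return on the AI-safety investments if no catastrophe occurs
    p_catastrophe: float = 0.05,        # assumed probability of an AI catastrophe over the policy period
) -> dict:
    premiums = policyholders * annual_premium
    hard_assets = premiums * hard_asset_fraction             # earmarked for policyholders / next of kin
    safety_investment = premiums * (1 - hard_asset_fraction)

    # Catastrophe branch: the surviving hard assets are distributed among policyholders.
    payout_per_policy = hard_assets / policyholders

    # No-catastrophe branch: the insurer keeps the hard assets plus the grown safety investments.
    assets_if_no_catastrophe = hard_assets + safety_investment * (1 + safety_return)

    # Expected end-of-period position, treating the catastrophe branch as a total loss for the insurer.
    expected_position = (1 - p_catastrophe) * assets_if_no_catastrophe

    return {
        "payout_per_policy_if_catastrophe": payout_per_policy,
        "assets_if_no_catastrophe": assets_if_no_catastrophe,
        "expected_insurer_position": expected_position,
    }

print(sketch_ai_catastrophe_insurance())
```

With these placeholder numbers the insurer comes out ahead in expectation; the point is only that the premium split and the two branches of the bet can be written down and priced explicitly.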
:) thank you so much for your thoughts.
Unfortunately, my model of the world is that if AI kills “more than 10%,” it will probably kill everyone and everything, so by my own lights the insurance would not work.
I only defined AI catastrophe as “killing more than 10%” because that is what the survey by Karger et al. asked its participants about.
I don’t believe option 2 is the explanation, because if you asked people to bet against AI risk at unfavourable odds, they probably wouldn’t feel very confident betting against it.