The idea that AI is a low-probability risk has some merit, but one doesn't need a Pascal's Mugging scenario to consider it a problem. If AI accounts for even 5 or 10 percent of existential risk over the next century, it is already a serious concern. In general, all existential risks are badly underfunded; the only difference with AI is that, for a long time, it was even more underfunded than the other sources of existential risk.