“Willing-to-bet-and-be-shot-in-the-head-if-you-lose” starts to feel a little too much like Pascal’s Mugging. By the same token, we can make different arguments about how unimaginably high suffering (S-risk) is happening right now, which seems plausible enough that nobody would be willing to stake their life on a bet against it, and the implication of these arguments might in fact be that AI should mostly wipe out complex life on Earth and start from scratch. I.e., this would make consciousness and ethics research an even higher priority than, or at least equal in priority to, the more “orthodox” style of AI x-risk concern, which treats the existence of humans, or at least of complex life on Earth, as an unshakable assumption (prior).
But in general, I agree with the sentiment of your comment.
I say it just to mean that the level of certainty required is quite high. Even if you count only human deaths and not all the rest of the value lost, with 8 billion people, a 0.0001% chance of extinction is an expected value of 8,000 deaths. People would usually be quite careful about boldly stating something that could get 8,000 people killed! Anyone trying to argue “no worries, it’ll be fine” carries a much higher burden of proof for that reason alone, IMO, especially if they want to dodge altogether the argument for why it might not be fine, and simply rely on “there’s no way we’ll invent AGI that soon anyway”, as many people do.
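Spelling out that arithmetic, since the percent-to-fraction conversion is easy to slip on (0.0001% is 10⁻⁶ as a fraction):

$$8 \times 10^{9} \;\text{people} \times 10^{-6} = 8{,}000 \;\text{expected deaths.}$$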