I also agree with the statement. I’m guessing most people who haven’t been sold on longtermism would too.
When people say things like “even a 1% chance of existential risk is unacceptable”, they are clearly valuing the long term future of humanity a lot more than they are valuing the individual people alive right now (assuming that the other 99% in that scenario is AGI going well & bringing huge benefits).
Related question: You can push a button that will, with probability P, cure aging and make all current humans immortal. But with probability 1-P, all humans die. How high does P have to be before you push? I suspect that answers to this question are highly correlated with AI caution/accelerationism.
If I choose P=1, then 1-P=0, so I am immortal and nobody dies.
Well sure, but the interesting question is the minimum value of P at which you’d still push.
I think the point of the statement is: wait until the probability that you die before your next opportunity to push the button is > 1-P, then push the button.
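To make that rule concrete, here's a minimal sketch of the comparison it implies (the function name and the example numbers are just illustrative, not from the original comment):

```python
def should_push(p_button: float, p_die_before_next: float) -> bool:
    """Push once the risk of dying while waiting exceeds the button's failure risk.

    p_button: probability the button cures aging (P in the comment above).
    p_die_before_next: probability you die before your next chance to push.
    Equivalently: push when p_button exceeds your chance of surviving to the
    next opportunity anyway.
    """
    return p_die_before_next > 1 - p_button


# Example: a 90%-reliable button is worth pushing, by this rule, only once your
# chance of dying before the next opportunity climbs above 10%.
print(should_push(p_button=0.90, p_die_before_next=0.05))  # False: keep waiting
print(should_push(p_button=0.90, p_die_before_next=0.15))  # True: push now
```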