Expected values and priors.
Saving someone from being eaten by bears might lead them to conceive the next Hitler, but it probably won’t (saith my subjective prior). Even with an infinite future, I assign a substantial probability to hypotheses like:
- Avoiding human extinction will result in a civilization with an expected positive impact.
- Particular sorts of human global governance will enable coordination problems to be solved on very large scales.
And so forth. I won’t be very confident about the relevant causal connections, but I have betting odds to offer on lots of possibilities, and those odds let me figure out which general directions to go.
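To make the shape of this reasoning concrete, here is a minimal sketch in Python. All the probabilities and values in it are hypothetical placeholders chosen purely for illustration, not estimates the argument commits to; the point is only the structure: attach a subjective probability and a value-if-true to each hypothesis, then compare actions by expected value.

```python
# Minimal expected-value sketch. Every number below is a made-up
# placeholder, not an endorsed estimate.

# Each action maps to a list of (subjective probability, value if true)
# pairs over the hypotheses one is entertaining.
actions = {
    "prevent_extinction": [
        (0.60, +100.0),  # civilization goes on to have positive impact
        (0.30,  +10.0),  # mixed outcome
        (0.10,  -50.0),  # things go badly anyway
    ],
    "do_nothing": [
        (1.00, 0.0),     # baseline
    ],
}

def expected_value(outcomes):
    """Sum of probability-weighted values over the hypotheses."""
    return sum(p * v for p, v in outcomes)

# Even with low confidence in any single causal story, the expected
# values still rank the actions and so point in a general direction.
for name, outcomes in actions.items():
    print(f"{name}: EV = {expected_value(outcomes):+.1f}")
```

The design point is that nothing here requires certainty about any one causal connection: the ranking of actions can be robust even when each individual probability is rough, which is what lets betting odds pick out directions rather than precise destinations.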