The Bayesian answer is to honestly tally up the indications that future success is likely, and stop if they are lacking.
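To make “tally up the indications” concrete, here is a minimal sketch in odds form; the prior and the likelihood ratios below are invented placeholders, not estimates anyone in this discussion has offered.

```python
def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds by the likelihood ratio of each indication."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical numbers: a skeptical prior, two mildly favorable
# indications, and one unfavorable one.
odds = posterior_odds(prior_odds=0.2, likelihood_ratios=[2.0, 1.5, 0.5])
print(f"posterior odds of future success: {odds:.2f}")  # 0.30, still below even odds
```

On this framing, “stop if they are lacking” just means stopping when the posterior odds stay below whatever threshold makes continued work worthwhile.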
So I want to ask an object-level question and a meta-level question:
1) What evidence supports the intuition that, contra game theory, single-player decision theory has a “solution”?
2) If there’s not much evidence supporting that intuition, how should I change my actions?
Since we are trying to reach the conclusion “how should I change my actions?”, it seems that we are missing perhaps the most important question:
0) What is the expected value to me (inclusive of altruistic values) of discovering a solution?
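As a minimal sketch of what question 0 asks for, assuming the standard decomposition of expected value into P(success) × value − cost; the numbers are placeholders, not anyone's actual estimates:

```python
def expected_value_of_research(p_solution: float,
                               value_if_solved: float,
                               cost_of_trying: float) -> float:
    """Expected value of continuing: P(solution) * V(solution) - cost."""
    return p_solution * value_if_solved - cost_of_trying

# Hypothetical placeholder numbers: even a 1% shot at a very valuable
# solution can outweigh the cost of trying, which is why the answer to
# question 0 can dominate the answers to questions 1 and 2.
ev = expected_value_of_research(p_solution=0.01,
                                value_if_solved=1_000.0,
                                cost_of_trying=5.0)
print(f"expected value of continuing: {ev:.1f}")  # 5.0
```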