One such problem has already come up:
1. How can we train rationality?
A lot of our irrationality seems to be rationality tuned for small sample sizes. When you live in a tribe of <200 people, any given event or opinion has a lot of weight. We evolved to do science on a small scale. How do we get around Dunbar’s limit?
This isn’t a formal problem that can be “solved” with a formal solution. I am talking specifically about formal problems like the Angel problem or P = NP.
Examples I can think of off the top of my head are Newcomb’s problem and the Prisoner’s dilemma. Both of these can be expressed formally in Bayesian terms. Have these problems been solved? I assumed so, or I would have brought them up in my post.
For fun, I am starting to work out what is needed to tackle Newcomb’s problem, and it certainly seems doable. I figured it is a good test of my new Bayesian skillz. Game theory claims to have “solved” the one-shot Prisoner’s Dilemma, but not in a way that helps someone decide what to do in a real-life instance. Newcomb’s seemed easier, so I am starting with that.
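For reference, the sense in which game theory “solves” the one-shot PD is a dominance argument, which can be sketched in a few lines. The payoff numbers below are illustrative, not canonical:

```python
# One-shot Prisoner's Dilemma: the standard dominance argument.
# Payoffs are illustrative utilities (higher is better):
#                  other cooperates   other defects
#   I cooperate         3                  0
#   I defect            5                  1
payoff = {  # payoff[(my_move, their_move)] -> my utility
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Defection strictly dominates cooperation: whatever the other player
# does, defecting yields a strictly higher payoff.
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
print("Defect strictly dominates Cooperate")
```

This is exactly the part that feels unsatisfying in real life: the dominance argument is airtight given the payoff matrix, but says nothing about whether the matrix captures the actual situation two people face.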
Ok, I had interpreted the scope more widely than you intended.
I believe Eliezer has a formal analysis of Newcomb’s problem, but I don’t know if he’s published it anywhere.
There are a fair number of formal analyses of Newcomb’s problem. I particularly like this one:
D.H. Wolpert and G. Benford, What does Newcomb’s paradox teach us? (showing that the standard approaches to the paradox encode fundamentally different—and inconsistent—views about the nature of the decision problem, and clearing up a number of other confusions.)
Newcomb’s problem seems to disappear under any known formalization, and as far as I can tell from that thread and all the others, Eliezer doesn’t have a formalization that preserves its paradoxical nature.
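To illustrate how the paradox “disappears” once you fix a formalization: each of the two standard decision theories gives a determinate answer, and the disagreement is between the formalizations, not within either one. A minimal sketch, assuming a hypothetical predictor accuracy p = 0.99 and the usual $1,000 / $1,000,000 payoffs:

```python
# Newcomb's problem under two standard formalizations.
# Box A: $1,000 (transparent). Box B: $1,000,000 if the predictor
# foresaw one-boxing, else $0.
p = 0.99  # assumed predictor accuracy (illustrative value)

# Evidential decision theory: treat your choice as evidence about
# what the predictor foresaw, and maximize conditional expected value.
ev_one_box = p * 1_000_000                  # B is likely full
ev_two_box = (1 - p) * 1_000_000 + 1_000    # B is likely empty, plus A

# Causal decision theory: B's contents are already fixed, and for
# either fixed state of the world two-boxing is worth exactly $1,000
# more. So EDT says one-box, CDT says two-box -- each gives a
# determinate answer, and the "paradox" is the choice between them.
print(f"EDT one-box EV:  ${ev_one_box:,.0f}")
print(f"EDT two-box EV:  ${ev_two_box:,.0f}")
```

With these numbers, EDT favors one-boxing for any predictor accuracy above roughly 50.05%, while CDT's dominance reasoning recommends two-boxing regardless of p.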