How to (un)become a crank

Ahhh, a human interest post. Well, sort of. At least it has something besides math-talk.

In the extreme programming community they have a saying, “three strikes and you refactor”. The rationalist counterpart would be this: once you’ve noticed the same trap twice, you’d be stupid to fall prey to it the third time.

Strike one is Eliezer’s post The Crackpot Offer. Child-Eliezer thought he’d overthrown Cantor’s theorem, then found an error in his reasoning, but felt a little tempted to keep on trying to overthrow the damned theorem anyway. The right and Bayesian thing to do, which he ended up doing, was to notice that once you’ve found your mistake there’s no longer any reason to wage war on an established result.

Strike two is Emile’s comment on one of my recent posts:

I find it annoying how my brain keeps saying “hah, I bet I could” even though I explained to it that it’s mathematically provable that such an input always exists. It still keeps coming up with “how about this clever encoding? blablabla” … I guess that’s how you get cranks.
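
For anyone who hasn’t seen the argument Emile’s brain keeps fighting, here is a minimal sketch of the counting step, assuming (from the talk of “clever encodings”) that the context was lossless compression of bit strings:

```python
# Pigeonhole sketch: no lossless encoding can shorten every n-bit input,
# because there are more n-bit strings than there are strictly shorter outputs.
n = 8
n_bit_strings = 2 ** n                           # 2^8 = 256 possible inputs
shorter_strings = sum(2 ** k for k in range(n))  # 1 + 2 + ... + 128 = 255
assert n_bit_strings > shorter_strings           # so some input has no shorter encoding
```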

Strike three is… I’m a bit ashamed to say that…

…strike three is about me. And maybe not only me.

There’s a certain vibe in the air surrounding many discussions of decision theory. It sings: maybe the central insight of game theory (that multiplayer situations are not reducible to single-player ones) is wrong. Maybe the slightly-asymmetrized Prisoner’s Dilemma has a single right answer. Maybe you can get a unique solution to dividing a cake by majority vote if each individual player’s reasoning is “correct enough”. But honestly, where exactly is the Bayesian evidence that merits anticipating success on that path? Am I waging war on clear and simple established results because of wishful thinking? Are my efforts the moral equivalent of counting the reals or proving the consistency of PA within PA?
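
For concreteness, a slightly asymmetric Prisoner’s Dilemma might look like this (toy numbers, made up purely for illustration):

```python
# Illustrative payoffs, not from any particular source: a Prisoner's Dilemma
# in which player 2's payoffs are nudged slightly.
# (row move, column move) -> (payoff to player 1, payoff to player 2)
payoffs = {
    ("C", "C"): (3, 3.1),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1.1),
}

# Defection still strictly dominates for both players, so the classical answer
# remains (D, D), even though (C, C) is better for both. Any hoped-for
# "single right answer" would have to come from somewhere beyond dominance.
for other in ("C", "D"):
    assert payoffs[("D", other)][0] > payoffs[("C", other)][0]  # player 1 prefers D
    assert payoffs[(other, "D")][1] > payoffs[(other, "C")][1]  # player 2 prefers D
```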

An easy answer is that “we don’t know” if our inquiries will be fruitful, so you can’t prove I must stop. But that’s not the Bayesian answer. The Bayesian answer is to honestly tally up the indications that future success is likely, and stop if they are lacking.

So I want to ask an object-level question and a meta-level question:

1) What evidence supports the intuition that, contra game theory, multiplayer problems can be reduced to single-player decision theory and given a unique “solution”?

2) If there’s not much evidence supporting that intuition, how should I change my actions?

(I already have tentative answers to both questions, but am curious what others think. Note that you can answer the second question without knowing any math :-))