And here is the reason I linked to “Bayesians vs. Barbarians”, above: what Eliezer is proposing as the best course of action for a rationalist society that is attacked from without sounds like a second-tier rationalism.
Not exactly. Since I intend to work with self-modifying AIs, any decision theory I care to spend much time thinking about should be reflectively consistent and immediately so. This excludes e.g. both causal decision theory and evidential decision theory as usually formulated.
The idea of sacrificing your life, after being selected in a draft lottery that maximized your expectation of survival if all other agents behaved the same way you did, is not meant to be second-tier.
But if humans cannot live up to such stern rationality in the face of Newcomblike decision problems, then after taking their own weakness into account, they may have cause to resort to enforcement mechanisms. This is second-tier-ish in a way, but still pretty strongly interpretable as maximizing, to the extent that you vote on the decision before the lottery.
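The arithmetic behind the draft-lottery argument can be made concrete with a toy model. All the numbers below are hypothetical, chosen only for illustration; the point is that an agent voting before the lottery, under the assumption that all agents decide the same way, maximizes its own ex-ante survival probability by committing to serve if drafted:

```python
# Hypothetical toy model (illustrative numbers, not from the discussion):
# a society of N agents must field K soldiers to repel an attack.
# Each drafted soldier survives with probability P_SOLDIER; if no army
# is raised, the defense fails and every agent survives with P_CONQUERED.
N = 1000
K = 100
P_SOLDIER = 0.5
P_CONQUERED = 0.1

# Ex-ante survival probability for an agent who commits to the lottery,
# given that all agents behave the same way (the reflective condition):
p_drafted = K / N
p_lottery = p_drafted * P_SOLDIER + (1 - p_drafted) * 1.0

# If everyone defects, no army is raised and everyone faces conquest.
p_defect = P_CONQUERED

print(f"survival if all comply: {p_lottery:.2f}")
print(f"survival if all defect: {p_defect:.2f}")
```

With these numbers the committed agent's expected survival is 0.95 against 0.10 for universal defection, which is why dying after losing the lottery can be the output of a first-tier maximizing decision made before the draw, not a second-tier compromise.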