> As we become more complex reasoners, we will develop new bugs and weaknesses in our reasoning for more-sophisticated dark artists to exploit.
Are we expecting to become more complex reasoners? It seems to me to be the opposite. We are certainly moving in the direction of reasoning about increasingly complex things, but by all indications, the mechanisms of normal human reasoning are much more complex than they need to be, which is why they have so many bugs and weaknesses in the first place. Becoming better at reasoning, in the LW tradition, appears to consist entirely of removing components (biases, obsolete heuristics, bad epistemologies, cached thoughts, etc.), not adding them.
If the goal is to become perfect Bayesians, then the goal is simplicity itself. I realize that is probably an impossible goal — even if the Singularity happens and we all upload ourselves into supercomputer robot brains, we’d need P=NP in order to compute all of our probabilities to exactly where they should be — but every practical step we take, away from our evolutionary patchwork of belief-acquisition mechanisms and toward this ideal of rationality, is one less opportunity for things to go wrong.
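To make the contrast concrete: the ideal itself is nothing more than Bayes' rule, which is trivial for a single hypothesis and a single piece of evidence. A minimal sketch in Python, using made-up numbers for illustration (a test with 99% sensitivity and a 5% false-positive rate, for a condition with a 1% base rate):

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(H|E) for one positive piece of evidence E.

    Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
    """
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Hypothetical numbers: 1% base rate, 99% sensitivity, 5% false positives.
posterior = bayes_update(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(posterior)  # roughly 0.167
```

The rule itself is simple; what is intractable is applying it jointly and consistently across an enormous space of interdependent hypotheses, which is where the P=NP caveat above bites.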