In conventional decision/game theory, there is often conflict between individual and group rationality even if we assume idealized (non-altruistic) individuals. Eliezer and others have been working on more advanced decision/game theories which may be able to avoid these conflicts, but that’s still fairly speculative at this point. If we put that work aside, I think my point about over- and under-confidence hurting individual rationality, but possibly helping group rationality (by lessening the public goods problem in knowledge production), is a general one.
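The public goods problem in knowledge production can be made concrete with a toy public-goods game (the payoff numbers and structure here are my own illustrative assumptions, not from the original discussion): each individual is better off free-riding, yet the group does best when everyone contributes.

```python
# Toy public-goods game: each of N researchers chooses whether to
# contribute effort (cost c) to shared knowledge. Each unit of effort
# yields a benefit b that is shared equally by all N, with b > c > b / N.
N, b, c = 4, 3.0, 1.0

def payoff(contributes, n_others_contributing):
    # An individual's payoff: equal share of total benefit, minus own cost.
    total = n_others_contributing + (1 if contributes else 0)
    return (b * total) / N - (c if contributes else 0.0)

# Individual rationality: not contributing dominates, since c > b / N
# means your own contribution costs you more than your share of it.
for k in range(N):
    assert payoff(False, k) > payoff(True, k)

# Group rationality: universal contribution beats universal defection.
assert payoff(True, N - 1) > payoff(False, 0)   # 2.0 > 0.0
```

This is the standard sense in which individually rational (non-altruistic) play and group-optimal play come apart in conventional game theory.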
There is one paragraph in my post that is not about rationality in general but is meant to apply only to humans; I think I made that pretty clear:
If you’re overconfident in an idea, then you would tend to be more interested
in working out its applications. Conversely, if you’re underconfident in it (i.e., are
excessively skeptical), you would tend to work harder to try to find its flaws.
For ideal rational agents whose confidences converge, you could still get a spread of activities (not of confidence levels) in a community: if an angle (excessive skepticism, for example) is not being explored enough, the potential payoff for working on it increases even while your confidence stays unchanged. But you seem to want to change activities by changing confidence levels, that is, by hacking human irrationality.
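That mechanism can be sketched with a toy model (the payoff function, angle names, and numbers are illustrative assumptions, not from the thread): if the marginal payoff of a research angle falls with how many people already work on it, identical agents with identical confidence still spread themselves across angles.

```python
# Toy model: two research angles with diminishing marginal payoff
# 1 / (workers + 1). Each of 10 identical agents (same confidence in
# the idea) joins whichever angle currently offers the higher marginal
# payoff, so neglected angles attract workers without any confidence change.
workers = {"applications": 0, "skepticism": 0}

def marginal_payoff(angle):
    # The next worker on this angle captures 1 / (n + 1): crowded
    # angles pay less, neglected ones pay more.
    return 1.0 / (workers[angle] + 1)

for _ in range(10):
    best = max(workers, key=marginal_payoff)
    workers[best] += 1

print(workers)  # the ten agents split evenly: {'applications': 5, 'skepticism': 5}
```

No agent's confidence ever differs from any other's; the spread of activities comes entirely from the payoff structure.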
Then why the appeal to human biases? Here:
For ideal rational agents whose confidences converge, you could still get a spread of activities (not of confidence levels) in a community: if an angle (excessive skepticism, for example) is not being explored enough, the potential payoff for working on it increases even while your confidence stays unchanged. But you seem to want to change activities by changing confidence levels, that is, by hacking human irrationality.