If you’re overconfident in an idea, you will tend to be more interested in working out its applications. Conversely, if you’re underconfident in it (i.e., excessively skeptical), you will tend to work harder at finding its flaws.
Then why the appeal to human biases? Here: for ideal rational agents with converging confidences, you could still get a spread of activities (though not of confidence levels) across a community, because if an angle (excessive skepticism, for example) is not being explored enough, the expected payoff of working on it rises even while your confidence stays unchanged. But you seem to want to change activities by changing confidence levels, that is, by hacking human irrationality.
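The claim that a neglected angle carries a higher payoff at fixed confidence can be made concrete with a toy expected-value sketch. This is my own illustration, not anything from the original exchange: assume each worker on an angle independently succeeds with chance q, and p is everyone's shared, fixed confidence that the angle pays off at all.

```python
# Toy sketch (a hypothetical model, not from the original discussion):
# workers on an angle succeed independently with per-worker chance q,
# so the marginal expected gain from adding one more worker to an angle
# that already has n workers is p * q * (1 - q)**n, where p is the
# shared, fixed confidence that the angle pays off at all.

def marginal_value(p: float, q: float, n: int) -> float:
    """Expected gain from adding one worker to an angle with n workers."""
    return p * q * (1 - q) ** n

p, q = 0.7, 0.3              # confidence is fixed for everyone
crowded, neglected = 10, 1   # workers already on each angle

# The neglected angle offers the larger marginal payoff, so a rational
# agent joins it -- activities spread out across angles even though
# nobody's confidence p ever changes.
assert marginal_value(p, q, neglected) > marginal_value(p, q, crowded)
```

The spread of activities here comes purely from diminishing returns to crowding, which is the point of the paragraph above: no adjustment of confidence levels is needed.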