Where Are We the Weakest?

As rationalists, we should be able to consistently and accurately make predictions that enable us to act effectively.

As humans, we don’t. At least not perfectly.

We need to improve. Many of us have, or at least believe we have. However, that improvement has been ad hoc. PredictionBook is an excellent source of feedback on how well we're doing, but there's more detailed information, not easily available, that I think could be incredibly useful. Questions I would like to see answered are:

  • What kinds of predictions are we least successful at? (weakest calibration, lowest accuracy)

  • What kinds of predictions have the most low-hanging fruit? What’s the easiest to improve on right now?

  • What kinds of predictions are the most useful to us? (accurately predicting a close friend’s behavior > predicting an obscure political decision)

  • Where aren’t we making quantitative predictions? Where does our behavior involve predictions that are underrepresented on PredictionBook?
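The "calibration" and "accuracy" in the first question can be made concrete: group past predictions by stated confidence and compare each group's stated confidence to its actual success rate, or compute an overall Brier score. A minimal sketch in Python, using hypothetical records rather than an actual PredictionBook export:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (confidence, came_true) pairs into 10%-wide buckets and
    report each bucket's size and observed success rate.
    Well-calibrated: the 0.9 bucket should come true ~90% of the time."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[round(confidence, 1)].append(came_true)
    return {
        conf: (len(outcomes), sum(outcomes) / len(outcomes))
        for conf, outcomes in sorted(buckets.items())
    }

def brier_score(predictions):
    """Mean squared error between stated confidence and outcome (0 or 1);
    lower is better, 0.25 is what constant 50% guessing scores."""
    return sum((c - o) ** 2 for c, o in predictions) / len(predictions)

# Hypothetical records: (stated confidence, whether it came true)
records = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
print(calibration_table(records))  # per-bucket counts and success rates
print(brier_score(records))
```

Splitting such a table by prediction topic (personal, political, technical, etc.) would directly answer which kinds of predictions are weakest.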

Before we are able to improve as a community, we need to know where to improve. I’d love to hear suggestions on how to answer these questions in the comments.