Summary of my Participation in the Good Judgment Project

Follow-up to: Good Judgment Project, Season Three.

During the last forecasting season I took part in the Good Judgment Project (GJP; see also the blog). This is a short summary of my participation (actually triggered by hamnox's comment).

The GJP asks for forecasts of world events such as:

  • Ukraine conflict

  • Arctic ice cap melting

  • Ebola outbreak duration

  • Chinese sea conflict

  • ISIS attacks

  • Terrorist attacks

  • Oil price

  • Certain exchange rates

  • Election results

  • and many other political events

To participate in the study one has to register (I can't remember where exactly I stumbled over the link, possibly the one at the top), complete a preparatory online course, and pass an online test. At least I had to complete it; whether the result affected my assignment to any group I can't say. The course explains the scoring and gives recommendations for making good forecasts: choose questions one has an edge in, estimate early, update often, and do post-mortems. The test seems to measure calibration and accuracy by asking about known (mostly political) events and how sure one is about them.

The current forecasting season started in November 2014 and has just ended. I invested significantly less than half an hour a week on 8 of about 100 questions (and thus less than I projected in an early questionnaire). I made between 2 and 15 updates per question and earned a score in the middle range (mostly due to getting hit by an unexpected terrorist attack). As I just learned, I was assigned to a study condition where I could see neither the aggregate group estimate nor the estimates of the other group members, only their comments. I was somewhat disappointed by this, as I had hoped to learn something from how the estimates developed. Too bad I wasn't in a prediction market group. But I hope to get the study results later.

I will not take part in further rounds, as I shy away from the effort required for these mostly political forecasts. They are political because the sponsor (guess who) is interested mostly in political events, less in economic, environmental, scientific, or other types. But I enjoyed forecasting Arctic ice cap melting and the Ebola outbreak, and netted a better-than-average score on those.

The scoring, at least in this group, is interesting and uses an averaged Brier score: averaged a) over all forecast questions and b) within a question over all the days for which a forecast stands (see the sketch below). I intended to game this by betting on questions that a) I could forecast well and b) had a reliably expected outcome. Sadly there were few of type a).
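To make that scoring concrete, here is a minimal Python sketch of such a daily-averaged Brier score. The function names and the treatment of forecast days are my own illustration, not taken from the GJP materials; the GJP's actual implementation surely differs in details.

    # Illustrative sketch of a daily-averaged Brier score as described above.
    # Assumption: each element of daily_forecasts is the probability vector
    # standing on one day of the question's lifetime.

    def brier(forecast, outcome):
        """Multi-option Brier score: sum of squared errors over all options.

        forecast -- list of probabilities, one per option, summing to 1
        outcome  -- index of the option that actually occurred
        Ranges from 0 (perfect) to 2 (maximally wrong).
        """
        return sum((p - (1.0 if k == outcome else 0.0)) ** 2
                   for k, p in enumerate(forecast))

    def question_score(daily_forecasts, outcome):
        """a) within a question: average over all days a forecast stood."""
        scores = [brier(f, outcome) for f in daily_forecasts]
        return sum(scores) / len(scores)

    def overall_score(questions):
        """b) across questions: average the per-question scores.

        questions -- list of (daily_forecasts, outcome) pairs
        """
        per_question = [question_score(fs, o) for fs, o in questions]
        return sum(per_question) / len(per_question)

    # Example: one binary question forecast on three days; option 0 occurred.
    # Per-day scores are 0.18, 0.08, 0.02, so the average is about 0.093.
    print(overall_score([([[0.7, 0.3], [0.8, 0.2], [0.9, 0.1]], 0)]))

Note how the per-day averaging rewards updating often: an early confident forecast on the right outcome keeps earning low daily scores for the whole lifetime of the question, which is why questions with a reliably expected outcome looked attractive to bet on.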

From this experience I learned that

  • such prediction organizations ask mostly for political events,

  • political events are hard to predict,

  • predicting political events requires a lot of background information,

  • I'm below average at predicting political events (at least compared to my group, which I'd guess has more interest in politics than I do), but

  • I'm above average on non-political topics.