Throw a prediction party with your EA/rationality group
TL;DR: Prediction & calibration parties are an exciting way for your EA/rationality/LessWrong group to practice rationality skills and celebrate the new year.
On December 30th, Seattle Rationality had a prediction party. Around 15 people showed up, brought snacks, brewed coffee, and spent several hours making predictions for 2017, and generating confidence levels for those predictions.
This was heavily inspired by Scott Alexander’s yearly predictions. (2014 results, 2015 results, 2016 predictions.) Our move was to turn this into a communal activity, with a few alterations to meet our needs and make it work better in a group.
Each person individually writes a bunch of predictions for the upcoming year. They can be about global events, people’s personal lives, etc.
Write down a confidence level for each prediction.
Save your predictions and put them aside for twelve months.
Open up your predictions and see how everyone did.
To make this work in a group, we recommend the following:
Don’t share your confidence levels. Avoid anchoring each other by simply not saying how likely or unlikely you think any prediction is.
Do share predictions. Generating 30+ predictions is difficult, and sharing ideas (without confidence levels) makes it much easier to come up with a full list. We made a shared Google Doc, and everyone pasted some of their predictions into it.
Make predictions that, in a year, will verifiably have happened or not. (I.e., not “the academic year will go well,” which is debatable, but “I will finish the year with a 3.5 GPA or above.”)
It’s convenient to assume that, unless stated otherwise, predictions resolve by the end of the coming year (i.e., “I will go to the Bay Area” means “I will go to the Bay Area at least once in 2017”). It’s also fine to make predictions with other end dates (“I will go to EA Global this summer”).
Make a bunch of predictions first, without thinking too hard about how likely they are, then assign confidence levels; this post details why. You could also generate a group list of predictions and have everyone assign their own confidence levels individually.
This makes a good activity for rationality/EA groups for the following reasons:
Practicing rationality skills:
Making accurate predictions
Assigning well-calibrated confidence levels
It’s open to many different knowledge levels. Even if you don’t know a thing about geopolitics, you can still make predictions and give confidence levels about media, sports, or your own life.
More free-form and less intimidating than a prediction market; you don’t need to know the details of forecasting to try this.
Natural time and recurring activity
You could do this at any point during the year, but doing it at the start of the year seems appropriate for ringing in the new year.
In twelve months, you have an automatic follow-up activity: coming back together to check everyone’s predictions from the previous year, then making a new set for the year ahead. (If the gathering falls through for some reason, everyone can, of course, still check their predictions on their own.)
Fostering a friendly sense of competitiveness
Everyone wants the best calibration, the lowest Brier score, and the most accurate predictions!
Some examples of the predictions people used:
I will switch jobs.
I will make more than $1000 in a way other than my primary job or stocks.
I will exercise 3 or more times per week in October, November, and December.
I’ll get another tattoo.
Gay marriage will continue to be legal in Washington state.
Gay marriage will continue to be legal in all 50 states.
I will try Focusing at least once.
I will go to another continent.
CRISPR clinical trials on humans will happen in the US.
A country that didn’t previously have nuclear weapons will acquire them.
I will read Thinking, Fast and Slow.
I will go on at least 3 dates.
Brier scores and graphs of ‘perfect’ vs. actual accuracy give you different information; Yvain writes about the differences between them. Several of us made predictions last year using the Scott Alexander method (bins at fixed probabilities), though this year everybody seems to have used continuous probabilities. The exact method by which we’ll determine how well calibrated we were is left to the Seattle Rationality of 2018, but it will probably include Brier scores AND some measure of calibration.
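For the curious, here is a minimal Python sketch of both measures (the function names and bucket edges are my own choices, not anything from our group’s actual scoring): the Brier score is just the mean squared error between stated probabilities and outcomes, and a simple calibration check buckets predictions by stated probability and compares each bucket’s stated confidence to the fraction that came true.

```python
def brier_score(predictions):
    # predictions: list of (stated_probability, outcome) pairs,
    # where outcome is True if the event happened.
    # Lower is better; always saying 50% scores exactly 0.25.
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

def calibration_buckets(predictions, edges=(0.5, 0.6, 0.7, 0.8, 0.9, 1.01)):
    # Group predictions into probability buckets and report the fraction
    # that actually came true in each bucket. A well-calibrated predictor's
    # 70% bucket should come true about 70% of the time.
    buckets = {}
    for p, happened in predictions:
        # Flip sub-50% predictions so every stated probability is >= 0.5
        # ("20% chance X happens" is "80% chance X doesn't happen").
        if p < 0.5:
            p, happened = 1.0 - p, not happened
        for lo, hi in zip(edges, edges[1:]):
            if lo <= p < hi:
                buckets.setdefault(lo, []).append(happened)
    return {lo: sum(outcomes) / len(outcomes)
            for lo, outcomes in buckets.items()}
```

Flipping sub-50% predictions is one common convention; it means calibration is judged on how confident you were, regardless of which direction you phrased the prediction.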