PredictionBook: A Short Note

New Updates

PredictionBook has been updated and the speed improvements are massive. I was considering abandoning it because navigating the site was so slow (especially on my Android smartphone), but now I’m here to stay. Since there is no news feed on the website itself, it seemed appropriate to announce the update here. The layout has also changed slightly (for the better, IMO), and the probability assignments have been made more intuitive (something many newcomers were having trouble with). The updates, along with the (slowly) growing user base, make now a better time than ever to join. Thanks TrikeApps and Ivan Kozik!

My Experience So Far

I was one of those who were convinced to give PredictionBook a shot by gwern’s article. It has only been a month, so I am using up my willpower preventing myself from drawing any firm conclusions about my web of belief or sanity from such a small sample. Ultimately, my goal is to know what 5% feels like, and I believe I have already taken a small step toward doing so. Even if PredictionBook fails to improve my calibration to the point where I can intuitively feel what my degree of belief in a proposition is in percentage terms, it would still be worth it for the simple reason that I now have an inkling of how poorly calibrated I am. Finally, on at least three occasions I have found myself trying to cook up some rationalization to put off a task I probably should be doing, but have felt additional pressure not to give in to akrasia because “my prediction will be judged false”.

Admonition/Guilt Trip

Honestly, PredictionBook is a rationalist’s dream tool. How would you even know whether you are actually becoming less wrong without tracking your predictions? Eliezer Yudkowsky, Robin Hanson, Lukeprog, Yvain, Alicorn, Phil Goetz, wedrifid, Anna Salamon, Wei Dai, and cousin_it: where the hell are you? Do you doubt the potential of PredictionBook to improve your calibration and diminish akrasia, or is rationality really about “affiliating with an ideology or signaling one’s authority” and not about having the map reflect the territory? (Yes, I realize this is a false dichotomy, but it makes for better rhetoric.)

The Future of PredictionBook

I would like to see a few changes in the way PredictionBook works, plus a few new features. Sometimes the date field shows an incorrect number of minutes/days/years while you are entering a date (though it seems to fix itself once the prediction is submitted); this is a minor nitpick that should be easy to fix. Another small change would be to make the judging buttons more intuitive, since some newer members try pushing them instead of assigning a probability to the statement in the prediction.

It would be nice to have a measure of the user’s calibration that can be compared across time, so that the user can easily tell whether their calibration is getting better or worse. This could be something like a measure of how closely an OLS line fitted to the user’s predictions matches the line of perfect calibration in the graph on the user’s page (a rough sketch of the idea is below). Also, I would like to see some of the community conventions written down in an FAQ-like document (specifically, how conditional predictions should be judged), perhaps after they have been discussed.
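To make the idea concrete, here is a minimal sketch of what such a measure might look like. It is not part of PredictionBook; the function name, the binning scheme, and the use of Python are my own choices for illustration. It buckets a user’s predictions by assigned probability, fits an OLS line through the (assigned probability, observed frequency) points, and reports how far the fitted slope and intercept stray from the perfect-calibration line y = x.

```python
# Hypothetical sketch of the calibration measure suggested above.
import numpy as np

def calibration_gap(probs, outcomes, bins=10):
    """probs: assigned probabilities in [0, 1]; outcomes: 0/1 judged results."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo)
        if in_bin.any():
            xs.append(probs[in_bin].mean())     # average assigned probability in the bin
            ys.append(outcomes[in_bin].mean())  # fraction of those predictions that came true
    if len(xs) < 2:
        return float("nan")                     # not enough distinct bins to fit a line
    slope, intercept = np.polyfit(xs, ys, 1)    # OLS line through the bin points
    return abs(slope - 1.0) + abs(intercept)    # 0 would mean perfect calibration
```

Tracking a number like this month by month would give exactly the better-or-worse signal I am asking for.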

I should add that the source code is now available on GitHub, but it is written in Ruby (which I don’t have any experience with yet). If there are any Ruby programmers on the Less Wrong Public Goods Team, this might be something worth considering for a project.

Note: Now that I have called some people out, my probability estimates concerning who will make predictions on PredictionBook will have to be adjusted.

EDIT: There already exists a measure of calibration somewhat like the one I hinted at above; it’s called a Brier score.
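For reference, the Brier score is just the mean squared error between the probabilities you assign and the 0/1 outcomes. A quick illustrative snippet (again in Python, purely for exposition):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between assigned probabilities and 0/1 outcomes.
    0 is perfect; always saying 50% scores 0.25."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((probs - outcomes) ** 2)

# Example: brier_score([0.9, 0.7, 0.05], [1, 1, 0]) -> about 0.034
```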

EDIT 2: lukeprog has joined the PredictionBook community.