An update: We’ve set up a way to link your LessWrong account to your Elicit account. By default, all your LessWrong predictions will show up in Elicit’s binary database, but you can’t add notes or filter for your predictions. If you link your accounts, you can:

* Filter for and browse your LessWrong predictions on Elicit (you’ll be able to see them by filtering for ‘My predictions’)
* See your calibration for LessWrong predictions you’ve made that have resolved
* Add notes to your LessWrong predictions on Elicit
* Predict on LessWrong questions in the Elicit app

If you want us to link your accounts, send me an email (firstname.lastname@example.org) with your LessWrong username and your Elicit account email!
This was a good catch! I did actually mean world GDP, not world GDP growth. Because people have already predicted on the original versions, I added the corrected questions above as new questions and am leaving the previous questions here for reference:
If you’re the question author, you can resolve your question on Elicit by clicking ‘Yes’ or ‘No’ in the expanded question!
How to add your own questions:

1. Go to elicit.org/binary
2. Type your question into the field at the top
3. Click on the question title, then click the copy URL button
4. Paste the URL into the LessWrong editor

See our launch post for more details!
You can search for the question on elicit.org/binary and see the history of all predictions made! E.g., if you copy the question title in this post, then search by clicking Filter and pasting the title into “Question title contains,” you can find the question here.
I’m counting using this to express credence on claims as a non-prediction use!
Thanks!! It’s primarily intended for prediction, but I feel excited about people experimenting with different ways of using this and seeing which are most useful & how they change discussions, so am interested to see what happens if you use it for other purposes too.
I don’t feel super strongly about this, but think it’d be fun to bet on if anyone disagrees with me (here are the Metaculus resolution details): When will a technology replace screens? (snapshot link here)
This is a really good point, thanks for bringing this up! We’ll look into how to improve this.
Yeah, this seems pretty reasonable. It’s actually stark looking at the Our World in Data figures – that seems really high per year. Do you have your model somewhere? I’d be interested to see it.
A rough distribution (on a log scale) based on the two points you estimated for wars (95% < 1B people die in wars, 85% < 10M people die in wars) gives a median of ~2,600 people dying. Does that seem right?
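For reference, one simple way to turn two quantile points into a median is to fit a log-normal distribution, i.e. assume ln(deaths) is normal and solve the two resulting linear equations for its mean and standard deviation. This is an assumption on my part – the ~2,600 figure above presumably comes from the Elicit snapshot’s own distribution, so a two-point log-normal fit need not match it exactly, though it lands in the same ballpark:

```python
from statistics import NormalDist
import math

# Two quantile estimates from the comment above:
#   P(deaths < 1e9) = 0.95   and   P(deaths < 1e7) = 0.85
# Assuming ln(deaths) ~ Normal(mu, sigma), each quantile gives one
# linear equation:  mu + z * sigma = ln(x).
z95 = NormalDist().inv_cdf(0.95)
z85 = NormalDist().inv_cdf(0.85)

# Solve the 2x2 system for sigma, then mu.
sigma = (math.log(1e9) - math.log(1e7)) / (z95 - z85)
mu = math.log(1e9) - z95 * sigma

# The median of a log-normal is exp(mu) -- roughly 3,900 here,
# the same order of magnitude as the ~2,600 quoted above.
median = math.exp(mu)
print(round(median))
```

Note that the 85% point does most of the work: with only two quantiles, the fit is very sensitive to which distribution family you pick, which is why different tools can report medians a factor of two apart from the same inputs.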
I noticed that your prediction and jmh’s prediction are almost the exact opposite:
Teerth: 80%: No human being would be living on another celestial object (Moon, another planet or asteroid) (by 2030)
jmh: 90%: Humans living on the moon (by 2030)
(I plotted this here to show the difference, although this assumes you think the probability is ~uniformly distributed from 2030–2100.) Curious why you two differ so much? I’m especially curious about jmh’s, since 90% by 2030 is the more surprising figure: the Metaculus prediction for when the next human being will walk on the moon has a median of 2031.
Thank you for putting this spreadsheet database together! This seemed like a non-trivial amount of work, and it’s pretty useful to have it all in one place. Seeing this spreadsheet made me want:
* More consistent questions, so that all these people can make comparable predictions
* The ability to search and aggregate across these, so we can see what the general consensus is on various questions
I thought the 2008 GCR questions were really interesting, and plotted the median estimates here. I was surprised by / interested in:
* How many more deaths were expected from wars than from other disaster scenarios
* For superintelligent AI, most of the probability mass was < 1M deaths, but there was a high probability (5%) on extinction
* A natural pandemic was seen as more likely than an engineered pandemic to cause > 1M deaths (although less likely to cause > 1B deaths)
FYI, this is on a log scale. I plotted extinction as > 8B deaths.
(posted a similar comment on the EA forum link, since it seems like people are engaging more with this post there)
This is a really great conditional question! I’m curious what probability everyone puts on the assumption (GPT-N gets us to TAI) being true (i.e. do these predictions have a lot of weight in your overall TAI timelines)? I plotted human_generated_text’s and sairjy’s answers: