My favorite Less Wrong posts are almost always the parables and the dialogues. I find it easier to process and remember information that is conveyed in this way. They’re also simply more fun to read.
This post was originally written as an entry for the FTX Future Fund prize, which at the time of the original draft was a $1,000,000 prize. I did not win, partly because my entry wasn’t selected and partly because FTX imploded and the prize money vanished. (There is a lesson about the importance of proper calibration of the extrema of probability estimates somewhere in there.) In any case, I did not actually think I would win, because I was basically making fun of the contest organizers by pointing out that the whole ethos behind their prize specification was wrong. At the time, there was a live debate around timelines, and a lot of discussion of the bio-anchors paper, which itself made in microcosm the same mistakes I was pointing at.
Technically, the very first draft of this post was an extremely long and detailed argument for short AGI timelines that I co-wrote with my brother, but I realized while writing it that the presumption that long and short timelines should in some sense be averaged together to get a better estimate was pervasive in the zeitgeist and needed to be addressed on its own.
I am happy with this post because it started a conversation that I thought needed to be had. My whole shtick these days is that our community has seemingly tried to skip over decision theory basics in favor of esoterica, to our collective detriment, and I feel like writing this post explicitly helped with that.
I am happy to have seen this post referenced favorably elsewhere. I think I wrote it about as well as I could have, given that I was going for the specific Less Wrong Parable stylistic thing and not trying to write literary fiction.