I wasn’t aware of the bounty until seeing this comment, but I am a big fan of planecrash, both as a work of fiction and as pedagogy.
I wrote one post that built on the corrigibility tag in planecrash, and another on understanding decision theory, which isn’t directly based on anything in planecrash but is loosely inspired by some things I learned from reading it.
(Neither of these posts appear to meet the requirements for the bounty, and they didn’t get much engagement in any case. Just pointing them out in case you or anyone else is looking for some planecrash-inspired rationality / AI content.)
The bounty remains open, but I’m no longer excited about it, for three reasons:
lack of evidence that glowfic has been an important positive influence on rationality,
Eliezer now speaking in the public sphere (some would argue too much), and
the generally increasing quality and decreasing weirdness of alignment research.