AI risk-related improvements to the LW wiki
Back in May, Luke suggested the creation of a scholarly AI risk wiki, which was to include a large set of summary articles on topics related to AI risk, mapped out in terms of how they related to the central debates about AI risk. In response, Wei Dai suggested that, among other things, the existing Less Wrong wiki could be improved instead. As a result, the Singularity Institute has substantially improved the LW wiki, in preparation for a more ambitious scholarly AI risk wiki. The outcome was the creation or dramatic expansion of the following articles:
In managing the project, I focused on content over presentation, so a number of articles still have minor issues, such as grammar and style that could be improved. It's our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.
Thanks to everyone who worked on these pages: Alex Altair, Adam Bales, Caleb Bell, Costanza Riccioli, Daniel Trenor, João Lourenço, Joshua Fox, Patrick Rhodes, Pedro Chaves, Stuart Armstrong, and Steven Kaas.
I’ve watched a lot of these edits through the RSS feed as part of my daily spam-fighting; good work everyone!
Give this man some upvotes for his daily spam-fighting, as well as for his assistance when auto-bans targeted at spammers accidentally hit us. :)
Great work! That is a lot of updated pages.
Thanks. :)
This is awesome. Thanks for doing all that work.
Thanks. :)
LW wiki articles I wish LWers would write/expand:
Iterated embryo selection (update: AlexMennen wrote it)
Doomsday argument (update: AlexMennen wrote it)
Simpleton gambit (update: AlexMennen wrote it)
Delusion box (update: AlexMennen wrote it)
Causality
Robot’s Rebellion
Dysrationalia
Epistemic prisoner’s dilemma (update: D_Malik wrote it)
Counterfactual resiliency (update: AlexMennen wrote it)
Personal identity (update: AlexMennen wrote it)
Adversarial collaboration (update: AlexMennen wrote it)
Imagination inflation
Has someone watchlisted these pages to make sure no one accidentally makes them less accurate in the process of improving their presentation?
I am pretty excited about the AI risk wiki.
A key element in making use of this wiki will be setting up a system that blocks spammers from registering accounts. Perhaps there should be a CAPTCHA with an answer that only a genuine Less Wronger would know? Anyone who knows how to set this up would be a tremendous help.
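For what it's worth, MediaWiki (which the LW wiki runs on) supports exactly this via the ConfirmEdit extension's QuestyCaptcha module, which poses a custom question-and-answer challenge at account creation. A rough sketch of what the `LocalSettings.php` configuration might look like — the question/answer pair below is just a placeholder, and the exact `require_once` paths depend on the MediaWiki version installed:

```php
<?php
# LocalSettings.php — hypothetical QuestyCaptcha setup (ConfirmEdit extension).
# Paths assume ConfirmEdit is installed under extensions/ConfirmEdit.
require_once "$IP/extensions/ConfirmEdit/ConfirmEdit.php";
require_once "$IP/extensions/ConfirmEdit/QuestyCaptcha.php";
$wgCaptchaClass = 'QuestyCaptcha';

# A question only a genuine Less Wronger is likely to answer.
# (Placeholder — add several of your own question/answer pairs.)
$wgCaptchaQuestions[] = array(
    'question' => "Who wrote the Sequences? (surname, lowercase)",
    'answer'   => "yudkowsky",
);

# Challenge account creation (and link-adding), not ordinary edits.
$wgCaptchaTriggers['edit']          = false;
$wgCaptchaTriggers['create']        = false;
$wgCaptchaTriggers['addurl']        = true;
$wgCaptchaTriggers['createaccount'] = true;
```

Site-specific questions like this tend to beat image CAPTCHAs against wiki spambots, since the bots are built to solve generic challenges, not community trivia.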