Reviews for the Alignment Forum

We (Adam Shimi, Joe Collman, and Jérémy Perret) are reviewing posts on the Alignment Forum to get a better idea of what peer review for AI alignment would look like.

[Question] Suggestions of posts on the AF to review

Review of “Fun with +12 OOMs of Compute”

Review of “Learning Normativity: A Research Agenda”