Looking for judges for critiques of Alignment Plans

Hello!

AI-Plans.com recently held a “critique-a-thon,” where participants submitted and refined 40+ critiques of AI alignment plans. Here are the finalized critiques from the event: https://docs.google.com/document/d/1mW4SAxFN_aI6KyYXpl9qz5B9nVdeV9Xyc69GTNme5cA/edit?usp=sharing

We are looking for anyone who might be interested in helping judge the 11 finalized critiques.

So far, we are grateful to have had the assistance of Dr Peter S Park (MIT postdoc in the Tegmark lab, Harvard PhD) and Aishwarya G (AI Existential Safety Community Member at the Future of Life Institute and Governance Course Facilitator for BlueDot Impact's AI Safety Fundamentals), as well as several independent alignment researchers.

I would love to hear your thoughts!

Kabir Kumar (Founder, AI-Plans.com)