eggsyntax

AI safety & alignment researcher

In Rob Bensinger’s typology: AGI-alarmed, tentative welfarist, and eventualist.

Public stance: AI companies are actively trying to build ASI (AI much smarter than humans), and they have a real chance of succeeding. No one currently knows how to build ASI without an unacceptable level of existential risk (> 5%). Therefore, companies should be forbidden from building ASI until we know how to do so safely.

I have signed no contracts or agreements whose existence I cannot mention.