AI safety & alignment researcher
In Rob Bensinger’s typology: AGI-wary/alarmed, welfarist, and eventualist.
Public stance: AI companies are doing their best to build ASI (AI much smarter than humans), and have a chance of succeeding. No one currently knows how to build ASI without an unacceptable level of existential risk (> 5%). Therefore, companies should be forbidden from building ASI until we know how to do it safely.
I have signed no contracts or agreements whose existence I cannot mention.

Hi Tristan! I can’t currently respond in detail due to time constraints, but I think you’ve got some really interesting insights here, especially your first two top-level bullet points, and I strongly encourage you to write them up into a full post. A couple of quick thoughts:
This whole section makes some great points that I think are worth expanding on!
Agreed. I expect that analytical tools from multiple fields can usefully be brought to bear here: multi-agent AI research, sociology, political science, maybe others, possibly even analysis of how religions spread. It seems like a fruitful research direction.
My intuition is somewhat different: I agree that there will be a few applications that some people get excited about and/or build startups around, but my guess is that the majority opinion will be that PSRs are dangerous and shouldn't be allowed.
As written, it's not clear what benefit this lens provides, and I think we should generally avoid introducing jargon unless it clearly earns its keep. If you think it's a really useful lens, I'd suggest making the case for it separately somewhere (even as a short post).