AI safety & alignment researcher
In Rob Bensinger’s typology: AGI-wary/alarmed, welfarist, and eventualist.
Public stance: AI companies are doing their best to build ASI (AI much smarter than humans), and have a chance of succeeding. No one currently knows how to build ASI without an unacceptable level of existential risk (> 5%). Therefore, companies should be forbidden from building ASI until we know how to do it safely.
I have signed no contracts or agreements whose existence I cannot mention.

Fair point, although I think that can also be hard to determine prior to knowing what will actually work in the real world. One reason I chose early communism as a counterexample is that, as Scott Alexander has pointed out (e.g. here), quite a lot of smart, thoughtful people took communism seriously at that time, and were reasonable to do so.