Nuclear engineer with a focus on nuclear plant safety and probabilistic risk assessment. Aspiring EA, interested in x-risk mitigation and the intersection of science and policy. Working toward Keegan/Kardashev/Simulacra level 4.
(Common knowledge note: I am not under a secret NDA that I can’t talk about, as of Mar 15 2025. I intend to update this statement at least once a year as long as it’s true. Update 2026: I am currently working on small modular reactor development at X-Energy.)
It seems like the point of his post was specifically to promote enmity, which is usually bad. Perhaps he thinks it's good in this case because if AI companies and the government are fighting each other, that could increase the likelihood that the government does something to slow down AI capabilities. I can't rule that out, but it seems like the kind of 4D chess I wouldn't want to meddle in. Things can change quickly, and none of us fully understands the consequences of our words and actions. So even if what he's saying is true, saying true things in a confrontational way is not always helpful. In this case, I think that if his words have any meaningful effect, it will likely be to galvanize governments to restrict AI developers' safety measures in the name of sovereignty.