Technical AI alignment/control is still impactful; don’t go all-in on AI gov!
- Liability incentivizes safeguards, even absent regulation.
- Cheaper, more effective safeguards make it easier for labs to meet safety standards.
- Concrete, achievable safeguards give regulation teeth.
Also, there’s a good chance AI gov won’t work, and labs will just have a very limited safety budget to implement their best-guess mitigations. And even if AI gov does work and we get a large budget, we still need to actually solve alignment.
There are definitely still benefits to doing alignment research, but this only justifies the idea that doing alignment research is better than doing nothing.
IMO the thing that matters (for an individual making decisions about what to do with their career) is something more like “on the margin, would it be better to have one additional person do AI governance or alignment/control?”
I happen to think that, given the current allocation of talent, it’s generally better on the margin for people to choose AI policy (particularly efforts to contribute technical expertise or technical understanding/awareness to governments, think tanks interfacing with governments, etc.). There is a lot of demand in the policy community for these skills/perspectives and few people who can provide them. In contrast, technical expertise is much more common at the major AI companies (though perhaps some specific technical skills or perspectives on alignment are neglected).
In other words, my stance is something like “by default, a given technical person would have more expected impact in AI policy unless they seem like an unusually good fit for alignment or an unusually bad fit for policy.”
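To make the “on the margin” framing concrete, here is a toy sketch in Python. The field sizes and the square-root impact curve are purely illustrative assumptions, not estimates: the point is only that if total impact has diminishing returns to headcount, one additional person matters more in the smaller, more talent-constrained field.

```python
import math

# Toy model of marginal impact under diminishing returns.
# All numbers below are illustrative placeholders, not real estimates.

def marginal_impact(field_size: int) -> float:
    """Impact added by one more person, assuming total impact ~ sqrt(headcount)."""
    return math.sqrt(field_size + 1) - math.sqrt(field_size)

policy_headcount = 300      # hypothetical: few people doing technical policy work
alignment_headcount = 3000  # hypothetical: many more doing technical work at labs

print(f"marginal impact in policy:    {marginal_impact(policy_headcount):.4f}")
print(f"marginal impact in alignment: {marginal_impact(alignment_headcount):.4f}")
# Under these assumptions, the marginal person contributes roughly 3x more
# in the smaller field, which is the intuition behind the claim above.
```

Obviously the real comparison also depends on how valuable progress in each field is and on personal fit; the sketch only illustrates why the current allocation of talent matters for the marginal decision.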
I’m open to this argument, but I’m not sure it’s true under the Trump administration.
My understanding is that AGI policy is pretty wide open under Trump. I don’t think he or most of his close advisors have entrenched views on the topic.
If AGI is developed under this administration (or we approach it during this administration), I suspect there is a lot of EV on the table for folks who are able to explain core concepts/threat models/arguments to Trump administration officials.
There are some promising signs of this so far. Publicly, Vance has engaged with AI2027. Non-publicly, I think there is a lot more engagement/curiosity than many readers might expect.
This isn’t to say “everything is great and the USG is super on track to figure out AGI policy.” It’s more to say “I think people should keep an open mind: even people who disagree with the Trump administration on mainstream topics should remember that AGI policy is a weird/niche/new topic where lots of people do not have strong/entrenched/static positions (and even those who do have a position may change their mind as new events unfold).”
What are your thoughts on the relative value of AI governance/advocacy vs. technical research? It seems to me that many of the technical problems are essentially downstream of politics: intent alignment could plausibly be solved if race dynamics were mitigated, regulation were used to slow capabilities research, and alignment were given funding and strategic priority.
This is exactly the message we need more people to hear.
What’s missing from most conversations is this: liability for frontier models will create massive legal bottlenecks soon, and regulations are nowhere near ready (not even in the EU with the AI Act).
Law firms and courts will need technical safety experts.
Not just to inform regulation, but to provide expert opinions when opaque model behaviors cause harm downstream, often in ways that weren’t detectable during testing.
The legal world will be forced to allocate responsibility in the face of emergent, stochastic failure modes. Without technical guidance, there are no safeguards to enforce, and no one to translate model failures into legal reasoning.