AI Governance & Strategy: Priorities, talent gaps, & opportunities

Over the last few weeks, I’ve had 1:1s with several AI governance professionals to develop better models of the current AI governance & strategy landscape. Some topics that regularly came up include:

  • What are the most important/neglected areas in AI governance & strategy?

  • What is the theory of change behind projects in these areas, and where might they break?

  • What are the current talent gaps & opportunities in these areas?

This post is my attempt to summarize some takeaways from those conversations. I list some “priority areas” in AI governance & strategy, summarize them briefly, and describe potential talent gaps in each area. I don’t claim that my list is comprehensive, and I welcome people to add their own ideas in the comments.

If you think you may have some of the relevant talents/aptitudes and are interested in working in any of these areas, feel free to reach out to me, and I may connect you to relevant professionals. (Feel free to have a low bar for reaching out; I’ll ask you for more information if needed.)

Please also be aware that there are downside risks in each of these areas. I suggest you get in touch with relevant professionals before “jumping in” to any of these areas.

Priority Areas

By “priority areas,” I mean topics that frequently came up in my conversations with AI governance professionals. Caveats: this is not a rigorous method, the list is not comprehensive, some topics were excluded intentionally, the list probably overweights topics that I (on my inside view) consider valuable, and priorities will inevitably change as the field continues to evolve.

For each priority area, I offer a brief summary, as well as a description of the kinds of career aptitudes that might make someone an especially good fit for working in the area.

Model evaluations

Summary: There are many ways models could be dangerous, but it’s difficult to detect these failure modes. Can we develop and implement “tests” that help us determine if a model is dangerous?

Some people are working on technical tests that can determine if a model has dangerous capabilities or appears to be misaligned. Others are thinking more broadly about what kinds of evals would be useful. Some people are focused on creating agreements that labs or governments could implement (e.g., if a Deception Eval is triggered, everyone agrees to stop scaling until Y evidence is acquired).

Current gaps:

  • Engineers: People with strong engineering backgrounds are needed to build/implement technical evals.

  • Strong conceptual alignment models: People with strong models of alignment and LLMs are needed to figure out what kinds of evals ought to be built, what specific failure modes might show up in the upcoming years, and what kind of “counterevidence” would be required to justify scaling after an eval has been triggered.

  • Experience implementing multi-stakeholder agreements: People with experience implementing agreements across a variety of stakeholders are needed to figure out how evals will be implemented across the industry, what kinds of agreements will be feasible, which stakeholders will need to sign on, and what kinds of needs/concerns those stakeholders are likely to have.

Additional resources: See this post by Beth, this post by me, and this paper by Ethan Perez.

Compute governance

Summary: AI progress has largely been driven by compute. Can we understand compute trends and identify regulations based on compute?

Current gaps:

  • Technical talent: People with backgrounds in technical fields are needed to support AI governance professionals who are thinking about compute governance. Hardware knowledge is especially helpful, but even broad technical skills are useful.

  • Experience thinking about regulations and complex systems: People with these skills could support AI governance professionals who are thinking about the design of compute monitoring standards, methods to verify that those standards have been implemented, and methods to enforce compliance with those standards.

Additional resources: See this sequence and this reading list by Lennart Heim, as well as this post by Mauricio.

Information security

Summary: AI labs hold valuable information that adversaries may try to steal, and AI systems themselves may become capable of assisting with, or even autonomously conducting, cyberattacks. Furthermore, security professionals often possess a deep security mindset, which could inform many of the decisions AI labs will make in the upcoming years. Can security professionals help AI labs avoid information security risks and generally cultivate a culture centered on security mindset?

Current gaps:

  • Security professionals: People with a background in information security, cryptography, and other professions that foster a security mindset.

  • Generalists with an interest in upskilling: One security professional (whose judgment I trust) told me to send them “anyone who is interested in reducing existential risks and potentially interested in upskilling in security stuff.” It seems to me that the space is very neglected at the moment, and it might be a good opportunity for aligned generalists to contribute.

Additional resources: See this post by Jeffrey Ladish and Lennart Heim, this post by elspood, this post by Eliezer Yudkowsky, and the information security section in this post by Holden Karnofsky.

Publication and model-sharing policies

Summary: AI labs face difficult decisions about whether to publish research findings and how widely to share models. Can we develop and implement reasonable policies that capture the benefits of sharing while mitigating the risks?

Current gaps:

  • Interdisciplinary domain knowledge: People who have a background in fields that often encounter dual-use publication concerns (e.g., biological sciences, cybersecurity, nuclear technologies).

  • Generalist researchers: People who are good at quickly synthesizing research across a variety of fields and thinking about cost-benefit tradeoffs.

Additional resources: See this paper by Toby Shevlane and Allan Dafoe, this paper by Nick Bostrom, and this paper by Toby Shevlane.

Communicating about AI x-risk

Summary: Several governance ideas will require that policymakers, industry leaders, and other groups have a strong understanding of the dangers and potential catastrophic risks of advanced AI systems. How can we communicate ideas and threat models clearly and responsibly to these audiences?

Current gaps:

  • Excellent communicators: People who have aptitudes in writing, podcasting, and communicating ideas in high-fidelity ways to diverse audiences.

  • Policy experience: People who have a background communicating technical ideas to policymakers.

  • Strong models of AI risk & good judgment about what ideas are worth spreading: Communicating about AI risk is tricky, and there are many ways that ideas/memes about AI safety can be misinterpreted.

Additional resources: See this post by Holden Karnofsky.

Final thoughts

As mentioned, please feel free to reach out if you have relevant skills/aptitudes and think you may want to contribute in any of these areas.

For each of these areas, I’m aware of professionals/researchers who are interested in talking with junior folks who have relevant skills & backgrounds.

Also, be aware that there are downside risks, and standard advice applies: talk to people before doing things, remember that it is easy for well-intentioned people to accidentally produce net-negative work, and be wary of taking unilateralist actions.

With that in mind, I’m excited to see more people thinking carefully and seriously about these topics. I hope you think about ways you might be able to contribute in some of these areas or identify areas that aren’t on this list.

I’m grateful to Lennart Heim and Jeffrey Ladish for providing feedback on sections of this post.

Crossposted to the EA Forum.