I think this is a helpful overview post. It outlines challenges to a mainstream plan (bootstrapped alignment) and offers a few case studies of how entities in other fields handle complex organizational challenges.
I’d be excited to see more follow-up research on organizational design and organizational culture. This work might be especially useful for helping folks think about various AI policy proposals.
For example, it seems plausible that at some point the US government will come to view certain kinds of AI systems as critical national security assets. At that point, the government might become a lot more involved in AI development. A new entity might be created (e.g., a DoD-led “Manhattan Project”), or the existing landscape might persist but with a lot more oversight (e.g., NSA and DoD folks show up as resident inspectors at OpenAI/DeepMind/Anthropic and have a lot more decision-making power).
If something like this happens, there will be a period of time in which relevant USG stakeholders (and AGI lab leaders) have to think through questions about organizational design, organizational culture, decision-making processes, governance structures, etc. It seems likely that some versions of this look a lot better than others.
Examples of questions I’d love to see answered/discussed:
Broadly, what are the 2-3 most important design features or considerations for an AGI project?
Suppose the USG becomes interested in (soft) nationalizing certain kinds of AI development. A senior AI policy advisor comes to you and asks, “What do you think are the most important things we need to get right?”
What are the best lessons learned from case studies of high-reliability organizations in other fields (e.g., BSL-4 labs, nuclear facilities, health care facilities, military facilities)?
What are the best lessons learned from other instances in which governments or national security officials attempted to start new organizations or exert more control over existing ones (e.g., via government contracts)?
(I’m approaching this from a government-focused AI policy lens, partly because I suspect that whoever ends up in charge of the “What Should the USG Do” decision will have spent less time thinking through these considerations than Sam Altman or Demis Hassabis. Also, it seems like many insights would be hard to implement in the context of the race toward AGI. But I suspect there might be insights here that are valuable even if the government doesn’t get involved & it’s entirely on the labs to figure out how to design/redesign their governance structures or culture toward bootstrapped alignment.)