Strong upvote for addressing what I feel is a neglected subject.
It feels like it would be helpful to state explicitly that working towards AI alignment and working against the development of misaligned AIs are not necessarily the same thing. In casual discussions of the subject we usually point to a military or a multinational corporation as the kind of actor that would build an AI lab, drive towards AGI, and then unleash the poorly-aligned result. The policy/strategy question goes directly to their behavior.
It seems like the time we have available to get this right is heavily influenced by how these other actors make decisions, and currently there is no particular pressure on them to make good ones. I'd like to toss out a few other potential benefits of a strategy/policy echelon:
1. It would serve as a contact surface for people already working in strategy and policy to engage with AI safety. Currently they have to find their way in through personal interest, the same as the rest of us.
2. Aside from the institutional examples Richard provided, I would point to Jean Monnet and the formation of the precursors to the European Union. Individuals are in a position to exert very large influence if they have a framework ready when the opportunity presents itself.
3. Consider the risk of being unprepared if AI risk comes to the forefront of public consciousness and the government decides to act. The converse of Ben's example, where politicians abandon projects when the public loses interest, is that public outrage can drive the government into hasty action. For example, if Russia or China deploys a next-generation narrow AI in its weapons systems, an American military AI test goes badly wrong, or a commercial implementation of narrow AI causes casualties, the government may move to regulate AI research and funding, and there is no reason to expect that law would be any better than the computer crime laws we have now. I would go so far as to say that AI is the best candidate for a new Sputnik moment, which seems like it would drive the incentives heavily in a direction we do not want.