Shaping economic incentives for collaborative AGI

In “An AI Race for Strategic Advantage: Rhetoric and Risks” (2018), Stephen Cave and Seán S. ÓhÉigeartaigh argue that we should try to promote a cooperative AI narrative over a competitive one:

The next decade will see AI applied in an increasingly integral way to safety-critical systems; healthcare, transport, infrastructure to name a few. In order to realise these benefits as quickly and safely as possible, sharing of research, datasets, and best practices will be critical. For example, to ensure the safety of autonomous cars, pooling expertise and datasets on vehicle performances across as wide as possible a range of environments and conditions (including accidents and near-accidents) would provide substantial benefits for all involved. This is particularly so given that the research, data, and testing needed to refine and ensure the safety of such systems before deployment may be considerably more costly and time-consuming than the research needed to develop the initial technological capability.
Promoting recognition that deep cooperation of this nature is needed to deliver the benefits of AI robustly may be a powerful tool in dispelling a ‘technological race’ narrative; and a ‘cooperation for safe AI’ framing is likely to become increasingly important as more powerful and broadly capable AI systems are developed and deployed. [...]
There have been encouraging developments promoting the above narratives in recent years. ‘AI for global benefit’ is perhaps best exemplified by the 2017 ITU summit on AI for Global Good (Butler 2017), although it also features prominently in narratives being put forward by the IEEE’s Ethically Aligned Design process (IEEE 2016), the Partnership on AI, and programmes and materials put forward by Microsoft, DeepMind and other leading companies. Collaboration on AI in safety-critical settings is also a thematic pillar for the Partnership on AI. Even more ambitious cooperative projects have been proposed by others, for example the call for a ‘CERN for AI’ from Professor Gary Marcus, through which participants “share their results with the world, rather than restricting them to a single country or corporation” (Marcus 2017).


In order to make future AGI projects more collaborative and cooperation-focused, could we create incentives (via e.g. government policy) that would push today’s machine learning researchers towards more collaborative attitudes?

This might seem irrelevant, given that today’s machine learning researchers are mostly not working on AGI. However, external incentives can shape the internal norms of a culture. For example, holding companies responsible for accidents at their workplaces gives them an incentive to reduce accidents, which in turn gives them an incentive to create an internal culture of safety where everyone takes safety concerns seriously. Once such a culture is established, it takes on a life of its own: it gets propagated to future workers through the various sociological mechanisms by which norms and cultures normally propagate themselves, and may persist even if the external incentives which originally created it are later changed.

So my idea is something like:

  • figure out the kinds of external incentives that would push today’s machine learning companies and research in a more collaborative direction

  • implement these incentives via the right policies, causing the field to more generally adopt the kinds of values and norms where collaboration is seen as a good thing

  • rely on the fact that, to the extent that the field which ends up developing AGI is a descendant of today’s AI research field, the collaborative norms and values of today’s field will be inherited by that future field, shifting its prevailing attitudes away from “arms race” framings and increasing the chances of AGI being developed collaboratively

In a discussion, James Miller suggested that—among other things—codes of conduct, intellectual property laws, antitrust laws, tort law, and international agreements/tariffs might be policy tools which could be used to shape external incentives.

A possible addition that comes to mind is privacy law: at least current ML systems require a lot of data, and there have been many demands (e.g.) to rein in the ability of companies to collect information on people, information which could, among other things, be used to train ML systems. For instance, the GDPR (which might be enforced more strictly after the recent Facebook revelations) establishes things like “Automated individual decision-making, including profiling [...] is contestable [...] Citizens have rights to question and fight significant decisions that affect them that have been made on a solely-algorithmic basis”. To the extent that decisions made by algorithms can be contested by the people affected by them, companies have an incentive to be cooperative and e.g. jointly develop the kinds of standards they can follow to ensure that decisions made by their systems will be upheld in court. (Doshi-Velez et al. (2017) is a paper attempting to establish some standards for how a legal right to an explanation from AI systems could be met.)

Some other thoughts:

It might be worth thinking about a more specific definition of “cooperativeness”. For instance, one form of cooperativeness is openness in AI development. Openness seems worth distinguishing from other forms of cooperation: while general cooperativeness may make things safer, openness may make them less safe. At the same time, I would intuitively think that non-openness would be hard to reconcile with cooperativeness, so perhaps it is unavoidable that cooperativeness leads to at least some degree of openness. (Bostrom (2017) notes on page 9 that openness could make AI development more competitive, but also more cooperative, if it removes incentives for competition: “The more that different potential AI developers (and their backers) feel that they would fully share in the benefits of AI even if they lose the race to develop AI first, the less motive they have for prioritizing speed over safety, and the easier it should be for them to cooperate with other parties to pursue a safe and peaceful course of development of advanced AI designed to serve the common good.”)

As Baum (2017) points out, it’s important to consider how AI developer communities react to external rules: if e.g. safety regulations are viewed as pointless annoyances, that may cause a lot of resentment. It’s also easy to adopt a patronizing mindset here: “how could we get AI developers to understand that they shouldn’t destroy the world?” We shouldn’t think about it that way (that’s not a particularly collaborative mindset 😉).

Rather, the better mindset is something like this: most people don’t want to destroy the world, AI developers included. But it’s easy to end up in situations where everyone has a rational incentive to do something that nobody wants. So what we want is to collaboratively design mechanisms that end up supporting people in better fulfilling their own preference for not destroying the world.
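The situation described above, where each party’s individually rational choice produces an outcome nobody wants, can be illustrated with a toy two-developer “race” game. The payoff numbers below are purely illustrative assumptions (not from this article or its sources), chosen only so that rushing is each player’s dominant strategy even though mutual caution is better for both; the `shared_payoff` mechanism is a hypothetical sketch of Bostrom-style benefit sharing.

```python
# Toy one-shot "race" game between two AI developers.
# Payoff numbers are illustrative assumptions only.
PAYOFFS = {
    # (my_action, their_action): my_payoff
    ("cautious", "cautious"): 3,  # safe development, shared benefits
    ("cautious", "rush"): 0,      # I lose the race entirely
    ("rush", "cautious"): 4,      # I win the race, despite some risk
    ("rush", "rush"): 1,          # risky race, accidents likely
}

def best_response(their_action):
    """My payoff-maximizing action against a fixed opponent action."""
    return max(("cautious", "rush"),
               key=lambda mine: PAYOFFS[(mine, their_action)])

# Rushing is dominant: it's the best response to either opponent action...
assert best_response("cautious") == "rush"
assert best_response("rush") == "rush"
# ...yet both players prefer mutual caution over mutual rushing.
assert PAYOFFS[("cautious", "cautious")] > PAYOFFS[("rush", "rush")]

# Hypothetical benefit-sharing mechanism: the race winner transfers part
# of the prize to the loser, shrinking the reward for defecting.
def shared_payoff(mine, theirs, transfer=2):
    p = PAYOFFS[(mine, theirs)]
    if mine == "rush" and theirs == "cautious":
        p -= transfer  # winner gives up part of the prize
    if mine == "cautious" and theirs == "rush":
        p += transfer  # loser still shares in the benefits
    return p

def best_response_shared(their_action):
    return max(("cautious", "rush"),
               key=lambda mine: shared_payoff(mine, their_action))

# With enough sharing, caution becomes each player's best response.
assert best_response_shared("cautious") == "cautious"
assert best_response_shared("rush") == "cautious"
```

This is the structure of Bostrom’s observation quoted earlier: the more each developer expects to share in the benefits even if they lose the race, the weaker their incentive to prioritize speed over safety.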

(Thanks to James Miller and to my colleagues at the Foundational Research Institute for discussions that contributed to this article.)