Risk Contracts: A Crackpot Idea to Save the World

Time start: 18:17:30

I

This idea is probably going to sound pretty crazy. As far as seemingly crazy ideas go, it’s high up there. But I think it is interesting enough to at least amuse you for a moment, and upon consideration your impression might change. (Maybe.) And as a benefit, it offers some insight into AI problems if you are into that.

(This insight into AI may or may not be new. I am not an expert on AI theory, so I wouldn’t know. It’s elementary, so probably not new.)

So here it is, in a short form that I will expand on in a moment:

To manage global risks to humanity, we can capture them in “risk contracts”, freely tradeable on the market. Risk contracts would serve the same role as CO2 emission permits, which can likewise be traded, and which ensure that the global cap is not exceeded as long as everyone plays by the rules.

So e.g. if I want to run a dangerous experiment that might destroy the world, it’s totally OK as long as I can purchase enough of a risk budget. Pretty crazy, isn’t it?

As an added bonus, a risk contract can take into account the risk of someone else breaking its terms. When you transfer your rights to global risk, the contract obliges you to diminish the amount you transfer by your uncertainty about the other party being able to fulfill all the obligations that come with such a contract. And if your risk budget is too small to cover that uncertainty, you cannot transfer to that person.
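
To make the discount concrete, here is a minimal sketch with made-up numbers (the variable names and the specific probabilities are just illustrative assumptions, not part of any real contract):

    # A toy transfer with a breach discount, using made-up numbers.
    my_budget = 5e-12   # my remaining risk budget (see rule 1 below)
    transfer = 1e-13    # amount of budget the other agent should receive
    p_breach = 1e-14    # my conservative estimate that the agent breaks the contract

    # A breach is accounted as if the world were destroyed (rule 4 below),
    # so the transfer costs me the transferred amount plus the breach risk.
    cost = transfer + p_breach
    if cost <= my_budget:
        my_budget -= cost   # the other agent now holds `transfer` as its budget
    else:
        print("not enough budget: the contract obliges me to refuse the transfer")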

II

Let’s go into a little more detail about a risk contract. Note that this is meant to illustrate the idea, not to be the final say on the shape and terms of such a contract.

Just to give you some idea, here are some example rules (with lots of room to specify them more precisely; they are really just meant to give you a clearer idea of what I mean by a “risk contract”). A code sketch of the rules follows the list:

  1. My initial risk budget is 5 * 10^-12 chance of destroying the world. I am going to track this budget and do everything in my power to make sure that it never goes below 0.

  2. For every action (or set of correlated actions) I take, I will subtract the probability that those actions destroy the world from my budget (simple subtraction is conservative by the union bound; only sets of highly correlated actions need to be accounted for jointly).

  3. If I transfer part of my budget to an agent who is going to decide on its actions independently of me, I will first pay from my budget for the probability that this agent might not keep the terms of the contract. I will use my best conservative estimates, and refuse the transaction if I cannot keep the risk within my budget.

  4. Any event in which a risk contract on world destruction is breached will draw on my budget as if it were equivalent to actually destroying the world.

  5. Whenever I create a new intelligent agent, I will transfer some risk budget to that agent, according to the rules above.
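
Here is the promised sketch: a minimal, hypothetical implementation of rules 1–3 and 5 (the class and method names are my own invention, and the numbers are made up), just to show that the accounting is simple:

    # A minimal sketch of rules 1-3 and 5. All names (RiskBudget, spend,
    # transfer_to_new_agent) and all numbers are illustrative assumptions.

    class RiskBudget:
        def __init__(self, budget):
            self.budget = budget  # rule 1: remaining probability I may "spend"

        def spend(self, p_destroy):
            """Rule 2: deduct an action's probability of destroying the world."""
            if p_destroy > self.budget:
                raise ValueError("action would push my risk budget below 0")
            self.budget -= p_destroy

        def transfer_to_new_agent(self, amount, p_breach):
            """Rules 3 and 5: endow an independent agent with `amount`,
            first paying my conservative breach estimate `p_breach`."""
            self.spend(p_breach)       # rule 3: the breach risk is on me
            self.spend(amount)         # the transferred amount leaves my budget
            return RiskBudget(amount)  # rule 5: the child starts with `amount`

    # Example: start with the budget from rule 1, run one risky experiment,
    # then create a child agent with made-up estimates.
    me = RiskBudget(5e-12)
    me.spend(1e-13)                                 # a dangerous experiment
    child = me.transfer_to_new_agent(1e-12, 5e-14)  # child gets 1e-12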

III

Of course, the applications of this could be wider than just an AI which might recursively self-improve: some more “normal” human applications could be risk management in a company or government, or even using risk contracts as an internal currency to make better decisions.

I admit, though, that the AI case is pretty special: it gives us an opportunity to actually control the ability of another agent to keep the risk contract we are giving to it.

It is an interesting calculation to see roughly what it costs to keep a risk contract in the recursive AI case, under a lot of simplifying assumptions. Assume that the risk of a child AI going off the rails can be reduced by a constant factor (e.g. cut in half) for each additional unit of safety work. Also assume that the chain of child AIs might continue indefinitely, and that no later AI will assume it ends after finitely many steps. Then, if the chain has no branches, we are basically reduced to a geometric series: the risk budget of a child AI is always the same fraction of its parent’s budget. Since each unit of work cuts the risk by a constant factor, pushing the risk down to that exponentially shrinking target takes an amount of work that grows linearly with each step. That in turn means that the total amount of safety work is quadratic in the number of steps (child AIs).
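
A quick sanity check of that argument, as a sketch (the halving-per-unit-of-work assumption and all the numbers are made up):

    import math

    F = 0.5      # fraction of the parent's budget given to each child (assumed)
    B0 = 5e-12   # initial risk budget from rule 1

    def work_for_step(n):
        # Each unit of work halves the breach risk (from a baseline of 1),
        # so reaching the step-n target F**n * B0 takes log2(1/target) units.
        target = (F ** n) * B0
        return math.log2(1 / target)

    total = 0.0
    for n in range(1, 6):
        w = work_for_step(n)
        total += w
        print(f"step {n}: {w:.1f} units of work, {total:.1f} cumulative")

    # Per-step work is linear in n (here roughly n + 37.5), so the
    # cumulative total grows quadratically in the number of child AIs.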

Time end: 18:52:01

Writing stats: 21 wpm, 115 cpm (previous: 30167, 33183, 23128)