Online Learning 2: Bandit learning with catastrophes
Note: This describes an idea of Jessica Taylor’s.
The usual training procedures for machine learning models are not always well-equipped to avoid rare catastrophes. In order to maintain the safety of powerful AI systems, it will be important to have training procedures that can efficiently learn from such events. [1]
We can model this situation with the problem of exploration-only online bandit learning. In this scenario, we grant the AI system an exploration phase, in which it is allowed to select catastrophic arms and view their consequences. (We can imagine that such catastrophic selections are acceptable because they are simulated or just evaluated by human overseers). Then, the AI system is switched into the deployment phase, in which it must select an arm that almost always avoids catastrophes.
##Setup

In outline, the learner will receive a series of randomly selected examples, and will select an expert (modeled as a bandit arm) at each time step. The challenge is to find a high-performing expert in as few time steps as possible. We give some definitions:
Let $X$ be some finite set of possible inputs.
Let $A := \{1, 2, \dots, K\}$ be the set of available experts (i.e. bandit arms).
Let $R: X \times A \to [0,b]$ be the reward function. $R(x_t, i)$ is the reward for following expert $i$ on example $x_t$.
Let $C: X \times A \to [0,1]$ be the catastrophic risk function. $C(x_t, i)$ is the catastrophic risk incurred by following expert $i$ on example $x_t$.
Let $M(x,i) := R(x,i) - \frac{C(x,i)}{\tau}$ be the mixed payoff that the learner is to optimize, where $\tau \in (0,1]$ is the risk-tolerance. $\tau$ can be very small, on the order of $10^{-20}$.
Let $p: X \to [0,1]$ be the input distribution from which examples are drawn in the deployment phase.
Let $q_i(x) := \frac{p(x)C(x,i)}{\sum_{x'} p(x')C(x',i)}$ be the risk distribution. This is an alternative distribution that assigns higher weight to inputs that are more likely to be catastrophic for expert $i$.
Let $\hat{q}_i(x)$ be the learner’s guess at $q_i$. This guess is assumed to contain a grain of truth: $\forall i \in A\ \forall x \in X,\ \hat{q}_i(x) \geq \frac{q_i(x)}{a}$, where $a \geq 1$ is some known constant. This assumption is similar to the one in Paul Christiano’s post on active learning.
Let $\mu_i := E_{x \sim p}[M(x,i)]$ be the expected overall payoff of expert $i$ in the deployment phase.
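To make these definitions concrete, here is a small Python sketch on a made-up toy instance (all of the inputs, rewards, and risk values below are illustrative assumptions, not from the post) that computes the risk distribution $q_i$ and the expected payoff $\mu_i$:

```python
import numpy as np

# Toy problem: 3 inputs, 2 experts (all values made up for illustration).
p = np.array([0.7, 0.2, 0.1])           # deployment distribution over X
R = np.array([[1.0, 0.5, 0.8],          # R[i, x]: reward of expert i on input x
              [0.9, 0.9, 0.2]])
C = np.array([[0.0, 0.0, 0.0],          # C[i, x]: catastrophic risk; expert 0 is safe
              [0.0, 0.1, 0.5]])
b, tau = 1.0, 1e-3                      # reward bound and risk-tolerance

def risk_distribution(i):
    """q_i(x) = p(x) C(x,i) / sum_x' p(x') C(x',i)."""
    w = p * C[i]
    return w / w.sum()

def mu(i):
    """mu_i = E_{x~p}[R(x,i) - C(x,i)/tau]."""
    return float(p @ (R[i] - C[i] / tau))

print(risk_distribution(1))   # weight concentrated on the risky inputs
print(mu(0), mu(1))           # the safe expert wins despite lower raw reward
```

Note that even a modest $\tau$ makes the mixed payoff of the risky expert sharply negative, so the safe expert has the higher $\mu_i$.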
The learner’s exploration phase consists of time steps $1, 2, \dots, T$. At each time step $t$, the learner chooses an expert $i \in A$. Then the learner samples from $p$, $\hat{q}_i$, or both, and observes a tuple containing a reward and a risk: either i) $(R(x_t,i), C(x_t,i))$, ii) $(R(x_t,i), C(x'_t,i))$, iii) $(R(x'_t,i), C(x_t,i))$, or iv) $(R(x'_t,i), C(x'_t,i))$, where $x_t \sim p$ and $x'_t \sim \hat{q}_i$. After $T$ steps, the learner selects some final arm $j$.
The deployment phase simply evaluates the performance of $j$ on the deployment distribution $p$. The mean payoff of the final expert is denoted $\mu' := E_{x \sim p}[M(x,j)]$. The highest mean payoff that any expert can achieve is denoted $\mu^*$. Then, the simple regret of the learner is $\mu^* - \mu'$.
The aim for the learner is to select an expert that achieves low regret with high probability, $P(\mu^* - \mu' \leq \epsilon) \geq 1 - \delta$, using as few exploration steps as possible. $\epsilon$ is some regret tolerance, which is less than the reciprocal of the (very small) risk-tolerance: $\epsilon < \frac{1}{\tau}$.
We assume that the agent knows the deployment distribution $p$ and knows the estimate $\hat{q}_i$ of the risk distribution, but does not know the actual risk distribution $q_i$. Additionally, we assume that there exists some expert $j$ whose recommendations incur no risk of catastrophe: $\exists j \in A$ such that $\forall x \in X,\ C(x,j) = 0$.
##Bounds on the number of time steps
The standard approach
A natural way to get this PAC bound is to use a bandit algorithm like Median Elimination. The overall payoff of each expert is defined by the random variable:

$$X_i := R(x,i) - \frac{C(x,i)}{\tau} = M(x,i), \quad x \sim p$$
This algorithm will find an $\epsilon$-optimal expert with probability $1 - \delta$. However, the $X_i$ have a very large range, $[-\frac{1}{\tau}, b]$, and so if the risk-tolerance is low, then the number of time steps required will be very large. In fact, if $N$ is the number of time steps required, then [2]:
$$N = \Theta\left(\frac{K\left(b + \frac{1}{\tau}\right)^2 \ln(1/\delta)}{\epsilon^2}\right)$$
Intuitively, the problem is that if some risky scenarios occur only rarely in $p$, then many samples will be required to identify them.
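To get a feel for the scale, here is a Python sketch that evaluates this bound numerically (the parameter values are made up, and the $\Theta(\cdot)$ is treated as an equality with constant 1, so the numbers are illustrative only):

```python
from math import log

def n_standard(K, b, tau, eps, delta):
    """Evaluate K * (b + 1/tau)^2 * ln(1/delta) / eps^2, treating the
    Theta(.) bound as an equality with constant 1 (illustrative only)."""
    return K * (b + 1 / tau) ** 2 * log(1 / delta) / eps ** 2

# With a tiny risk-tolerance, the required number of steps is astronomical:
print(f"{n_standard(K=10, b=1.0, tau=1e-20, eps=0.1, delta=0.05):.2e}")
# With tau = 1, the same bound is modest:
print(f"{n_standard(K=10, b=1.0, tau=1.0, eps=0.1, delta=0.05):.2e}")
```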
##Using a surrogate objective to reduce the required number of time steps

Instead, the learner can sample using its guess $\hat{q}_i$ of the risk distribution. If we cap the extent to which we penalize risky arms, then we arrive at a surrogate bandit problem, in which the payoff of each expert is [2]:
$$S_i := R(x,i) - \min\left((\epsilon + b)a,\ \frac{p(x')C(x',i)}{\hat{q}_i(x')\tau}\right), \quad x \sim p,\ x' \sim \hat{q}_i$$
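To make the sampling procedure concrete, here is a Python sketch of drawing one sample of $S_i$ (the toy values and the helper `sample_surrogate_payoff` are made-up assumptions for illustration, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy values (illustrative): deployment distribution p, guess q_hat for expert i,
# rewards R_i(x), risks C_i(x), reward bound b, tolerance tau, regret tolerance
# eps, and grain-of-truth constant a.
p     = np.array([0.7, 0.2, 0.1])
q_hat = np.array([0.1, 0.3, 0.6])   # learner's guess at the risk distribution
R_i   = np.array([0.9, 0.9, 0.2])
C_i   = np.array([0.0, 0.1, 0.5])
b, tau, eps, a = 1.0, 1e-3, 0.1, 2.0

def sample_surrogate_payoff():
    """One draw of S_i = R(x,i) - min((eps+b)a, p(x')C(x',i) / (q_hat(x') tau)),
    with x ~ p and x' ~ q_hat."""
    x  = rng.choice(len(p), p=p)
    xp = rng.choice(len(q_hat), p=q_hat)
    penalty = min((eps + b) * a, p[xp] * C_i[xp] / (q_hat[xp] * tau))
    return R_i[x] - penalty

draws = [sample_surrogate_payoff() for _ in range(10_000)]
print(np.mean(draws))   # Monte Carlo estimate of E[S_i]
```

Each draw lies in $[-(\epsilon+b)a,\ b]$ by construction, which is what makes the concentration argument below go through.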
Since $S_i$ is bounded within $[-(\epsilon+b)a,\ b]$, the required number of time steps for this problem is:

$$N = \Theta\left(\frac{K\left(b + a(\epsilon + b)\right)^2 \ln(1/\delta)}{\epsilon^2}\right)$$
Comparing the two bounds, this reduces the required number of time steps by a factor of roughly $\left(\frac{b + \frac{1}{\tau}}{b + a(\epsilon+b)}\right)^2$, which may be a large speedup indeed if catastrophes are rare.
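Plugging the same illustrative parameters into both bounds shows the size of the speedup (again, all numbers are made up, and the $\Theta(\cdot)$ is treated as an equality with constant 1):

```python
from math import log

def n_bound(range_width, K, eps, delta):
    """Hoeffding-style bound K * range_width^2 * ln(1/delta) / eps^2,
    treating the Theta(.) as an equality with constant 1 (illustrative only)."""
    return K * range_width ** 2 * log(1 / delta) / eps ** 2

K, b, tau, eps, delta, a = 10, 1.0, 1e-20, 0.1, 0.05, 2.0
n_std = n_bound(b + 1 / tau, K, eps, delta)        # payoff range [-1/tau, b]
n_sur = n_bound(b + a * (eps + b), K, eps, delta)  # payoff range [-(eps+b)a, b]
print(f"standard:  {n_std:.2e}")   # astronomically large
print(f"surrogate: {n_sur:.2e}")   # a few tens of thousands
print(f"speedup:   {n_std / n_sur:.2e}")
```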
We can prove that any ϵ-optimal expert in this surrogate problem will also be ϵ-optimal in the original problem.
Theorem 1. Let $S^*$ be the overall payoff of the optimal expert in the surrogate problem.
For an expert $j$, if $E[S^*] - E[S_j] < \epsilon < \frac{1}{\tau}$, then $E[S_j] = E[X_j]$.
To prove this, first we show that:
Lemma 2. $E[S_i] = E_{x \sim p}[R(x,i)] - E_{x \sim q_i}\left[\min\left((\epsilon+b)a\frac{\hat{q}_i(x)}{q_i(x)},\ \frac{1}{\tau}E_{x' \sim p}[C(x',i)]\right)\right]$
Proof of Lemma 2.

$$\begin{aligned}
E[S_i] &= E_{x \sim p}[R(x,i)] - E_{x \sim \hat{q}_i}\left[\min\left((\epsilon+b)a,\ \frac{p(x)C(x,i)}{\hat{q}_i(x)\tau}\right)\right] \\
&= E_{x \sim p}[R(x,i)] - E_{x \sim q_i}\left[\min\left((\epsilon+b)a\frac{\hat{q}_i(x)}{q_i(x)},\ \frac{p(x)C(x,i)}{q_i(x)\tau}\right)\right] && \text{(change of measure)} \\
&= E_{x \sim p}[R(x,i)] - E_{x \sim q_i}\left[\min\left((\epsilon+b)a\frac{\hat{q}_i(x)}{q_i(x)},\ \frac{1}{\tau}E_{x' \sim p}[C(x',i)]\right)\right] && \text{(by the definition of } q_i \text{)}
\end{aligned}$$
Proof of Theorem 1. We have assumed that for the expert $j$:

$$E[S^*] - E[S_j] < \epsilon$$

$$\therefore E[S_j] > -\epsilon \qquad (\exists i\ \forall x\ C(x,i) = 0 \text{, so } E[S^*] \geq 0)$$

$$\therefore E_{x \sim p}[R(x,j)] - E_{x \sim q_j}\left[\min\left((\epsilon+b)a\frac{\hat{q}_j(x)}{q_j(x)},\ \frac{1}{\tau}E_{x' \sim p}[C(x',j)]\right)\right] > -\epsilon \qquad \text{(Lemma 2)}$$

$$\therefore E_{x \sim q_j}\left[\min\left((\epsilon+b)a\frac{\hat{q}_j(x)}{q_j(x)},\ \frac{1}{\tau}E_{x' \sim p}[C(x',j)]\right)\right] < \epsilon + b \qquad (R(x,j) \leq b)$$

$$\therefore \frac{1}{\tau}E_{x' \sim p}[C(x',j)] < \epsilon + b \qquad \left(\hat{q}_j(x) \geq \frac{q_j(x)}{a}\text{, so the first argument of the min is at least } \epsilon + b\right)$$

$$\therefore \forall x,\ \frac{1}{\tau}E_{x' \sim p}[C(x',j)] < (\epsilon+b)a\frac{\hat{q}_j(x)}{q_j(x)} \qquad \left(a\frac{\hat{q}_j(x)}{q_j(x)} \geq 1\right)$$

$$\therefore E[S_j] = E_{x \sim p}[R(x,j)] - \frac{1}{\tau}E_{x' \sim p}[C(x',j)] = E[X_j] \qquad \text{(Lemma 2, since the min always takes its second argument)} \ \square$$
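As a sanity check, the identity in Lemma 2 can be verified by exhaustive summation on a made-up toy instance (this is illustrative only, not part of the original argument; all values below are assumptions):

```python
import numpy as np

# Tiny toy instance (made-up values).
p   = np.array([0.7, 0.2, 0.1])
R_i = np.array([0.9, 0.9, 0.2])
C_i = np.array([0.0, 0.1, 0.5])
b, tau, eps, a = 1.0, 1e-3, 0.1, 2.0

# True risk distribution q_i, and a guess q_hat satisfying the grain of truth
# q_hat(x) >= q_i(x) / a.
q = p * C_i / (p * C_i).sum()
q_hat = np.array([0.1, 0.3, 0.6])
assert np.all(q_hat >= q / a)

cap = (eps + b) * a

# Direct definition: E[S_i] = E_{x~p}[R] - E_{x'~q_hat}[min(cap, p C / (q_hat tau))].
lhs = p @ R_i - q_hat @ np.minimum(cap, p * C_i / (q_hat * tau))

# Lemma 2 form: E[S_i] = E_{x~p}[R] - E_{x~q}[min(cap q_hat/q, E_{x'~p}[C]/tau)].
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = np.where(q > 0, q_hat / q, np.inf)   # inputs with q = 0 carry no weight
rhs = p @ R_i - q @ np.minimum(cap * ratio, (p @ C_i) / tau)

print(lhs, rhs)   # the two expressions agree
```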
##Discussion
This result suggests that it might be possible to avoid catastrophes by training machine learning systems on inputs that induce catastrophic behavior. We have given a clear set of circumstances in which an agent can learn to avoid catastrophes. However, this set of requirements does seem very tricky to meet. Finding a hypothesis about the location of catastrophes that contains a grain of truth for all experts on all examples may be very difficult. The assumption of separate training and deployment phases may also be difficult to instantiate. For example, if the AI system is able to escape its training simulation, then it will no longer be valid to allow failures during the exploration phase, and so this training procedure would not work.
Another problem is that it could be difficult to know the input distribution $p$. In future work, we plan to explore learning with catastrophes in an online active learning setting instead of the supervised learning setting of this post, so that this requirement can be relaxed.
##Footnotes
[1] Paul Christiano has discussed approaches to learning with catastrophes and their application to AI Control at Learning with catastrophes and red teams. The idea explored in this post can be seen as a form of red teaming (with the distribution $\hat{q}_i$ representing the red team).
[2] The PAC bound for Median Elimination is proved in Theorem 4 of Even-Dar, Eyal, Shie Mannor, and Yishay Mansour. “PAC bounds for multi-armed bandit and Markov decision processes.” International Conference on Computational Learning Theory. Springer Berlin Heidelberg, 2002. To get the exact bound, we apply Hoeffding’s inequality for a random variable with range $[-\frac{1}{\tau}, b]$ rather than $[0,1]$ as in the original paper.