I’m saying: hopefully we can find a model that never fails catastrophically. By “catastrophic failure” I mean a failure that we can never recover from, even if it occurs in the lab. For that purpose, we get to cut an extremely wide safety margin around the “intended” interpretation, and the system can be very conservative about avoiding things that would be irreversibly destructive.

I’m confused about you saying this; it seems incompatible with using the AI to substantially assist in doing big things like preventing nuclear war. You can split a big task into lots of small decisions such that it’s fine if a random independent small fraction of decisions are bad (e.g. by using a voting procedure), but that doesn’t help much, since the system is still vulnerable to multiple small decisions being made badly in a correlated fashion; and if the AI’s models are bad, correlated errors are more likely than uncorrelated ones.

Put in other words: if you’re using the AI to do a big thing, then you can’t section off “avoiding catastrophes” as a bounded subset of the problem; it’s intrinsic to all the reasoning the AI is doing.

My intuition is that the combination of these guarantees is insufficient for good performance and safety.

Say you’re training an agent; then the AI’s policy is π:O→ΔA for some set O of observations and A of actions (i.e. it takes in an observation and returns an action distribution). In general, your utility function will be a nonlinear function of the policy (where we can consider the policy to be a vector of probabilities for each (observation, action) pair). For example, if it is really important for the AI to output the same thing given observation “a” and given observation “b”, then this is a nonlinearity. If the AI is doing something like programming, then your utility is going to be highly nonlinear in the policy, since getting even a single character wrong in the program can result in a crash.
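The nonlinearity point can be made concrete with a toy sketch (all numbers hypothetical). A linear performance measure averages over decisions, while a “give the same answer to a and b” utility multiplies policy probabilities together, so the two can disagree:

```python
import numpy as np

# Toy setup: two observations, two actions; pi[o][a] is the probability
# of action a given observation o.

def average_performance(pi, f):
    """Average per-decision performance over a uniform observation
    distribution. This is linear in the policy vector."""
    return np.mean([sum(pi[o][a] * f[o][a] for a in range(2))
                    for o in range(2)])

def consistency_utility(pi):
    """Probability that the answers to observations 0 and 1 agree.
    A product of policy probabilities, hence nonlinear in the policy."""
    return sum(pi[0][a] * pi[1][a] for a in range(2))

f = [[1.0, 1.0], [1.0, 1.0]]        # every action "performs" equally well
pi_det = [[1.0, 0.0], [1.0, 0.0]]   # always action 0: perfectly consistent
pi_rand = [[0.5, 0.5], [0.5, 0.5]]  # coin flip each time: often inconsistent

# Both policies look identical to the linear measure (1.0 each), but the
# nonlinear consistency utility distinguishes them (1.0 vs 0.5).
```
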

Say your actual utility function on the AI’s policy is U. If you approximate this utility using average performance, you get this approximation:

V_{p,f}(π) := E_{o∼p, a∼π(o)}[f(o,a)]

where p is some distribution over observations and f is some bounded performance function. Note that V_{p,f} is linear.

Catastrophe avoidance can handle some nonlinearities. Including catastrophe avoidance, we get this approximation:

V_{p,f,c}(π) := E_{o∼p, a∼π(o)}[f(o,a)] − max_{o∈O}[c(o, π(o))]

where c is some bounded catastrophe function.
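As a concrete sketch of the two approximations (all numbers hypothetical), the linear term is an expectation under p, and the catastrophe term subtracts the worst expected catastrophe cost over observations:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # distribution over 3 observations
f = np.array([[1.0, 0.0],          # f[o][a]: bounded performance
              [0.2, 0.8],
              [0.5, 0.5]])
c = np.array([[0.0, 0.9],          # c[o][a]: catastrophe cost
              [0.0, 0.0],
              [0.0, 0.1]])

def V_linear(pi):
    """E_{o~p, a~pi(o)}[f(o, a)]: linear in the policy."""
    return float(sum(p[o] * (pi[o] @ f[o]) for o in range(3)))

def V_catastrophe(pi):
    """Linear term minus the worst-case catastrophe term,
    max over observations of the expected catastrophe cost."""
    return V_linear(pi) - max(float(pi[o] @ c[o]) for o in range(3))

pi = np.array([[0.0, 1.0],         # this policy risks a catastrophe
               [0.0, 1.0],         # on observation 0
               [1.0, 0.0]])
# V_linear(pi) is 0.34, but the catastrophe term (0.9) dominates,
# so V_catastrophe(pi) is negative.
```
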

I don’t see a good argument that, for any U you might have over the policy, there are some easy-to-find p, f, c such that approximately maximizing V_{p,f,c} yields a policy that is nearly as good as if you had approximately maximized U.

Some examples of cases I expect to not work with linear+catastrophe approximation:

Some decisions are much more important than others, and it’s predictable which ones. (This might be easy to handle with importance sampling but that is an extension of the framework, and you have to handle things like “which observations the AI gets depends on the AI’s policy”)

The importance of a decision depends on the observations and actions of previous rounds. (e.g. in programming, typing a bad character is important if no bad characters have been typed yet, and not important if the program already contains a syntax error)

The AI has to be predictable; it has to do the same thing given similar-enough observations (this is relevant if you want different AIs to coordinate with each other)

The AI consists of multiple copies that must meet at the same point; or the AI consists of multiple copies that must meet at different points.

You could argue that we should move to an episodic RL setting to handle these, however I think my arguments continue to apply if you replace “AI takes an action” with “AI performs a single episode”. Episodes have to be short enough that they can be judged efficiently on an individual basis, and the operator’s utility function will be nonlinear in the performance on each of these short episodes.

My criticism here points at a more general criticism of feedback-optimization systems. One interpretation of this criticism is that it implies that feedback-optimization systems are too dumb to do relevant long-term reasoning, even with substantial work in reward engineering.

Evolution provides some evidence that feedback-optimization systems can, with an extremely high amount of compute, eventually produce things that do long-term reasoning (though I’m not that confident in the analogy between evolution and feedback-optimization systems). But then these agents’ long-term reasoning is not explained by their optimization of feedback. So understanding the resulting agents as feedback-optimizers is understanding them at the wrong level of abstraction (see this post for more on what “understanding at the wrong level of abstraction” means), and providing feedback based on an overseer’s values would be insufficient to get something the overseer wants.

See this post for discussion of some of these things.

Other points beyond those made in that post:

The easy way to think about performance is using marginal impact.

There will be non-convexities—e.g. if you need to get 3 things right to get a prize, and you currently get 0 things right, then the marginal effect of getting an additional thing right is 0 and you can be stuck at a local optimum. My schemes tend to punt these issues to the overseer, e.g. the overseer can choose to penalize the first mistake based on their beliefs about the value function of the trained system rather than the current system.
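The 3-things-right example can be written out directly. This toy sketch (the prize structure is from the comment above) shows the zero marginal impact at the all-wrong point:

```python
def prize(correct):
    """Reward 1 only if all three things are right."""
    return 1.0 if all(correct) else 0.0

base = [False, False, False]
marginals = []
for i in range(3):
    improved = list(base)
    improved[i] = True
    marginals.append(prize(improved) - prize(base))

# Every single-step improvement has marginal impact 0, so marginal-impact
# feedback is stuck at a local optimum even though getting all three
# things right yields the full prize.
```
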

To the extent that any decision-maker has to deal with similar difficulties, your criticism only makes sense in the context of some alternative unaligned AI that might outcompete the current AI. One alternative is the not-feedback-optimizing cognition of a system produced by gradient descent on some arbitrary goal (let’s call it an alien). In this case, I suspect my proposal would be able to compete iff informed oversight worked well enough to reflect the knowledge that the aliens use for long-term planning.

Note that catastrophe avoidance isn’t intended to overcome the linear approximation. It’s intended to prevent the importance weights from blowing up too much. (Though as we’ve discussed, it can’t do that in full generality—I’m going to shovel some stuff under “an AI that is trying to do the right thing” and grant that we aren’t going to actually get the optimal policy according to the overseer’s values. Instead I’m focused on avoiding some class of failures that I think of as alignment failures.)

I’m not including issues like “you want your AI to be predictable.” I’d say that “be very predictable” is a separate problem, just like “be really good at chess” is a separate problem. I agree that our preferences are better satisfied by AIs that solve these additional problems. And I agree that if our alignment techniques are fundamentally incompatible with other techniques that help with these desiderata, then that should be considered an open problem for alignment (though we may end up disagreeing about the importance / about whether this happens).

One interpretation of this criticism is that it implies that feedback-optimization systems are too dumb to do relevant long-term reasoning, even with substantial work in reward engineering.

If this is true, it seems like a really important point that I need to understand better. Any chance you can surface this argument into a top-level post, so more people can see it and chime in with their thoughts? In particular I’d like to understand whether the problem is caused by current ML approaches not offering good/useful enough performance guarantees, which might change in the future, or if this is a fundamental problem with ML and feedback-optimization that can’t be overcome. Also, can you suggest ways to test this empirically?

(I also can’t quite tell to what extent Paul’s response has addressed your criticism. If you decide to write a post maybe you can explain that as well?)

About the inductive argument for the two guarantees, suppose we can show that Amplify(A) satisfies the two guarantees and we’re trying to show this for Distill(Amplify(A)).

For the inductive step to go through, Amplify(A) has to detect and disapprove of any incorrigible optimization that may be introduced by Distill. But we don’t have a guarantee that Amplify(A) can do this: the performance guarantee is “on average,” so it might not detect some obscure kinds of incorrigible optimization, and the control guarantee only says that Amplify(A) won’t itself do any incorrigible optimization, not that it can detect every incorrigible optimization in others.

It seems like for the argument to go through, you need a third guarantee that says something like “can detect any violations of the control guarantee in an agent B as long as B belongs to C (i.e., some class of AIs that includes Distill(Amplify(A)), Distill(Amplify(Distill(Amplify(A)))), and so on), as well as any violations of this guarantee in B.” Does that make sense?

(There are things about the performance guarantee that I don’t entirely understand, like what distribution are you averaging over exactly, so possibly the way you’re thinking about it already implies this third guarantee, but in that case it still seems useful to state this explicitly.)

The task that Amplify(A) is doing is something like “Find an input on which the distilled agent will behave malignly, if any exist.” (That literal query won’t work, one important research goal is to find something along those lines that actually works.)

If Amplify(A) performs well on average, then it finds such inputs with reasonably high probability. To actually get this conclusion you obviously need to argue both about the properties of Amplify(A) and the properties of the problem. But the main point is that it has no dependence on the worst case behavior of Amplify(A).

If Amplify(A) can’t solve this task—if we have no way to train a model to certifiably be benign—then we are just screwed anyway.

(I agree that the concern you’ve mentioned potentially afflicts one of the proposals in my post, inductively defining specifications. It’s not clear to me whether it’s a serious problem, I’ll think about it at some point.)

There are things about the performance guarantee that I don’t entirely understand, like what distribution are you averaging over exactly, so possibly the way you’re thinking about it already implies this third guarantee, but in that case it still seems useful to state this explicitly.

We’ll construct the distribution in order to make the performance guarantee useful.

That is, for the final agent, we take the distribution over tasks I care about. For preceding agents, we take the distribution over subtasks that arise while trying to solve the tasks I care about (i.e. by applying amplification to the problem of evaluating a random task from the next agent’s distribution / to the problem of certifying that the next agent is benign).

If Amplify(A) performs well on average, then it finds such inputs with reasonably high probability.

But that’s not enough for the inductive step to go through, because it’s “high probability” only if you pick a random distilled agent out of the distribution over which the guarantee applies. As a counter-example, suppose Amplify(A) performs well on the task “Find an input on which the distilled agent will behave malignly, if any exist” for every distilled agent in the distribution, except the actual Distill(Amplify(A)). If it does that, then it will satisfy the performance guarantee (since it does well on average) but fail to train a Distill(Amplify(A)) that satisfies the control guarantee.

That is, for the final agent, we take the distribution over tasks I care about. For preceding agents, we take the distribution over subtasks that arise while trying to solve the tasks I care about (i.e. by applying amplification to the problem of evaluating a random task from the next agent’s distribution / to the problem of certifying that the next agent is benign).

The final agent may be a superintelligent AI faced with tasks that we can’t currently foresee. I don’t know how to define a distribution such that it covers those tasks, without also covering many other tasks (that I won’t care about). Is it ok to define the distribution extremely broadly, or does it need to be the actual distribution that the AI will face in some sense? (Or do we not actually need to explicitly define such a distribution? ETA: Does the overseer training the first distilled agent need to know the final distribution?)

But that’s not enough for the inductive step to go through, because it’s “high probability” only if you pick a random distilled agent out of the distribution over which the guarantee applies.

You keep giving Amplify(A) the task “Try to verify that the current agent is benign” over and over again.

The final agent may be a superintelligent AI faced with tasks that we can’t currently foresee. I don’t know how to define a distribution such that it covers those tasks, without also covering many other tasks (that I won’t care about).

I’m imagining collecting the problems that the agent actually faces and training on those.

More precisely, I’m thinking in terms of an online guarantee rather than fixing an actual distribution, since the distribution of tasks in the real world will shift over time.

As the distribution shifts you may get worse performance. The total amount of underperformance over time is basically bounded. Moreover, our competitors will generally face the same problem.

You keep giving Amplify(A) the task “Try to verify that the current agent is benign” over and over again.

I don’t see how this helps with the inductive argument. The same counter-example I gave in the grandparent comment also applies to this idea.

The top-level distribution for Amplify(A) puts 50% of its mass on “Try to verify that the current agent is benign.” (Or something like 50%, it’s <100% only because you are training A both to evaluate actions and to certify benignness.) If it performs badly on the current agent, then it’s not performing well on average. There is no other distribution over distilled agents that we are defining the performance guarantee with respect to.

Because the current agent is training over time, we really want to talk about online guarantees rather than assuming a fixed distribution. Amplify(A) is being trained on the sequence of tasks “Try to verify that the agent at t=0 is benign,” “Try to verify that the agent at t=1 is benign,” “Try to verify that the agent at t=2 is benign...” etc.

If there was a learnable model that performs well on most of the subtasks of “Try to verify that the current agent is benign,” then we get to infer that A in fact performs well on most of these subtasks. If amplification preserves average-case performance, then we get to conclude that Amplify(A) performs well on most tasks of the form “Try to verify that the current agent is benign.” If Amplify(A) consistently concludes that the agent is benign, e.g. if Amplify(A) consistently fails to find an input on which the agent behaves malignly, then we hope to conclude that the agent is in fact benign at most steps.

Thanks, I think I’m finally starting to understand your thinking here.

Because the current agent is training over time, we really want to talk about online guarantees rather than assuming a fixed distribution.

Sounds like I really need to figure out how online guarantees work in relation to your scheme. (It’s not clear to me how to map between your setup and the setup in the online learning survey that you cited, e.g., what corresponds to “receive true answer” after every prediction and what corresponds to the set of hypotheses that regret is being measured against.) I’ve been putting it off and just assuming a fixed distribution because you wrote “Overall, I don’t think this distinction is a huge deal.”

If there was a learnable model that performs well on most of the subtasks of “Try to verify that the current agent is benign,”

How do we determine this? (What if the current agent has moved into a part of the agent space such that there was no longer a learnable model that performs well on most of the subtasks of “Try to verify that the current agent is benign”?)

if Amplify(A) consistently fails to find an input on which the agent behaves malignly, then we hope to conclude that the agent is in fact benign at most steps.

What does “most” mean here and why is that good enough? (If there are more than 100 steps and “most” means 99% then you can’t rule out having malign agents in some of the steps, which seems like a problem?)

As part of designing a technique for optimizing worst-case performance, we need to argue that the overseer’s job isn’t too hard (so that Amplify(A) is qualified to perform the task). If we remove this restriction, then optimizing worst case performance wouldn’t be scary—adversarial training would probably work fine.

(It’s not clear to me how to map between your setup and the setup in the online learning survey that you cited, e.g., what corresponds to “receive true answer” after every prediction and what corresponds to the set of hypotheses that regret is being measured against.)

See the section “Limited feedback (bandits)” starting on page 177. Online learning doesn’t require seeing the true answer.

We don’t compete with any explicit set of hypotheses. When we say that the “model can learn to do X” then we are saying roughly “the model competes with a set of hypotheses including one that does X.”

What does “most” mean here and why is that good enough?

Most means most agents over the training process. But:

Once you have an agent that seems OK, you can freeze that agent and then run the certification process for significantly longer.

I expect the model is probably going to have some probability of behaving malignly on any given input anyway based on internal stochasticity. So you probably already need to do something based on ensembling / ensuring sufficient per-timestep robustness.

See the section “Limited feedback (bandits)” starting on page 177. Online learning doesn’t require seeing the true answer.

I’m still having trouble matching up that section with your setup. (It assumes that the agent sees the value of the loss function after every prediction, which I think is not the case in your setup?) Is Section 6 on Online Active Learning in this more comprehensive survey closer to what you have in mind? If so, can you say which of the subsections of Section 6 is the closest? Or alternatively, can you explain the actual formal setup and guarantee you’re hoping ML research will provide, which will be sufficient to accomplish what you need? (Or give an example of such formal setup/guarantee if multiple ones could work?)

Also, what if in the future the most competitive ML algorithms do not provide the kinds of guarantees you need? How likely do you think that is, and what’s the expected outcome (for your approach and AI alignment in general) conditional on that?

We don’t compete with any explicit set of hypotheses. When we say that the “model can learn to do X” then we are saying roughly “the model competes with a set of hypotheses including one that does X.”

Don’t we need to know the size of the set of hypotheses in order to derive a regret bound?

It assumes that the agent sees the value of the loss function after every prediction, which I think is not the case in your setup?

You do get to see the loss function, if you couldn’t see the loss function then we couldn’t train A.

Amplify(A) is computed by calling A a bunch of times. The point of amplification is to set things up so that Amplify(A) will work well if the average call to A works well. A random subset of the calls to A are then evaluated (by calling Amplify(A)), so we get to see their loss.

(Obviously you get identical expected regret bounds if you evaluate an x fraction of decisions at random, just with 1/x times more regret—you get a regret bound on the sequence whose loss you evaluate, and that regret is at least x times the total.)
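As a sanity check on that parenthetical (a simulation with made-up per-round regrets), evaluating a random x fraction of decisions recovers about an x fraction of the total regret in expectation:

```python
import random

random.seed(0)
x = 0.1                                    # evaluate 10% of decisions
per_round = [random.random() for _ in range(100_000)]  # fake regrets
total = sum(per_round)

# Independently evaluate each round with probability x.
evaluated = sum(r for r in per_round if random.random() < x)
ratio = evaluated / total                  # close to x for long sequences

# A regret bound on the evaluated subsequence therefore controls roughly
# an x fraction of the total, i.e. the total regret is about 1/x times
# the bound on what you evaluated.
```
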

What does d (the number of bandit arms) correspond to in your setup? I’m guessing it’s the size of the hypothesis class that you’re competing with, which must be exponentially large? Since the total regret bound is 2√(d log(d) T) (page 181, assuming you see the loss every round), it seems that you’d have to see an exponential number of losses (i.e., calls to Amplify(A)) before you could get a useful per-round guarantee. What am I missing here?

The d under the log is the size of the hypothesis class (which is exponential in this case). The other d parameterizes the difficulty of the exploration problem. Exp4 is the simplest algorithm that pulls those two parameters apart (though it’s obviously not a good algorithm for this case). It’s hard to formally capture “the difficulty of the exploration problem”, but intuitively it’s something like what you’d expect—how many options do you have to try at random before you are guaranteed to get useful signal? This is upper bounded by the number of output options. You can get tighter formal bounds in many cases but it’s one of those things where the real bound is kind of a problem-specific mess.
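The two parameters can be pulled apart numerically. In this sketch I write N for the hypothesis-class size and d for the exploration difficulty (the comment’s formula calls both d); all numbers are made up:

```python
import math

def exp4_style_bound(d, N, T):
    """Regret bound of the form 2*sqrt(d * log(N) * T): the class size N
    appears only under the log, the exploration difficulty d outside it."""
    return 2 * math.sqrt(d * math.log(N) * T)

# Squaring the hypothesis class only multiplies the bound by sqrt(2)...
b1 = exp4_style_bound(d=10, N=2**20, T=10_000)
b2 = exp4_style_bound(d=10, N=2**40, T=10_000)

# ...while the bound scales as sqrt(d) in the exploration difficulty,
# which is why a hard exploration problem (not a big hypothesis class)
# is the real obstacle.
b3 = exp4_style_bound(d=40, N=2**20, T=10_000)
```
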

There are two hopes for not needing exponential time:

In imitation+RL, the exploration difficulty should depend on something like the accuracy of your imitation rather than on the size of the underlying domain (or maybe even better). You don’t have to try everything at random to get signal, if you have access to an expert who shows you a good option in each round. We can train A with demonstrations (we can get a demonstration just by calling Amplify(A)).

Many RL problems have tractable exploration despite large domains for a whole mess of complicated reasons.

(And note that we’ll be able to tell whether this is working, so in practice this is probably something that we can validate empirically—not something where we are going up against adversarial optimization pressure and so need a provable bound.)

This doesn’t seem likely to be the place where my approach gets stuck, and I don’t think it seems worth thinking about it that much until we’ve made much more progress on understanding the task that Amplify(A) actually needs to perform for robustness and on how amplification works more broadly, since (a) those are way more likely to be dealbreakers, in which case this doesn’t matter, (b) it’s relatively likely that other progress will change our conception of the learning theory problem we need to solve or obsolete it entirely.

If you want to understand these intuitions in detail it likely requires doing the equivalent of a course in learning theory and reading a bunch of papers in the area (which doesn’t sound worth it to me, as a use of your time). Overall this isn’t something where I feel excited about engaging in detail, except with experts in the relevant areas who I expect to know something or have intuitions that I don’t.

I have some additional questions that I’m still not sure about, that maybe you can answer without spending too much time.

It seems that if you can get a good online guarantee you are immune to distributional shifts (the online guarantee gives you a bound that’s not based on any assumptions about the input data). But to be practically meaningful the hypothesis class (on which the regret bound is based) has to include one that can approximate X well over all possible inputs, otherwise you could prove a good regret bound, and even perform well in practice, but still suffer badly from some future distributional shift, right?

Are you thinking that it will be possible to design X and the ML algorithm together such that we’ll know it can learn to approximate X well over all possible inputs, or alternatively are you thinking of some sort of online guarantee that does make some assumptions about the input data (e.g., that its distribution doesn’t change too quickly)? Or both (in a disjunctive sense)?

Are there any relevant papers you can point to that gave you the intuitions that you have on these questions?

It seems that if you can get a good online guarantee you are immune to distributional shifts (the online guarantee gives you a bound that’s not based on any assumptions about the input data).

The online guarantee says that on average, over a large sequence of trials, you will perform well. But if I train my system for a while and then deploy it, it could perform arbitrarily poorly after deployment (until I incorporate corrective data, which will generally be impossible for catastrophic failures).

But to be practically meaningful the hypothesis class (on which the regret bound is based) has to include one that can approximate X well over all possible inputs, otherwise you could prove a good regret bound, and even perform well in practice, but still suffer badly from some future distributional shift, right?

I don’t understand this (might be related to the previous point). If there is a hypothesis that performs well over the sequence of actual cases that you train on, then you will perform well on the sequence of actual data cases that you train on. For any other inputs, the online guarantee doesn’t say anything.

Are you thinking that it will be possible to design X and the ML algorithm together such that we’ll know it can learn to approximate X well over all possible inputs, or alternatively are you thinking of some sort of online guarantee that does make some assumptions about the input data (e.g., that its distribution doesn’t change too quickly)? Or both (in a disjunctive sense)?

I don’t think that anything will be learning to approximate anything else well over all possible inputs.

What does “X” refer to here?

I’m not imagining making any assumptions on the input data.

Are there any relevant papers you can point to that gave you the intuitions that you have on these questions?

I don’t think I fully understood the questions.

The online guarantee says that on average, over a large sequence of trials, you will perform well. But if I train my system for a while and then deploy it, it could perform arbitrarily poorly after deployment (until I incorporate corrective data, which will generally be impossible for catastrophic failures).

Take the 2√(d log(d) T) regret bound as an example. Suppose d log(d) is small (what I meant by “a good online guarantee”); then total regret is essentially bounded by √T, which means that if you max out the regret during training, after deployment it shouldn’t accumulate more than about 1/√T regret per time step, regardless of distributional shifts. Am I misunderstanding something here?
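The arithmetic behind this can be checked directly (with a hypothetical constant standing in for d log(d)): if the bound has been maxed out at time T, the headroom it leaves per additional round shrinks like 1/√T:

```python
import math

C = 200.0                                  # stands in for d * log(d)

def bound(T):
    """Total regret bound of the form 2*sqrt(C*T)."""
    return 2 * math.sqrt(C * T)

T = 1_000_000
headroom_next_round = bound(T + 1) - bound(T)
per_round_estimate = math.sqrt(C / T)      # derivative of the bound at T

# The two agree closely: once the bound is maxed out at time T, regret
# can only accumulate at about sqrt(C/T) per round thereafter.
```
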

What does “X” refer to here?

It’s what we were talking about previously, the set of subtasks of “Try to verify that the current agent is benign.”

I don’t think that anything will be learning to approximate anything else well over all possible inputs.

Earlier, you wrote:

When we say that the “model can learn to do X” then we are saying roughly “the model competes with a set of hypotheses including one that does X.”

And I thought one possible interpretation of “do X” is “approximates X well over all possible inputs”. If that’s not what you meant by “do X”, what does it mean?

To step back a bit, I’m finding it hard to understand online guarantees because the “regret bounds” being proven are all relative to a class of hypotheses, but in practice we actually care about performance relative to the ground truth, so in addition to the kinds of regret bounds we see in that OL survey paper, we also need to characterize the performance of the hypothesis class relative to the ground truth, right? What do you expect that characterization to look like? I’m trying to understand your thinking / intuitions about this.

You aren’t misunderstanding the result, what you say makes complete sense. Now I understand your questions.

“Max out the regret during training” is extremely infeasible—for example, you’d need to know the exact regret bound (which we don’t), and much more importantly you’d need to do perfect adversarial training to find every possible mistake the model makes (which we can’t).

I’m just hoping to directly use the bound on the total loss.

That is, suppose that I’m trying to apply Amplify(A) to some “not too hard” task. Then I want to argue something like:

1. If we’ve built Amplify() correctly, then in order for Amplify(A) to succeed at a “not too hard” task, A just needs to solve some “not too hard” subtasks. (This is the main constraint on Amplify() other than preserving alignment, without this constraint it would be relatively easy.)

2. Because those subtasks are “not too hard,” we think that A has a regret bound vs. some model that solves those tasks well (on average). Roughly speaking this is the definition of “not too hard,” more on this below.

3. So if we keep applying Amplify(A) to the task, and training A on the resulting subtasks, then eventually A will solve those tasks well (on average). At that point, Amplify(A) succeeds at the original task (on average).

4. This argument goes through even if the task is changing over time, as long as the “not too hard” bound applies uniformly over the sequence (such that there is a single model in step #2 that succeeds for every element in the sequence).

In the original example we were discussing, I’m applying Amplify(A) to the task {test whether model B is robust}. When I talk about robustness as a subproblem of alignment, I mean precisely setting up the task {test whether model B is robust} in such a way that it’s “not too hard,” i.e. such that we can apply the above argument to the task {test whether model B is robust}.

Performing well on average is adequate for this task because I can keep applying Amplify(A) to the task {test whether model B is robust}, over and over again. If it fails 10% of the time, that just means 10% of the tests fail, not that model B can fail catastrophically on 10% of inputs or with 10% probability. (This is the magic we want out of a solution to robustness—to turn an average-case guarantee into a worst-case guarantee.)
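The independence point here can be sketched numerically (the 10% failure rate is the illustrative figure from above): repeating the verification task drives the probability that every run fails to zero geometrically, which is the average-case-to-worst-case conversion being described:

```python
def p_all_verifications_fail(p_fail, n_runs):
    """If each run of Amplify(A) on {test whether model B is robust}
    independently fails with probability p_fail, this is the chance
    that no run succeeds."""
    return p_fail ** n_runs

p10 = p_all_verifications_fail(0.1, 10)
# Ten independent runs already push the failure probability to ~1e-10;
# contrast with model B itself failing on 10% of inputs, where repeating
# the test wouldn't help at all.
```
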

And I thought one possible interpretation of “do X” is “approximates X well over all possible inputs”. If that’s not what you meant by “do X”, what does it mean?

The X comes with some performance measure. I mean that the model performs well enough on average over the distribution of subtasks.

(This still depends on the distribution of top-level tasks. For now you can imagine imposing this requirement for the worst case distribution of top-level tasks that can occur during training, though I think we can be a little bit more precise in practice.)

In practice we actually care about performance relative to the ground truth, so in addition to the kinds of regret bounds we see in that OL survey paper, we also need to characterize the performance of the hypothesis class relative to the ground truth, right?

Yes, we need to argue that there is some hypothesis in the class that is able to perform well. This is what I mean by “not too hard.” Ultimately we will be assuming that our ML is able to do something impactful in the world, and then trying to argue that if it was able to do that impactful thing, then we could also solve the subtasks necessary to do the same impactful thing safely (since the goal of this approach is to compete with unaligned ML).

In order to argue that a task is not that hard, we will use some combination of:

The assumption that our ML is good enough at some concrete tasks whose difficulty doesn’t scale with the sophistication of the models we are training. This can be verified empirically in advance, and once it’s true it tends to become more true.

The assumption that our ML system is good enough at doing something impactful in the world. (Which we’ll combine with some argument about how the tasks we are interested in are not harder than that task, or “not much harder” in the sense that the difference falls into the preceding category, of “easy” tasks that we’ve demonstrated empirically that our model can solve.)

Overall, things are not really going to be this formal, but hopefully this helps communicate the direction of my thinking/hopes.

“Max out the regret during training” is extremely infeasible

If you can’t max out the regret during training, then I’m having trouble seeing how to make use of such a regret bound the way you want to use it. Let’s say the current total regret is 1000 less than the provable regret bound. Then as far as the provable bound is concerned, the model could answer the next 1000 questions incorrectly and still satisfy the bound, so you can’t just repeat a task some number of times and then conclude that there’s a small probability that all of the answers will be wrong.

If A satisfies a regret bound, then: for any sequence of queries S, and any model M in the comparison class...

(average performance of A on S) > (average performance of M on S) - (arbitrarily small constant)

This gives you a lower bound on (average performance of A on S). That’s what I want.

Yes, you could get a bad result 1000 times in a row. To guarantee a good result in that setting, you’d need to run 1001 times (which will still probably be a tiny fraction of your overall training time).
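For intuition about this kind of average-case guarantee, here is a minimal sketch of a standard multiplicative-weights (Hedge) learner competing with a small pool of models; all losses are made up, and it is only meant to illustrate the shape of the inequality (average performance of A close to that of the best M):

```python
import math
import random

def hedge_average_loss(losses, eta=0.1):
    """Run a multiplicative-weights (Hedge) learner on a fixed loss sequence.

    losses: list of rounds; each round is a list of per-model losses in [0, 1].
    Returns (learner's average expected loss, best single model's average loss).
    """
    n_models = len(losses[0])
    weights = [1.0] * n_models
    total = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss this round under the learner's current distribution.
        total += sum(p * l for p, l in zip(probs, round_losses))
        # Exponentially downweight models that did badly this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    T = len(losses)
    best = min(sum(r[i] for r in losses) for i in range(n_models))
    return total / T, best / T

random.seed(0)
T = 2000
# Model 0 is reliably decent; the other nine are noise.
losses = [[0.1] + [random.random() for _ in range(9)] for _ in range(T)]
avg_learner, avg_best = hedge_average_loss(losses)
print(avg_learner, avg_best)  # learner's average loss ends up near the best model's
```

The learner never knows in advance which model is good, but its average loss over the whole sequence approaches the best model's, with the gap shrinking like O(1/√T); nothing is guaranteed about any particular short stretch of rounds, which is the point about possibly getting 1000 bad results in a row.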

What if during training you can’t come close to maxing out regret for the agents that have to be trained with human involvement? That “missing” regret might come due at any time after deployment, and has to be paid with additional oversight/feedback/training data in order for those agents to continue to perform well, right? (In other words, there could be a distributional shift that causes the agents to stop performing well without additional training.) But at that time human feedback may be horribly slow compared to how fast AIs think, thus forcing IDA to either not be competitive with other AIs or to press on without getting enough human feedback to ensure safety.

Am I misunderstanding anything here? (Are you perhaps assuming that we can max out regret during training for the agents that have to be trained with human involvement, but not necessarily for the higher level agents?)

That “missing” regret might come due at any time after deployment, and has to be paid with additional oversight/feedback/training data in order for those agents to continue to perform well, right? (In other words, there could be a distributional shift that causes the agents to stop performing well without additional training.)

Yes. (This is true for any ML system, though for an unaligned system the new training data can just come from the world itself.)

Are you perhaps assuming that we can max out regret during training for the agents that have to be trained with human involvement, but not necessarily for the higher level agents?

Yeah, I’m relatively optimistic that it’s possible to learn enough from humans that the lower level agent remains universal (+ aligned etc.) on arbitrary distributions. This would probably be the case if you managed to consistently break queries down into simpler pieces until arriving at very simple queries. And of course it would also be the case if you could eliminate the human from the process altogether.

Failing either of those, it’s not clear whether we can do anything formally (vs. expanding the training distribution to cover the kinds of things that look like they might happen, having the human tasks be pretty abstract and independent from details of the situation that change, etc.). I’d still expect to be OK but we’d need to think about it more.

(I still think it’s 50%+ that we can reduce the human to small queries or eliminate them altogether, assuming that iterated amplification works at all, so I would prefer to start with the “does iterated amplification work at all” question.)

And note that we’ll be able to tell whether this is working, so in practice this is probably something that we can validate empirically—not something where we are going up against adversarial optimization pressure and so need a provable bound.

This is kind of surprising. (I had assumed that you need a provable bound since you talk about guarantees and cite a paper that talks about provable bounds.)

If you have some ML algorithm that only has an exponential provable bound but works well in practice, aren’t you worried that you might hit a hard instance of some task in the future that it would perform badly on, or there’s a context shift that causes a whole bunch of tasks to become harder to learn? Is the idea to detect that at run time and either pay the increased training cost or switch to another approach if that happens?

If you want to understand these intuitions in detail it likely requires doing the equivalent of a course in learning theory and reading a bunch of papers in the area (which doesn’t sound worth it to me, as a use of your time).

Ok, that’s good to know. I think the explanations you gave so far are good enough for my purposes at this point. (You might want to consider posting them somewhere easier to find with a warning similar to this one, so people don’t try to figure out what your intuitions are from the OL survey paper like I did.)

I’m confused about you saying this; it seems like this is incompatible with using the AI to substantially assist in doing big things like preventing nuclear war. You can split a big task into lots of small decisions such that it’s fine if a random independent small fraction of decisions are bad (e.g. by using a voting procedure), but that doesn’t help much, since it’s still vulnerable to multiple small decisions being made badly in a correlated fashion; this is the more likely outcome of the AI’s models being bad rather than uncorrelated errors.

Put in other words: if you’re using the AI to do a big thing, then you can’t section off “avoiding catastrophes” as a bounded subset of the problem, it’s intrinsic to all the reasoning the AI is doing.

I totally agree that the risk of catastrophic failure is an inevitable part of life and we can’t split it off; I spoke carelessly.

I am mostly talking about the informal breakdown in this post.

My intuition is that the combination of these guarantees is insufficient for good performance and safety.

Say you’re training an agent; then the AI’s policy is π:O→ΔA for some set O of observations and A of actions (i.e. it takes in an observation and returns an action distribution). In general, your utility function will be a nonlinear function of the policy (where we can consider the policy to be a vector of probabilities for each (observation, action) pair). For example, if it is really important for the AI to output the same thing given observation “a” and given observation “b”, then this is a nonlinearity. If the AI is doing something like programming, then your utility is going to be highly nonlinear in the policy, since getting even a single character wrong in the program can result in a crash.

Say your actual utility function on the AI’s policy is U. If you approximate this utility using average performance, you get this approximation:

Vp,f(π):=Eo∼p,a∼π(o)[f(o,a)]

where p is some distribution over observations and f is some bounded performance function. Note that Vp,f is linear.

Catastrophe avoidance can handle some nonlinearities. Including catastrophe avoidance, we get this approximation:

Vp,f,c(π):=Eo∼p,a∼π(o)[f(o,a)]−maxo∈O[c(o,π(o))]

where c is some bounded catastrophe function.
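To make the two approximations concrete, here is a toy computation (observations, actions, p, f, and c all invented for illustration):

```python
# Toy instantiation of V_{p,f}(pi) and V_{p,f,c}(pi).
# pi maps each observation to a distribution over actions.
pi = {
    "a": {"left": 0.9, "right": 0.1},
    "b": {"left": 0.2, "right": 0.8},
}
p = {"a": 0.5, "b": 0.5}                       # distribution over observations
f = lambda o, a: 1.0 if a == "left" else 0.0   # bounded performance function
c = lambda o, dist: dist["right"]              # bounded catastrophe function

# Linear part: E_{o~p, a~pi(o)} [f(o, a)].
V_linear = sum(p[o] * sum(prob * f(o, a) for a, prob in pi[o].items())
               for o in p)

# Catastrophe term: max over observations of c(o, pi(o)).
# The max is what lets this approximation capture *some* nonlinearity in pi.
worst_catastrophe = max(c(o, pi[o]) for o in pi)

V = V_linear - worst_catastrophe
print(V_linear, worst_catastrophe, V)
```

V_linear is linear in the vector of action probabilities, as stated above; only the max term reacts to the single worst observation rather than the average one.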

I don’t see a good argument for why, for any U you might have over the policy, there are some easy-to-find p,f,c such that approximately maximizing Vp,f,c yields a policy that is nearly as good as if you had approximately maximized U.

Some examples of cases I expect to not work with linear+catastrophe approximation:

Some decisions are much more important than others, and it’s predictable which ones. (This might be easy to handle with importance sampling but that is an extension of the framework, and you have to handle things like “which observations the AI gets depends on the AI’s policy”)

The importance of a decision depends on the observations and actions of previous rounds. (e.g. in programming, typing a bad character is important if no bad characters have been typed yet, and not important if the program already contains a syntax error)

The AI has to be predictable; it has to do the same thing given similar-enough observations (this is relevant if you want different AIs to coordinate with each other)

The AI consists of multiple copies that must meet at the same point; or the AI consists of multiple copies that must meet at different points.

You could argue that we should move to an episodic RL setting to handle these; however, I think my arguments continue to apply if you replace “AI takes an action” with “AI performs a single episode”. Episodes have to be short enough that they can be judged efficiently on an individual basis, and the operator’s utility function will be nonlinear in the performance on each of these short episodes.

My criticism here is pointing at a general problem with feedback-optimization systems. One interpretation of this criticism is that feedback-optimization systems are too dumb to do relevant long-term reasoning, even with substantial work in reward engineering.

Evolution provides some evidence that feedback-optimization systems can, with an extremely high amount of compute, eventually produce things that do long-term reasoning (though I’m not that confident in the analogy between evolution and feedback-optimization systems). But then these agents’ long-term reasoning is not explained by their optimization of feedback. So understanding the resulting agents as feedback-optimizers is understanding them at the wrong level of abstraction (see this post for more on what “understanding at the wrong level of abstraction” means), and providing feedback based on an overseer’s values would be insufficient to get something the overseer wants.

See this post for discussion of some of these things.

Other points beyond those made in that post:

The easy way to think about performance is using marginal impact.

There will be non-convexities—e.g. if you need to get 3 things right to get a prize, and you currently get 0 things right, then the marginal effect of getting an additional thing right is 0 and you can be stuck at a local optimum. My schemes tend to punt these issues to the overseer, e.g. the overseer can choose to penalize the first mistake based on their beliefs about the value function of the trained system rather than the current system.
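That non-convexity is easy to see in a toy reward landscape (purely illustrative):

```python
# Reward is paid only when all 3 subtasks are right; the marginal value of
# getting one more thing right is zero everywhere below the threshold.
def prize(n_right):
    return 1.0 if n_right >= 3 else 0.0

marginal = [prize(n + 1) - prize(n) for n in range(3)]
print(marginal)  # zero marginal reward until the final step
```

A learner following marginal reward gets no signal at 0, 1, or 2 things right; the overseer's shaped penalty (based on the value function of the trained system) is playing the role of a smoother proxy reward here.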

To the extent that any decision-maker has to deal with similar difficulties, then your criticism only makes sense in the context of some alternative unaligned AI that might outcompete the current AI. One alternative is the not-feedback-optimizing cognition of a system produced by gradient descent on some arbitrary goal (let’s call it an alien). In this case, I suspect my proposal would be able to compete iff informed oversight worked well enough to reflect the knowledge that the aliens use for long-term planning.

Note that catastrophe avoidance isn’t intended to overcome the linear approximation. It’s intended to prevent the importance weights from blowing up too much. (Though as we’ve discussed, it can’t do that in full generality—I’m going to shovel some stuff under “an AI that is trying to do the right thing” and grant that we aren’t going to actually get the optimal policy according to the overseer’s values. Instead I’m focused on avoiding some class of failures that I think of as alignment failures.)

I’m not including issues like “you want your AI to be predictable”; I’d say that “be very predictable” is a separate problem, just like “be really good at chess” is a separate problem. I agree that our preferences are better satisfied by AIs that solve these additional problems. And I agree that if our alignment techniques are fundamentally incompatible with other techniques that help with these desiderata then that should be considered an open problem for alignment (though we may end up disagreeing about the importance / about whether this happens).

If this is true, it seems like a really important point that I need to understand better. Any chance you can surface this argument into a top-level post, so more people can see it and chime in with their thoughts? In particular I’d like to understand whether the problem is caused by current ML approaches not offering good/useful enough performance guarantees, which might change in the future, or if this a fundamental problem with ML and feedback-optimization that can’t be overcome. Also, can you suggest ways to test this empirically?

(I also can’t quite tell to what extent Paul’s response has addressed your criticism. If you decide to write a post maybe you can explain that as well?)

About the inductive argument for the two guarantees, suppose we can show that Amplify(A) satisfies the two guarantees and we’re trying to show this for Distill(Amplify(A)).

For the inductive step to go through, Amplify(A) has to detect and disapprove of any incorrigible optimization that may be introduced by Distill, but we don’t have a guarantee that Amplify(A) can do this, because the performance guarantee is “on average” so it might not detect some obscure kinds of incorrigible optimizations, and the control guarantee only says Amplify(A) won’t itself do any incorrigible optimization but not that it can detect every incorrigible optimization in others.

It seems like for the argument to go through, you need a third guarantee that says something like “can detect any violations of the control guarantee in an agent B as long as B belongs to C (i.e., some class of AIs that includes Distill(Amplify(A)), Distill(Amplify(Distill(Amplify(A)))), and so on), as well as any violations of this guarantee in B.” Does that make sense?

(There are things about the performance guarantee that I don’t entirely understand, like what distribution you’re averaging over exactly, so possibly the way you’re thinking about it already implies this third guarantee, but in that case it still seems useful to state it explicitly.)

The task that Amplify(A) is doing is something like “Find an input on which the distilled agent will behave malignly, if any exist.” (That literal query won’t work; one important research goal is to find something along those lines that actually works.)

If Amplify(A) performs well on average, then it finds such inputs with reasonably high probability. To actually get this conclusion you obviously need to argue both about the properties of Amplify(A) and the properties of the problem. But the main point is that it has no dependence on the worst case behavior of Amplify(A).

If Amplify(A) can’t solve this task—if we have no way to train a model to certifiably be benign—then we are just screwed anyway.

(I agree that the concern you’ve mentioned potentially afflicts one of the proposals in my post, inductively defining specifications. It’s not clear to me whether it’s a serious problem, I’ll think about it at some point.)

We’ll construct the distribution in order to make the performance guarantee useful.

That is, for the final agent, we take the distribution over tasks I care about. For preceding agents, we take the distribution over subtasks that arise while trying to solve the tasks I care about (i.e. by applying amplification to the problem of evaluating a random task from the next agent’s distribution / to the problem of certifying that the next agent is benign).

But that’s not enough for the inductive step to go through, because it’s “high probability” only if you pick a random distilled agent out of the distribution over which the guarantee applies. As a counter-example, suppose Amplify(A) performs well on the task “Find an input on which the distilled agent will behave malignly, if any exist” for every distilled agent in the distribution, except the actual Distill(Amplify(A)). If it does that, then it will satisfy the performance guarantee (since it does well on average) but fail to train a Distill(Amplify(A)) that satisfies the control guarantee.

The final agent may be a superintelligent AI faced with tasks that we can’t currently foresee. I don’t know how to define a distribution such that it covers those tasks, without also covering many other tasks (that I won’t care about). Is it ok to define the distribution extremely broadly, or does it need to be the actual distribution that the AI will face in some sense? (Or do we not actually need to explicitly define such a distribution? ETA: Does the overseer training the first distilled agent need to know the final distribution?)

You keep giving Amplify(A) the task “Try to verify that the current agent is benign” over and over again.

I’m imagining collecting the problems that the agent actually faces and training on those.

More precisely, I’m thinking in terms of an online guarantee rather than fixing an actual distribution, since the distribution of tasks in the real world will shift over time.

As the distribution shifts you may get worse performance. The total amount of underperformance over time is basically bounded. Moreover, our competitors will generally face the same problem.

I don’t see how this helps with the inductive argument. The same counter-example I gave in the grandparent comment also applies to this idea.

The top-level distribution for Amplify(A) puts 50% of its mass on “Try to verify that the current agent is benign.” (Or something like 50%; it’s <100% only because you are training A both to evaluate actions and to certify benignness.) If it performs badly on the current agent, then it’s not performing well on average. There is no other distribution over distilled agents that we are defining the performance guarantee with respect to.

Because the current agent is being trained over time, we really want to talk about online guarantees rather than assuming a fixed distribution. Amplify(A) is being trained on the sequence of tasks “Try to verify that the agent at t=0 is benign,” “Try to verify that the agent at t=1 is benign,” “Try to verify that the agent at t=2 is benign,” etc.

If there was a learnable model that performs well on most of the subtasks of “Try to verify that the current agent is benign,” then we get to infer that A in fact performs well on most of these subtasks. If amplification preserves average-case performance, then we get to conclude that Amplify(A) performs well on most tasks of the form “Try to verify that the current agent is benign.” If Amplify(A) consistently concludes that the agent is benign, e.g. if Amplify(A) consistently fails to find an input on which the agent behaves malignly, then we hope to conclude that the agent is in fact benign at most steps.
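The certification loop described above can be sketched as follows (every name here is hypothetical; this is the shape of the argument, not an implementation):

```python
def certified_benign(agent, amplify_verify, n_rounds=100):
    """Repeatedly ask the (hypothetical) verifier Amplify(A) to hunt for an
    input on which `agent` behaves malignly.

    amplify_verify(agent) is a stand-in that returns a candidate bad input,
    or None if it failed to find one this round.
    """
    for _ in range(n_rounds):
        bad_input = amplify_verify(agent)
        if bad_input is not None:
            return False, bad_input  # certification fails; retrain on this input
    # If the verifier performs well on average, many consecutive failures to
    # find a bad input are evidence (not proof) that none exists.
    return True, None
```

The average-case guarantee on Amplify(A) is doing the work in the comment above: a verifier that succeeds on most rounds will, over many rounds, surface a bad input if one is findable at all.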

Thanks, I think I’m finally starting to understand your thinking here.

Sounds like I really need to figure out how online guarantees work in relation to your scheme. (It’s not clear to me how to map between your setup and the setup in the online learning survey that you cited, e.g., what corresponds to “receive true answer” after every prediction and what corresponds to the set of hypotheses that regret is being measured against.) I’ve been putting it off and just assuming a fixed distribution because you wrote “Overall, I don’t think this distinction is a huge deal.”

How do we determine this? (What if the current agent has moved into a part of the agent space such that there was no longer a learnable model that performs well on most of the subtasks of “Try to verify that the current agent is benign”?)

What does “most” mean here and why is that good enough? (If there are more than 100 steps and “most” means 99% then you can’t rule out having malign agents in some of the steps, which seems like a problem?)

As part of designing a technique for optimizing worst-case performance, we need to argue that the overseer’s job isn’t too hard (so that Amplify(A) is qualified to perform the task). If we remove this restriction, then optimizing worst case performance wouldn’t be scary—adversarial training would probably work fine.

See the section “Limited feedback (bandits)” starting on page 177. Online learning doesn’t require seeing the true answer.

We don’t compete with any explicit set of hypotheses. When we say that the “model can learn to do X” then we are saying roughly “the model competes with a set of hypotheses including one that does X.”

Most means most agents over the training process. But:

Once you have an agent that seems OK, you can freeze that agent and then run the certification process for significantly longer.

I expect the model is probably going to have some probability of behaving malignly on any given input anyway based on internal stochasticity. So you probably already need to do something based on ensembling / ensuring sufficient per-timestep robustness.

I’m still having trouble matching up that section with your setup. (It assumes that the agent sees the value of the loss function after every prediction, which I think is not the case in your setup?) Is Section 6 on Online Active Learning in this more comprehensive survey closer to what you have in mind? If so, can you say which of the subsections of Section 6 is the closest? Or alternatively, can you explain the actual formal setup and guarantee you’re hoping ML research will provide, which will be sufficient to accomplish what you need? (Or give an example of such formal setup/guarantee if multiple ones could work?)

Also, what if in the future the most competitive ML algorithms do not provide the kinds of guarantees you need? How likely do you think that is, and what’s the expected outcome (for your approach and AI alignment in general) conditional on that?

Don’t we need to know the size of the set of hypotheses in order to derive a regret bound?

You do get to see the loss function, if you couldn’t see the loss function then we couldn’t train A.

Amplify(A) is computed by calling A a bunch of times. The point of amplification is to set things up so that Amplify(A) will work well if the average call to A works well. A random subset of the calls to A are then evaluated (by calling Amplify(A)), so we get to see their loss.

(Obviously you get identical expected regret bounds if you evaluate an x fraction of decisions at random, just with 1/x times more regret—you get a regret bound on the sequence whose loss you evaluate, and that regret is at least x times the total.)
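That subsampling claim can be checked with a quick simulation (made-up losses; x is the evaluated fraction):

```python
import random

random.seed(0)
losses = [random.random() for _ in range(100_000)]  # per-decision losses
x = 0.1  # fraction of decisions whose loss we actually evaluate

# Evaluate each decision independently with probability x.
sampled_total = sum(l for l in losses if random.random() < x)

# In expectation the evaluated loss is x times the total, so a regret bound
# on the evaluated subsequence translates into a bound on the whole sequence
# that is 1/x times larger.
print(sampled_total, x * sum(losses))
```

With a large number of decisions the evaluated total concentrates tightly around x times the full total, which is why random evaluation of a small fraction of calls to A suffices.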

What does d (the number of bandit arms) correspond to in your setup? I’m guessing it’s the size of the hypothesis class that you’re competing with, which must be exponentially large? Since the total regret bound is 2√(d log(d) T) (page 181, assuming you see the loss every round), it seems that you’d have to see an exponential number of losses (i.e., calls to Amplify(A)) before you could get a useful per-round guarantee. What am I missing here?

The d under the log is the size of the hypothesis class (which is exponential in this case). The other d parameterizes the difficulty of the exploration problem. Exp4 is the simplest algorithm that pulls those two parameters apart (though it’s obviously not a good algorithm for this case). It’s hard to formally capture “the difficulty of the exploration problem”, but intuitively it’s something like what you’d expect—how many options do you have to try at random before you are guaranteed to get useful signal? This is upper bounded by the number of output options. You can get tighter formal bounds in many cases but it’s one of those things where the real bound is kind of a problem-specific mess.
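For a sense of scale, here are illustrative numbers plugged into a bound of this shape, writing log_n for the log of the hypothesis-class size (the “d under the log”) and d for the exploration difficulty; all values are invented:

```python
import math

def regret_bound(d, log_n, T):
    # Shape of an Exp4-style bound: 2 * sqrt(d * log(N) * T),
    # with d the exploration difficulty and N the hypothesis-class size.
    return 2 * math.sqrt(d * log_n * T)

d = 10           # assumed exploration difficulty (options to try at random)
log_n = 10**6    # log of an exponentially large class: N = e^(10^6)
for T in [10**8, 10**10, 10**12]:
    per_round = regret_bound(d, log_n, T) / T
    print(f"T={T:.0e}: per-round regret <= {per_round:.3g}")
```

Because the class size enters only through its log, an exponentially large class contributes a merely polynomial factor; the per-round guarantee still decays like 1/√T, so the bound becomes useful without an exponential number of evaluated losses, provided d itself stays small.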

There are two hopes for not needing exponential time:

In imitation+RL, the exploration difficulty should depend on something like the accuracy of your imitation rather than on the size of the underlying domain (or maybe even better). You don’t have to try everything at random to get signal, if you have access to an expert who shows you a good option in each round. We can train A with demonstrations (we can get a demonstration just by calling Amplify(A)).

Many RL problems have tractable exploration despite large domains for a whole mess of complicated reasons.

(And note that we’ll be able to tell whether this is working, so in practice this is probably something that we can validate empirically—not something where we are going up against adversarial optimization pressure and so need a provable bound.)

This doesn’t seem likely to be the place where my approach gets stuck, and I don’t think it seems worth thinking about it that much until we’ve made much more progress on understanding the task that Amplify(A) actually needs to perform for robustness and on how amplification works more broadly, since (a) those are way more likely to be dealbreakers, in which case this doesn’t matter, (b) it’s relatively likely that other progress will change our conception of the learning theory problem we need to solve or obsolete it entirely.

If you want to understand these intuitions in detail it likely requires doing the equivalent of a course in learning theory and reading a bunch of papers in the area (which doesn’t sound worth it to me, as a use of your time). Overall this isn’t something where I feel excited about engaging in detail, except with experts in the relevant areas who I expect to know something or have intuitions that I don’t.

I have some additional questions that I’m still not sure about, that maybe you can answer without spending too much time.

It seems that if you can get a good online guarantee you are immune to distributional shifts (the online guarantee gives you a bound that’s not based on any assumptions about the input data). But to be practically meaningful the hypothesis class (on which the regret bound is based) has to include a hypothesis that can approximate X well over all possible inputs, otherwise you could prove a good regret bound, and even perform well in practice, but still suffer badly from some future distributional shift, right?

Are you thinking that it will be possible to design X and the ML algorithm together such that we’ll know it can learn to approximate X well over all possible inputs, or alternatively are you thinking of some sort of online guarantee that does make some assumptions about the input data (e.g., that its distribution doesn’t change too quickly)? Or both (in a disjunctive sense)?

Are there any relevant papers you can point to that gave you the intuitions that you have on these questions?

The online guarantee says that on average, over a large sequence of trials, you will perform well. But if I train my system for a while and then deploy it, it could perform arbitrarily poorly after deployment (until I incorporate corrective data, which will generally be impossible for catastrophic failures).

I don’t understand this (might be related to the previous point). If there is a hypothesis that performs well over the sequence of actual cases that you train on, then you will perform well on the sequence of actual cases that you train on. For any other inputs, the online guarantee doesn’t say anything.

I don’t think that anything will be learning to approximate anything else well over all possible inputs.

What does “X” refer to here?

I’m not imagining making any assumptions on the input data.

I don’t think I fully understood the questions.

Take the 2√(d log(d) T) regret bound as an example. Suppose d log(d) is small (what I meant by “a good online guarantee”); then total regret is essentially bounded by √T, which means that if you max out the regret during training, after deployment it shouldn’t accumulate more than about 1/√T regret per time step, regardless of distributional shifts. Am I misunderstanding something here?

It’s what we were talking about previously, the set of subtasks of “Try to verify that the current agent is benign.”

Earlier, you wrote:

And I thought one possible interpretation of “do X” is “approximates X well over all possible inputs”. If that’s not what you meant by “do X”, what does it mean?

To step back a bit, I’m finding it hard to understand online guarantees because the “regret bounds” being proven are all relative to a class of hypotheses, but in practice we actually care about performance relative to the ground truth, so in addition to the kinds of regret bounds we see in that OL survey paper, we also need to characterize the performance of the hypothesis class relative to the ground truth, right? What do you expect that characterization to look like? I’m trying to understand your thinking / intuitions about this.

You aren’t misunderstanding the result; what you say makes complete sense. Now I understand your questions.

“Max out the regret during training” is extremely infeasible—for example, you’d need to know the exact regret bound (which we don’t), and much more importantly you’d need to do perfect adversarial training to find every possible mistake the model makes (which we can’t).

I’m just hoping to directly use the bound on the total loss.

That is, suppose that I’m trying to apply Amplify(A) to some “not too hard” task. Then I want to argue something like:

1. If we’ve built Amplify() correctly, then in order for Amplify(A) to succeed at a “not too hard” task, A just needs to solve some “not too hard” subtasks. (This is the main constraint on Amplify() other than preserving alignment, without this constraint it would be relatively easy.)

2. Because those subtasks are “not too hard,” we think that A has a regret bound vs. some model that solves those tasks well (on average). Roughly speaking this is the definition of “not too hard,” more on this below.

3. So if we keep applying Amplify(A) to the task, and training A on the resulting subtasks, then eventually A will solve those tasks well (on average). At that point, Amplify(A) succeeds at the original task (on average).

4. This argument goes through even if the task is changing over time, as long as the “not too hard” bound applies uniformly over the sequence (such that there is a single model in step #2 that succeeds for every element in the sequence).

In the original example we were discussing, I’m applying Amplify(A) to the task {test whether model B is robust}. When I talk about robustness as a subproblem of alignment, I mean precisely setting up the task {test whether model B is robust} in such a way that it’s “not too hard,” i.e. such that we can apply the above argument to the task {test whether model B is robust}.

Performing well on average is adequate for this task because I can

keepapplying Amplify(A) to the task {test whether model B is robust}, over and over again. If it fails 10% of the time, that just means 10% of the tests fail, not that model B can fail catastrophically on 10% of inputs or with 10% probability. (This is the magic we want out of a solution to robustness—to turn an average-case guarantee into a worst-case guarantee.)The X comes with some performance measure. I mean that the model performs well enough on average over the distribution of subtasks.

(This still depends on the distribution of top-level tasks. For now you can imagine imposing this requirement for the worst case distribution of top-level tasks that can occur during training, though I think we can be a little bit more precise in practice.)

Yes, we need to argue that there is some hypothesis in the class that is able to perform well. This is what I mean by “not too hard.” Ultimately we will be assuming that our ML is able to do something impactful in the world, and then trying to argue that

ifit was able to do that impactful thing, then we could also solve the subtasks necessary to do the same impactful thing safely (since the goal of this approach is to compete with unaligned ML).In order to argue that a task is not that hard, we will use some combination of:

The assumption that our ML is good enough at some concrete tasks whose difficulty doesn’t scale with the sophistication of the models we are training. This can be verified empirically in advance, and once it’s true it tends to become more true.

The assumption that our ML system is good enough at doing something impactful in the world. (Which we’ll combine with some argument about how the tasks we are interested in are not harder than that task, or “not much harder” in the sense that the difference falls into the preceding category, of “easy” tasks that we’ve demonstrated empirically that our model can solve.)

Overall, things are not really going to be this formal, but hopefully this helps communicate the direction of my thinking/hopes.

If you can’t max out the regret during training, then I’m having trouble seeing how to make use of such a regret bound the way you want to use it. Let’s say the current total regret is 1000 less than the provable regret bound. Then as far as the provable bound is concerned, the model could answer the next 1000 questions incorrectly and still satisfy the bound, so you can’t just repeat a task some number of times and then conclude that there’s a small probability that all of the answers will be wrong.

If A satisfies a regret bound, then:

For any sequence of queries S, and any model M in the comparison class...

(average performance of A on S) > (average performance of M on S) - (arbitrarily small constant)

This gives you a lower bound on (average performance of A on S). That’s what I want.

Yes, you could get a bad result 1000 times in a row. To guarantee a good result in that setting, you’d need to run 1001 times (which will still probably be a tiny fraction of your overall training time).
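The “run 1001 times” arithmetic can be made concrete with a toy model (not from the source): suppose answers are scored 0 (wrong) or 1 (right), some model M in the comparison class answers every query correctly, and the bound says A’s cumulative regret against M never exceeds a budget R. Each wrong answer then costs one unit of regret, so at most R answers in the whole sequence can be wrong:

```python
# Toy model of exploiting a cumulative regret bound. Hypothetical
# setup: 0/1-scored answers, a reference model that is always right,
# and a provable bound capping A's total regret at regret_budget.
# Every wrong answer consumes one unit of the budget.

def min_correct(n_repeats: int, regret_budget: int) -> int:
    """Guaranteed number of correct answers when the same query is
    repeated n_repeats times under a remaining regret budget."""
    return max(0, n_repeats - regret_budget)

# With 1000 units of regret still unspent, 1000 repeats could all be
# wrong, but 1001 repeats must contain at least one correct answer.
assert min_correct(1000, 1000) == 0
assert min_correct(1001, 1000) == 1
```

This is why the guarantee is only on averages over the sequence: the bound never says *which* repetition is the good one, only that the budget of bad answers eventually runs out.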

What if during training you can’t come close to maxing out regret for the agents that have to be trained with human involvement? That “missing” regret might come due at any time after deployment, and has to be paid with additional oversight/feedback/training data in order for those agents to continue to perform well, right? (In other words, there could be a distributional shift that causes the agents to stop performing well without additional training.) But at that time human feedback may be horribly slow compared to how fast AIs think, thus forcing IDA to either not be competitive with other AIs or to press on without getting enough human feedback to ensure safety.

Am I misunderstanding anything here? (Are you perhaps assuming that we can max out regret during training for the agents that have to be trained with human involvement, but not necessarily for the higher level agents?)

Yes. (This is true for any ML system, though for an unaligned system the new training data can just come from the world itself.)

Yeah, I’m relatively optimistic that it’s possible to learn enough from humans that the lower-level agent remains universal (+ aligned etc.) on arbitrary distributions. This would probably be the case if you managed to consistently break queries down into simpler pieces until arriving at very simple queries. And of course it would also be the case if you could eliminate the human from the process altogether.

Failing either of those, it’s not clear whether we can do anything formally (vs. expanding the training distribution to cover the kinds of things that look like they might happen, having the human tasks be pretty abstract and independent from details of the situation that change, etc.) I’d still expect to be OK but we’d need to think about it more.

(I still think it’s 50%+ that we can reduce the human to small queries or eliminate them altogether, assuming that iterated amplification works at all, so I would prefer to start with the “does iterated amplification work at all” question.)

This is kind of surprising. (I had assumed that you need a provable bound since you talk about guarantees and cite a paper that talks about provable bounds.)

If you have some ML algorithm that only has an exponential provable bound but works well in practice, aren’t you worried that you might hit a hard instance of some task in the future that it would perform badly on, or that a context shift causes a whole bunch of tasks to become harder to learn? Is the idea to detect that at run time and either pay the increased training cost or switch to another approach if that happens?

Ok, that’s good to know. I think the explanations you gave so far are good enough for my purposes at this point. (You might want to consider posting them somewhere easier to find, with a warning similar to this one, so people don’t try to figure out what your intuitions are from the OL survey paper like I did.)