# Infrafunctions and Robust Optimization

This will be a fairly important post. Not one of those obscure result-packed posts, but something a bit more fundamental that I hope to refer back to many times in the future. It’s at least worth your time to read this first section up to its last paragraph.

There are quite a few places where randomization would help in designing an agent. Maybe we want to find an interpolation between an agent picking the best result, and an agent mimicking the distribution over what a human would do. Maybe we want the agent to do some random exploration in an environment. Maybe we want an agent to randomize amongst promising plans instead of committing fully to the plan it thinks is the best.

However, all of these run into the standard objection that any behavior like this, where a randomized action is the best thing to do, is unstable as the agent gets smarter and has the ability to rewrite itself. If an agent is randomizing to sometimes take actions that aren't optimal according to its utility function, then there will be an incentive for the agent to self-modify to eliminate its randomization into those suboptimal actions.

The formalization of this is the following proposition.

**Proposition 1:** *Given some compact metric space of options $X$, if $U : X \to \mathbb{R}$ is a bounded function, then $\mu \in \Delta X$ satisfies $\mathbb{E}_{\mu}[U] = \sup_{x \in X} U(x)$ if and only if $\mu$ puts probability 1 on the set $\{x : U(x) = \sup_{x' \in X} U(x')\}$.*

Intuitively, what this is saying is that the only possible way for a mixture of options to be an optimal move is if each component option is an optimal move. So, utility functions can *only* give you randomization behavior if the randomization is between optimal actions. The set of such optimal actions will typically only contain a single point. And so, in general, for *any utility function at all*, an agent using it will experience a convergent pressure towards deterministic decision-making.
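A quick numerical illustration of Proposition 1, on a hypothetical 5-option example (the numbers are made up for the sketch): no mixture can beat the best option, and a mixture attains the maximum only when all its mass sits on argmax points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-option example: U is an arbitrary bounded utility function.
U = np.array([0.3, 0.9, 0.9, 0.1, 0.5])
best = U.max()

# Expected utility of any mixture is a convex combination of the U-values,
# so it can only reach the maximum if all probability mass sits on argmax points.
for _ in range(1000):
    mu = rng.dirichlet(np.ones(5))      # a random mixed strategy
    assert mu @ U <= best + 1e-12

# A mixture supported entirely on the two argmax options IS still optimal:
mu_star = np.array([0.0, 0.4, 0.6, 0.0, 0.0])
assert np.isclose(mu_star @ U, best)
```

Here the argmax set has two points, so some randomization survives; with a generic utility function the argmax set is a single point and the optimal policy is deterministic.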

Every single clever alignment trick involving an agent behaving randomly or sampling from a distribution is thereby guaranteed to fail, as it’s not stable under reflection as the agent gets smarter, for anything worthy of being called an agent (in the sense that it has an implicit utility function and acts to achieve it).

The rest of this post will be about how the above sentence is false. There’s a mathematically principled, reflectively stable, way an agent can be, where randomization behavior persists. No matter how smart it gets, it won’t want to remove its randomization behavior. Reflectively stable quantilizers are back on the menu, as are reflectively stable human-imitators, reflectively stable Thompson samplers, and more.

**What’s an Infrafunction?**

Intuitively, just as infradistributions are a generalization of probability distributions, infrafunctions are a generalization of functions/random variables in the same direction. The next paragraph will be informal (and somewhat wrong) to not clog it with caveats.

The Fundamental Theorem of Inframeasures says that there are two ways of viewing inframeasures. The first way is to view an inframeasure as a closed convex set of measures, where the worst-case measure is picked. You don’t know what distribution in the set will be picked by reality, and so you model it as an adversarial process and plan for the worst-case. As for the second way to view an inframeasure, the thing you *do* with probability distributions is to take expectations of functions with them. For instance, the probability of an event is just the expected value of the function that’s 1 if the event happens, and 0 otherwise. So an inframeasure may also be viewed as a functional that takes a function as an input, and outputs the expectation value, and which must fulfill some weak additional properties like concavity. Measures fulfill the much stronger property of inducing a *linear* functional $f \mapsto \mathbb{E}_m[f]$.

Moving away from that, it’s important to note that the vector space of continuous functions $C(X)$, and the vector space of (finite signed) measures on $X$ (denoted $M(X)$), are dual to each other. A function and a measure are combined to get an expectation value. $(f, m) \mapsto \mathbb{E}_m[f]$ (ie, taking the expectation) is the special function of type $C(X) \times M(X) \to \mathbb{R}$. Every continuous linear function $C(X) \to \mathbb{R}$ corresponds to taking expectations with respect to some finite signed measure $m$, and every continuous linear function $M(X) \to \mathbb{R}$ corresponds to taking expectations with respect to some continuous function $f$.

Since the situation is so symmetric, what happens if we just take all the mathematical machinery of the Fundamental Theorem of Inframeasures, but swap measures and functions around? Well… compare the next paragraph against the earlier paragraph about the Fundamental Theorem of Inframeasures.

The Fundamental Theorem of Infrafunctions says that there are two ways of viewing infrafunctions. The first way is to view an infrafunction as a closed convex set of functions $X \to \mathbb{R}$, where the worst-case function is picked. You don’t know what function in the set will be picked by reality, and so you model it as an adversarial process and plan for the worst-case. As for the second way to view an infrafunction, a thing you *do* with functions $X \to \mathbb{R}$ is you combine them with a probability distribution to get an expectation value. So an infrafunction may also be viewed as a function that takes a probability distribution as an input, and outputs the expectation value, and which must fulfill some weak additional properties like concavity. Functions fulfill the much stronger property of inducing a *linear* function $\mu \mapsto \mathbb{E}_{\mu}[f]$.

For the following theorem, a set of functions $F$ is called upper-complete if, whenever $f \in F$ and $g \geq f$, $g \in F$ as well. And a function $f$ will be called minimal in $F$ if $g \in F$ and $g \leq f$ implies that $g = f$.

**Theorem 1: Fundamental Theorem of Infrafunctions** *If $X$ is a compact metric space, there is a bijection between concave upper-semicontinuous functions of type $\Delta X \to \mathbb{R}$, and closed convex upper-complete sets of continuous functions $X \to \mathbb{R}$.*

**Conjecture 1: Continuity=Compactness** *If $X$ is a compact metric space, there is a bijection between concave continuous functions of type $\Delta X \to \mathbb{R}$, and closed convex upper-complete sets of continuous functions $X \to \mathbb{R}$ where the subset of minimal functions has compact closure.*

Ok, so, infrafunctions can alternately be viewed as concave (hill-like) functions $\Delta X \to \mathbb{R}$, or closed convex upwards-complete sets of continuous functions $X \to \mathbb{R}$.

Effectively, this is saying that *any* concave scoring function on the space of probability distributions (like the negative KL-divergence) can equivalently be viewed as a worst-case process that adversarially selects functions. For any way of scoring distributions over “what to do” where randomization ends up being the highest-scoring option, the scoring function on $\Delta X$ is *probably* going to curve down (be concave), and so it’ll *implicitly* be optimizing for the worst-case score amongst a set of functions $X \to \mathbb{R}$.
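The set-to-functional direction of this correspondence is easy to check numerically on a finite space. Below is a minimal sketch (the three functions are made up): taking the worst-case expectation over a finite set of functions gives a concave functional on distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Set-of-functions view: three hypothetical functions on a 4-point space.
F = np.array([[1.0, 0.2, 0.5, 0.8],
              [0.4, 0.9, 0.3, 0.7],
              [0.6, 0.6, 0.6, 0.1]])

def infrafunction(mu):
    """Functional view: worst-case expectation over the set F."""
    return (F @ mu).min()

# A minimum of linear functionals is concave: check the mixing inequality
# on random pairs of distributions.
for _ in range(500):
    mu, nu = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    lam = rng.random()
    mix = lam * mu + (1 - lam) * nu
    lower = lam * infrafunction(mu) + (1 - lam) * infrafunction(nu)
    assert infrafunction(mix) >= lower - 1e-9
```

The other direction (recovering the upper-complete set from the concave functional) is what Theorem 1 below supplies.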

Now, going from a desired randomization behavior, to a concave scoring function with that desired behavior as the optimum, to the set of functions the scoring function is implicitly worst-casing over, takes nontrivial mathematical effort. There’s a whole bunch of possible randomization behaviors for which I don’t know what sort of worst-case beliefs induce that sort of randomization.

For instance, what sort of uncertainty around a utility function makes an agent softmax over it? I don’t know. Although, to be fair, I haven’t tried particularly hard.

**Generalization of Quantilization**

However, one example that *can* be worked out (and has already been worked out by Jessica Taylor and Vanessa Kosoy) is quantilization.

If you haven’t seen quantilization before, it’s where you take a reference distribution over actions, $\gamma$, and instead of picking the *best* action to optimize the function $U$, you condition on the event “the action I picked is in the top 1 percent of actions sampled from $\gamma$, in terms of how much $U$ likes it” and sample from *that* distribution.

I mean, it doesn’t have to be 1 percent specifically, but this general process of “take a reference distribution, plot out how good it is w.r.t. some function $U$, condition on the score being beyond some threshold, and sample from that updated distribution” is referred to as quantilization.
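The sampling-based version of this process fits in a few lines. A minimal sketch, with a made-up example (the reference distribution, utility function, and the name `quantilize` are all illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(2)

def quantilize(U, sample_reference, top_fraction=0.01, n=10_000):
    """Sample actions from the reference distribution, keep the top
    `top_fraction` of them as scored by U, and return one uniformly."""
    actions = sample_reference(n)
    scores = U(actions)
    threshold = np.quantile(scores, 1 - top_fraction)
    top = actions[scores >= threshold]
    return rng.choice(top)

# Hypothetical example: the reference distribution is a standard normal over
# a 1-d action space, and U rewards actions near 1.0.
U = lambda a: -(a - 1.0) ** 2
a = quantilize(U, lambda n: rng.standard_normal(n))

# The returned action scores in the top tail of the reference distribution.
assert U(a) >= np.quantile(U(rng.standard_normal(10_000)), 0.95)
```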

Well, if that’s the sort of optimization behavior we’re looking for, we might ask “what sort of concave scoring function on probability distributions has quantilization as the optimum?”, or “what sort of utility function uncertainty produces that scoring function?”

As it turns out, quantilization w.r.t. a reference distribution $\gamma$, where your utility function is $U$, corresponds to worst-casing amongst the following set of functions, for some $\epsilon$:
$$\{V : \mathbb{E}_{x \sim \gamma}[|U(x) - V(x)|] \leq \epsilon\}$$
The epistemic state that leads to quantilization is therefore one where you think that your utility function is unfixably corrupted, but that its deviation from the true utility function is low relative to a given reference distribution $\gamma$. Specifically, if $U$ is the utility function you see, and $V$ is the true utility function which $U$ is a corruption of, you believe that $\mathbb{E}_{x \sim \gamma}[|U(x) - V(x)|] \leq \epsilon$ and have no other beliefs about the true utility function $V$.

You’d be very wary about going off-distribution because you’d think “the corruption can be arbitrarily bad off-distribution because all I believe is that $\mathbb{E}_{x \sim \gamma}[|U(x) - V(x)|]$ is low, so in a region where $\gamma$ is low-probability, it’s possible that $V$ is super-low there”. You’d also be very wary about deterministically picking a single spot where $U$ is high, because maybe the corruption is concentrated on those sparse few spots where $U$ is the highest.

However, if you randomize uniformly amongst the top quantile of $\gamma$ where $U$ is high, this is actually the *optimal* response to this sort of utility function corruption. No matter how the corruption is set up (as long as it’s small relative to $\gamma$), the quantilizing policy is unlikely to do poorly w.r.t. (the agent’s irreducibly uncertain beliefs about) the true utility function.
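A finite sketch of this robustness claim, assuming the corruption can push utilities down arbitrarily far pointwise, subject only to the budget that the expected deviation under the reference distribution (call it gamma) is at most epsilon. All numbers are made up for illustration:

```python
import numpy as np

n = 100
gamma = np.full(n, 1.0 / n)            # uniform reference distribution
U = np.linspace(0.0, 1.0, n)           # observed (possibly corrupted) utility
eps = 0.005                            # corruption budget: E_gamma[|U - V|] <= eps

def worst_case(mu):
    """Worst-case E_mu[V] over the corruption ball: the adversary spends its
    whole budget where mu/gamma is largest, so the penalty is
    eps * max_x mu(x)/gamma(x)."""
    return mu @ U - eps * (mu / gamma).max()

argmax_policy = np.eye(n)[U.argmax()]   # deterministically take the best action
top_decile = (U >= np.quantile(U, 0.9)).astype(float)
quantilizer = top_decile / top_decile.sum()

# The deterministic maximizer is brutally punished by adversarial corruption
# concentrated on its one action; the quantilizer barely notices.
assert worst_case(quantilizer) > worst_case(argmax_policy)
```

The `worst_case` penalty here is the finite-space version of the claim in the next section that worst-casing over an L1 ball subtracts epsilon times the sup of the density of the policy relative to gamma.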

These are some rather big claims. Exactly as stated above, they’ve already been proven a long time ago by people who aren’t me. However, there’s a further-reaching generalization that the above results are a special case of, which allows interpolating between quantilization and maximization, which is novel.

**Introduction to Lp Spaces (skippable)**

If you already know what $L^p$ spaces are, you can skip straight to the general theorem, but if you don’t, time to take a detour and explain them.

Given some nice space $X$ and probability distribution $\gamma$, we can make the vector space of “measurable functions $X \to \mathbb{R}$ which are equivalent w.r.t. $\gamma$”. After all, you can add functions together, and multiply by constants, so it makes a vector space. However, the elements of this vector space aren’t *quite* functions, they’re equivalence classes of functions that are equivalent w.r.t. $\gamma$. Ie, if $f$ and $g$ are two different functions, but $\gamma$ has zero probability of selecting a point where $f \neq g$, then $f$ and $g$ will be the same point in the vector space we’re constructing.

This vector space can be equipped with a norm. Actually, it can be equipped with a *lot* of norms. One for each real number $p$ in $[1, \infty]$. The $L^p$ norm on the space of “functions that are equivalent w.r.t. $\gamma$” is:
$$\|f\|_{p,\gamma} := \left(\mathbb{E}_{x \sim \gamma}[|f(x)|^p]\right)^{1/p}$$
For the $L^\infty$ norm, it’d be
$$\|f\|_{\infty,\gamma} := \inf\{c \geq 0 : \gamma(\{x : |f(x)| > c\}) = 0\}$$
Compare the $p = 2$ case to Euclidean distance!
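On a finite space these norms are one-liners. A minimal sketch with made-up data, also checking the standard fact that for a probability distribution the $L^p$ norms are nondecreasing in $p$, with $L^\infty$ as the limit:

```python
import numpy as np

rng = np.random.default_rng(3)

def lp_norm(f, gamma, p):
    """L^p norm of f w.r.t. the probability distribution gamma (finite case)."""
    if np.isinf(p):
        return np.abs(f).max()      # ess-sup; gamma has full support here
    return (gamma @ np.abs(f) ** p) ** (1 / p)

f = rng.standard_normal(50)
gamma = rng.dirichlet(np.ones(50))  # a random full-support distribution

norms = [lp_norm(f, gamma, p) for p in (1, 2, 4, 8, np.inf)]
assert all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))
```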
So, our set-of-functions which induces quantilization behavior,
$$\{V : \mathbb{E}_{x \sim \gamma}[|U(x) - V(x)|] \leq \epsilon\}$$
can also be expressed as “an $\epsilon$-sized ball around $U$ w.r.t. the $L^1$ norm and $\gamma$”, and this is the core of how to generalize further, for we may ask: what’s so special about the $L^1$ norm? What about the $L^p$ norms for all the other $p \in (1, \infty]$? What do those do?

**Theorem 2: Ball Theorem**

*For any $p \in [1, \infty)$, $\epsilon > 0$, $\gamma \in \Delta X$, and $U : X \to \mathbb{R}$, the infrafunction corresponding to (Knightian uncertainty over the ball of size $\epsilon$ centered at $U$, w.r.t. the $L^p$ norm and $\gamma$), is $$\mu \mapsto \mathbb{E}_{\mu}[U] - \epsilon\left\|\frac{d\mu}{d\gamma}\right\|_{q,\gamma}$$ where $\frac{1}{p} + \frac{1}{q} = 1$. Further, given a function $U$, the optimal $\mu$ to pick is the distribution $a \cdot \max(0, U - c)^{p-1} \cdot \gamma$, for some constants $a$ and $c$.*

Some special cases of this are as follows. For $p = 1$, you get quantilization. Worst-casing over little $L^1$ balls means that your task is to pick the probability distribution $\mu$ which maximizes $\mathbb{E}_{\mu}[U] - \epsilon\|\frac{d\mu}{d\gamma}\|_{\infty,\gamma}$, and this maximizing probability distribution is $\gamma$ rescaled by the function that’s 1 when $U$ exceeds the threshold value $c$ and 0 otherwise (as this is the limit of $\max(0, U - c)^{p-1}$ as $p \to 1$). This can be restated as conditioning $\gamma$ on the event that $U$ exceeds a certain threshold, and so we get quantilizers.

For $p = 2$, you get a more aggressive sort of optimization. Worst-casing over little $L^2$ balls means that your task is to pick the probability distribution $\mu$ which maximizes $\mathbb{E}_{\mu}[U] - \epsilon\|\frac{d\mu}{d\gamma}\|_{2,\gamma}$, and this maximizing probability distribution is $\gamma$ but rescaled linearly with how much $U$ exceeds the threshold value $c$. So, for example, given two points $x$ and $y$, if $U(x) - c = 3(U(y) - c)$, the probability density at $x$ is enhanced by a factor of 3 over what it’d be at $y$.

For $p \to \infty$, you basically just get argmax over the support of the distribution $\gamma$. Worst-casing over little $L^p$ balls for enormous $p$ means that your task is to pick the probability distribution $\mu$ which maximizes $\mathbb{E}_{\mu}[U] - \epsilon\|\frac{d\mu}{d\gamma}\|_{q,\gamma}$ for $q$ near 1, and this maximizing probability distribution is $\gamma$ but rescaled according to $\max(0, U - c)^{p-1}$ for arbitrarily large $p$. Ie, incredibly large at the highest values of $U$, which dominate the other values. So pretty much, a Dirac delta distribution at the best point in the support of $\gamma$.
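The three regimes are easy to see side by side on a finite space. A minimal sketch of the optimal-distribution family, with the threshold set by hand (in the theorem it is determined by epsilon); all data is made up:

```python
import numpy as np

U = np.linspace(0.0, 1.0, 1001)
gamma = np.full(1001, 1.0 / 1001)     # uniform reference distribution
c = 0.9                               # hand-picked threshold for illustration

def ball_optimum(p):
    """Density of the optimal mu w.r.t. gamma, rescaled by max(0, U-c)^(p-1)."""
    if p == 1:
        weights = (U > c).astype(float)        # indicator: quantilization
    else:
        weights = np.maximum(0.0, U - c) ** (p - 1)
    mu = gamma * weights
    return mu / mu.sum()

quant = ball_optimum(1)     # p=1: condition gamma on U exceeding c
lin = ball_optimum(2)       # p=2: density grows linearly past c
sharp = ball_optimum(64)    # large p: nearly all mass near the argmax

assert np.isclose(quant[950], quant[1000])     # flat above the threshold
assert np.isclose(lin[950] / lin[925], 2.0)    # doubling U-c doubles density
assert sharp[-20:].sum() > 0.99                # essentially argmax
```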

There’s a massive amount of work to be done on how various sorts of randomization behavior from agents relate to various sorts of concave scoring rules for distributions, and how those relate with various sorts of (restricted) worst-case assumptions about how the utility function got corrupted.

**Dynamic Consistency??**

But what’s this about agents with infrafunctions as their utility functions being stable under reflection? Well, we’d want an agent to be incentivized to keep its utility function the same (or at least not change it in unpredictable ways) no matter what it sees. Making this more precise, if an agent has a utility (infra)function $U$, then it *should* believe that optimizing for the infrafunction $U|h$ ($U$ but modified in a predictable way to account for having seen history $h$) after seeing history $h$, will produce equal or better results (according to $U$) than optimizing for any competitor (infra)function $V$ after seeing $h$. This is a necessary condition to give the starting agent no incentive to alter the utility function of its future self in an unpredictable way (ie, alter it in a way that differs from $U \mapsto U|h$).

For example, if an agent with an infrafunction $U$ ever ends up thinking “If I was an optimizer for the utility function (not infrafunction!) $V$, I’d do better, hang on, lemme just rewrite myself to optimize that instead”, that would be instability under reflection. That just should not ever happen. Infrafunctions shouldn’t collapse into utility functions.

And, as it turns out, if you’ve got an agent operating in an environment with a utility infrafunction, there is a way to update the infrafunction over time which makes this happen. The agent won’t want to change its infrafunction in any way other than by updating it. However, the update shares an undesirable property with the dynamically consistent way of updating infradistributions. Specifically, the way to update a utility infrafunction (after you’ve seen a history) depends on what the agent’s policy would do in other branches.

If you’re wondering why the heck we need to update our utility infrafunction over time, and why updating would require knowing what happens in alternate timelines, here’s why. The agent is optimizing worst-case expected value of the functions within its set-of-functions. Thus, the agent will tend to focus its marginal efforts on optimizing for the utility functions in its set which have the lowest expected value, in ways that don’t destroy too much value for the utility functions which are already doing well in expectation. And so, for a given function $V$ in $F_U$ (the set induced by the infrafunction $U$), it matters very much whether the agent is doing quite well according to $V$ in alternate branches (it’s doing well in expectation so it’s safe to mostly ignore it in this branch), or whether the agent is scoring horribly according to $V$ in the alternate branches (which means that it needs to be optimized in this branch).

Time to introduce the notation to express how the update works. If $h$ is a finite history, and $\pi_{\neg h}$ is a stochastic partial policy that tells the agent what to do in all situations except where the history has $h$ as a prefix, and $\pi_h$ is a stochastic partial policy that tells the agent what to do in all situations where the history has $h$ as a prefix, then $\pi_{\neg h} \cdot \pi_h$ is the overall policy made by gluing together those two partial policies.

Also, if $e$ is an environment, and $\pi$ is a stochastic policy, $e(\pi)$ refers to the distribution over infinite histories produced by the policy $\pi$ interacting with the environment $e$. By abuse of notation, $e(\pi_{\neg h})$ can be interpreted as a probability distribution on the space of infinite histories plus one extra point, the partial history $h$. This is because, obeying $\pi_{\neg h}$, everything either works just fine and $\pi_{\neg h}$ keeps telling you what your next action is and you build some infinite history, or the partial history $h$ happens and $\pi_{\neg h}$ stops telling you what to do.

The notation $1_{\neg h}$ is the indicator function that’s 1 when the full history lacks $h$ as a prefix, and 0 on the partial history $h$. $V|_h$ is the function $V$, but with a restricted domain so it’s only defined on infinite histories with $h$ as a prefix.

With those notations out of the way, given an infrafunction $U$ (and using $V$ for the utility functions in the corresponding set $F_U$), we can finally say how to define the updated form of $U$, where we’re updating on some arbitrary history $h$, environment $e$, and off-history partial stochastic policy $\pi_{\neg h}$ which tells us how we act for all histories that lack $h$ as a prefix.

**Definition 1: Infrafunction Update** *For an infrafunction $U$ over infinite histories, history $h$, environment $e$, and partial stochastic policy $\pi_{\neg h}$ which specifies all aspects of how the agent behaves except after history $h$, $(U|e,\pi_{\neg h})$, the update of the infrafunction, is the infrafunction corresponding to the set of functions*
$$\left\{\, e(\pi_{\neg h})(h) \cdot V|_h + \mathbb{E}_{e(\pi_{\neg h})}\!\left[1_{\neg h} \cdot V\right] \;:\; V \in F_U \right\}$$

So, basically, what this is doing is it’s taking all the component functions, restricting them to just be about what happens beyond the partial history $h$, scaling them down by the probability of $h$ occurring, and using the behavior of those functions off-h to determine what constant to add to the new function. So, functions which do well off-h get a larger constant added to them than functions which do poorly off-h.

And now we get to our theorem, that if you update infrafunctions in this way, it’s always better (from the perspective of the start of time) to optimize for the updated infrafunction than to go off and rewrite yourself to optimize for something else.

**Theorem 3: Dynamic Consistency** *For any environment $e$, finite history $h$, off-history policy $\pi_{\neg h}$, and infrafunctions $U$ and $V$, we have that*
$$U\left(e\left(\pi_{\neg h} \cdot \operatorname*{argmax}_{\pi_h}\,(U|e,\pi_{\neg h})(e(\pi_{\neg h} \cdot \pi_h))\right)\right) \geq U\left(e\left(\pi_{\neg h} \cdot \operatorname*{argmax}_{\pi_h}\,V(e(\pi_{\neg h} \cdot \pi_h))\right)\right)$$
*Or, restating in words, selecting the after-h policy by argmaxing for $U|e,\pi_{\neg h}$ makes an overall policy that outscores the policy you get by selecting the after-h policy to argmax for $V$.*

Ok, but what does this sort of update mean in practice? Well, intuitively, if you’re optimizing according to an infrafunction, and some of the component functions you’re worst-casing over are sufficiently well-satisfied in other branches, they kind of “drop out”. We’re optimizing the worst-case, so the functions that are doing pretty well elsewhere can be ignored as long as you’re not acting disastrously with respect to them. You’re willing to take a hit according to those functions, in order to do well according to the functions that aren’t being well-satisfied in other branches.

**Why Worst Case?**

Worst case seems a bit sketchy. Aren’t there more sane things to do like, have a probability distribution on utility functions, and combine them according to geometric average? That’s what Nash Bargaining does to aggregate a bunch of utility functions into one! Scott Garrabrant wrote an entire sequence about that sort of stuff!

Well, guess what, it fits in the infrafunction framework. Geometric averaging of utility functions ends up being writeable as an infrafunction! (But I don’t know what it corresponds to worst-casing over). First up, a handy little result.

**Proposition 2: Double Integral Inequality** *If $p \leq 1$, let $\mathbb{E}^p_{\mu}[f]$ be an abbreviation for $\left(\mathbb{E}_{\mu}[f^p]\right)^{1/p}$. Then for all such $p$, and $\mu \in \Delta I$ and $\nu \in \Delta X$ and $f : I \times X \to \mathbb{R}_{\geq 0}$, we have that*
$$\mathbb{E}^p_{i \sim \mu}\left[\mathbb{E}_{x \sim \nu}[f(i, x)]\right] \geq \mathbb{E}_{x \sim \nu}\left[\mathbb{E}^p_{i \sim \mu}[f(i, x)]\right]$$

**Corollary 1: $p$-Averaging is Well-Defined** *Given any distribution $\mu$ over a family of functions or infrafunctions $U_i$, define the $p$-average of this family (for $p \leq 1$) as the function*
$$\nu \mapsto \mathbb{E}^p_{i \sim \mu}[U_i(\nu)]$$
*$p$-averaging always produces an infrafunction.*

**Corollary 2: Geometric Mean Makes Infrafunctions** *The geometric mean of a distribution over utility functions is an infrafunction.*

Proof: The geometric mean of a distribution $\mu$ over utility functions $U_i$ is the function $\nu \mapsto \exp\left(\mathbb{E}_{i \sim \mu}[\log U_i(\nu)]\right)$. However, the geometric mean is the same as the $\mathbb{E}^0$ integral, ie, the $p = 0$ case of the $p$-average. So we get that it’s actually writeable as $\nu \mapsto \mathbb{E}^0_{i \sim \mu}[U_i(\nu)]$. And we can apply Corollary 1 to get that it’s an infrafunction.

So, this $p$-mixing, for $p \in [-\infty, 1]$, is… well, for $p = 1$, it’s just usual mixing. For $p = 0$, it’s taking the geometric average of the functions. For $p = -\infty$, it’s taking the minimum of all the functions. So, it provides a nice way to interpolate between minimization, geometric averaging, and arithmetic averaging, and all these ways of aggregating functions produce infrafunctions.
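On plain numbers (the values an aggregated family of utility functions assigns to one fixed distribution), the interpolation looks like this. A minimal sketch with made-up values:

```python
import numpy as np

def p_mean(values, weights, p):
    """p-average of positive values: (E[v^p])^(1/p), with the geometric
    mean at p=0 and the minimum at p=-inf as limiting cases."""
    values = np.asarray(values, float)
    weights = np.asarray(weights, float)
    if p == 0:
        return np.exp(weights @ np.log(values))
    if np.isinf(p) and p < 0:
        return values.min()
    return (weights @ values ** p) ** (1 / p)

# Hypothetical utility values for one fixed distribution, equally weighted.
v, w = [4.0, 1.0, 2.0], [1 / 3] * 3

# p-means are nondecreasing in p: min <= harmonic <= geometric <= arithmetic.
ps = [-np.inf, -1, 0, 0.5, 1]
means = [p_mean(v, w, p) for p in ps]
assert all(a <= b + 1e-12 for a, b in zip(means, means[1:]))
assert means[0] == 1.0 and np.isclose(means[2], 2.0)
```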

Just don’t ask me what utility functions are actually *in* the infrafunction corresponding to a geometric mixture.

**The Crappy Optimizer Theorem**

Technically this theorem doesn’t actually belong in this post. But it’s close enough to the subject matter of this post to throw it in anyways. It turns out that “every” (not really) vaguely optimizer-ish process can be reexpressed as some sort of ultradistribution. An ultradistribution is basically an infradistribution (a closed convex set of probability distributions), except it maximizes functions instead of minimizing them.

And so, “every” (not really) optimizer-y process can be thought of as just argmax operating over a more restricted set of probability distributions.

Try not to read *too much* into the Crappy Optimizer Theorem. I’d very strongly advise that you take your favorite non-argmax process and work out how it violates the assumptions of the theorem. Hopefully that’ll stop you from thinking this theorem is the final word on optimization processes.

Anyways, let’s discuss this. The type signature of argmax is $(X \to \mathbb{R}) \to X$. Let’s say we’re looking for some new sort of optimizer that isn’t argmax. We want a function $s$ of the same type signature, that “doesn’t try as hard”.

We don’t actually know what $s$ is! It could be anything. However, there’s a function $Q : ((X \to \mathbb{R}) \to X) \to ((X \to \mathbb{R}) \to \mathbb{R})$, which I’ll call the “score-shift” function, and it’s defined as $Q(s)(f) := f(s(f))$. Basically, given an optimizer and a function, you run the optimizer on the function to get a good input, and shove that input through the function to get a score. As a concrete example, $Q(\mathrm{argmax})(f) = \max_x f(x)$. If you have a function $f$, argmax over it, and plug the result of argmax back into $f$, that’s the same as taking the function and producing a score of $\max_x f(x)$.
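The score-shift is a one-liner on a finite space. A minimal sketch (the `lazy` selector is a made-up example of a crappier optimizer):

```python
import numpy as np

# Score-shift on a finite space: run the selector, plug its pick back into f.
def Q(s):
    return lambda f: f[s(f)]

# With argmax as the selector, the attained score is just the max.
argmax = lambda f: int(np.argmax(f))
f = np.array([0.2, 0.7, 0.1])
assert Q(argmax)(f) == 0.7

# A hypothetical crappier selector: argmax restricted to the first two options.
lazy = lambda f: int(np.argmax(f[:2]))
assert Q(lazy)(np.array([0.1, 0.3, 0.9])) == 0.3
```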

So, instead of studying the magical optimizer black box $s$, we’ll be studying $Q(s)$ instead, and characterizing the optimization process by what scores it attains on various functions. There are four properties in particular, which it seems like any good optimizer should fulfill.

*1: $c$-additivity*
For any constant function $c$ and function $f$, $Q(s)(f + c) = Q(s)(f) + c$.

*2: Homogeneity*
For any $a \geq 0$ and function $f$, $Q(s)(a \cdot f) = a \cdot Q(s)(f)$.

*3: Subadditivity*
For any functions $f, g$, $Q(s)(f + g) \leq Q(s)(f) + Q(s)(g)$.

*4: Zero bound*
For any function $f \leq 0$, $Q(s)(f) \leq 0$.

Rephrasing this, though we don’t know what the optimization-y process *is*, it’s quite plausible that it’ll fulfill the following four properties.

1: If you add a constant to the input function, you’ll get that constant added to your score.

2: If you rescale the input function, it rescales the score the optimization process attains.

3: Optimizing the sum of two functions does worse than optimizing them separately and adding your best scores together.

4: Optimizing a function that’s never positive can’t produce a positive score.

Exercise: Which of these properties does softmax break? Which of these properties does gradient ascent with infinitesimal step-size break?

Also, notice that the first two properties combined are effectively saying “if you try to optimize a utility function, the optimization process will ignore scales and shifts in that utility function”.

**Theorem 4: Crappy Optimizer Theorem** *For any selection process $s$ where $Q(s)$ fulfills the four properties above, $Q(s)(f) = \max_{\mu \in \Psi} \mathbb{E}_{\mu}[f]$ will hold for some closed convex set $\Psi$ of probability distributions. Conversely, the function $f \mapsto \max_{\mu \in \Psi} \mathbb{E}_{\mu}[f]$ for any closed convex set $\Psi$ of probability distributions will fulfill the four properties of an optimization process.*

**Informal Corollary 3:** *Any selection process $s$ where $Q(s)$ fulfills the four properties is effectively just argmax but using a restricted set of probability distributions.*
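The converse direction of the theorem can be spot-checked numerically. A minimal sketch, with a made-up restricted set (every distribution that gives the uniform distribution at least half its weight): maximizing expectation over it satisfies all four properties, and sits strictly between averaging and full argmax.

```python
import numpy as np

rng = np.random.default_rng(4)

# A closed convex set of distributions on 3 points: the convex hull of these
# corners, i.e. everything mixing uniform with weight >= 1/2.
corners = np.array([[2/3, 1/6, 1/6],
                    [1/6, 2/3, 1/6],
                    [1/6, 1/6, 2/3]])

def Q(f):
    """Argmax-over-a-restricted-set: a linear functional is maximized over a
    convex hull at one of its corners."""
    return (corners @ f).max()

for _ in range(200):
    f, g = rng.standard_normal(3), rng.standard_normal(3)
    c, a = rng.standard_normal(), rng.random()
    assert np.isclose(Q(f + c), Q(f) + c)       # 1: c-additivity
    assert np.isclose(Q(a * f), a * Q(f))       # 2: homogeneity
    assert Q(f + g) <= Q(f) + Q(g) + 1e-12      # 3: subadditivity
    assert Q(np.minimum(f, 0.0)) <= 1e-12       # 4: zero bound
    assert f.mean() - 1e-12 <= Q(f) <= f.max() + 1e-12   # between mean and max
```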

**Other Aspects of Infrafunctions**

Infrafunctions are the analogue of random variables in inframeasure theory. Here are two useful properties of them.

First off, in the classical case of functions, we can average functions together. There’s a distinguished averaging function $\Delta(X \to \mathbb{R}) \to (X \to \mathbb{R})$. Pretty obvious.

When we go to infrafunctions, this gets extended somewhat. There’s a distinguished function $\square(\mathcal{IF}(X)) \to \mathcal{IF}(X)$, where $\square X$ is the space of infradistributions on $X$, and $\mathcal{IF}(X)$ is the space of infrafunctions on $X$. If you know enough category theory, we can say that the space of infrafunctions is a $\square$-algebra, where $\square$ is the infradistribution monad. If you don’t know enough category theory, basically, there’s an infra-version of “averaging points together” and it makes all the diagrams commute really nicely.

**Proposition 3:** *The space of infrafunctions $\mathcal{IF}(X)$ is a $\square$-algebra, with the function $\alpha : \square(\mathcal{IF}(X)) \to \mathcal{IF}(X)$ being defined as $\alpha(\Theta)(\mu) := \Theta(U \mapsto U(\mu))$.*

Also, in the classical case, if you’ve got a function $g : X \to Y$, that produces a function $(Y \to \mathbb{R}) \to (X \to \mathbb{R})$ in a pretty obvious way. Just precompose. $f \mapsto f \circ g$.

A similar thing holds here. Except, in this case, instead of a function, we can generalize further to a continuous infrakernel $K : X \to \square Y$. Again, to get a function $\mathcal{IF}(Y) \to \mathcal{IF}(X)$, you just precompose. $K^*(U)(\mu) := U(K_*(\mu))$. Take the distribution $\mu$, shove it through the infrakernel $K$ to get an infradistribution on $Y$, and shove that through the infrafunction $U$.

**Proposition 4:** *All continuous infrakernels $K : X \to \square Y$ induce a function $K^* : \mathcal{IF}(Y) \to \mathcal{IF}(X)$ via $K^*(U)(\mu) := U(K_*(\mu))$*

So, given an infrakernel (and functions are a special case of this) going one way, you can transfer infrafunctions backwards from one space to the other.

The restriction to continuous infrakernels, though, actually *does* matter here. A function $g : X \to Y$ induces *two* infrakernels, one of type $X \to \square Y$ (the image), and one of type $Y \to \square X$ (the preimage). So, theoretically we could get a function of type $\mathcal{IF}(X) \to \mathcal{IF}(Y)$ by routing through the preimage function. However, since it requires continuity of the infrakernel to make things work out, you can only reverse the direction if the function mapping a point to its preimage is Hausdorff-continuous. So, for functions with Hausdorff-continuous inverses, you can flip the usual direction and go $\mathcal{IF}(X) \to \mathcal{IF}(Y)$. But this trick doesn’t work in general; only $\mathcal{IF}(Y) \to \mathcal{IF}(X)$ is valid in general.

There’s a bunch of other things you can do, like intersection of infrafunctions making a new infrafunction, and union making a new infrafunction. Really, most of the same stuff as works with infradistributions.

The field is wide-open. But it’s a single framework that can accommodate Scott’s generalized epistemic states, Scott’s geometric averaging, Knightian uncertainty, intersecting and unioning that uncertainty, averaging, quantilizers, dynamic consistency, worst-case reasoning, and approximate maximization. So it seems quite promising for future use.


Here are the most interesting things about these objects to me that I think this post does not capture.

Given a distribution over non-negative non-identically-zero infrafunctions, up to a positive scalar multiple, the pointwise geometric expectation exists, and is an infrafunction (up to a positive scalar multiple).

(I am not going to give all the math and be careful here, but hopefully this comment will provide enough of a pointer if someone wants to investigate this.)

This is a bit of a miracle. Compare this with arithmetic expectation of utility functions. This is not always well defined. For example, if you have a sequence of utility functions $U_n$, each with weight $2^{-n}$, but which alternate in which of two outcomes they prefer, and each utility function gets an internal weighting to cancel out their small weight and then some, the expected utility will not exist. There will be a series of larger and larger utility monsters canceling each other out, and the limit will not exist. You could fix this by requiring that your utility functions be bounded, as is standard for dealing with utility monsters, but it is really interesting that in the case of infrafunctions and geometric expectation, you don’t have to.

If you try to do a similar trick with infrafunctions, up to a positive scalar multiple, geometric expectation will go to infinity, but you can renormalize everything since you are only working up to a scalar multiple, to make things well defined.

We needed the geometric expectation to only be working up to a scalar multiple, and you can’t expect to get a utility function if you take a geometric expectation of utility functions (but you do get an infrafunction!).

If you start with utility functions, and then merge them geometrically, the resulting infrafunction will be maximized at the Nash bargaining solution, but the entire infrafunction can be thought of as an extended preference over lotteries of the pair of utility functions, whereas Nash bargaining only told you the maximum. In this way geometric merging of infrafunctions is starting with an input more general than the utility functions of Nash bargaining, and giving an output more structured than the output of Nash bargaining, and so can be thought of as a way of making Nash bargaining more compositional. (Since the input and output are now the same type, so you can stack them on top of each other.)

For these two reasons (utility monster resistance and extending Nash bargaining), I am very interested in the mathematical object that is non-negative non-identically-zero infrafunctions defined only up to a positive scalar multiple, and more specifically, I am interested in the set of such functions as a convex set where mixing is interpreted as pointwise geometric expectation.

I have been thinking about this same mathematical object (although with a different orientation/motivation) as where I want to go with a weaker replacement for utility functions.

I get the impression that for Diffractor/Vanessa, the heart of a concave-value-function-on-lotteries is that it represents the worst case utility over some set of possible utility functions. For me, on the other hand, a concave value function represents the capacity for compromise: if I get at least half the good when I get what I want with 50% probability, then I have the capacity to merge/compromise with others using tools like Nash bargaining.

This brings us to the same mathematical object, but it feels like I am using the definition of convex set related to “the line segment connecting any two points in the set is also in the set”, whereas Diffractor/Vanessa are using the definition of convex set related to “being an intersection of half-spaces”.

I think this (where I am more interested in merging, and Diffractor and Vanessa are more interested in guarantees, but we end up looking at the same math) is a pattern, and I think the dual definitions of convex set in part explain (or at least rhyme with) it.

note: I tagged this “Infrabayesianism” but wasn’t actually sure whether it was or not according to you.

I forget if I already mentioned this to you, but another example where you can interpret randomization as worst-case reasoning is MaxEnt RL, see this paper. (I reviewed an earlier version of this paper here (review #3).)

Can I check that I follow how you recover quantilization?

Are you evaluating distributions over actions, and caring about the worst-case expectation of that distribution?

If so, proposing a particular action is evaluated badly? (Since there’s a utility function in your set that spikes downward at that action.)

But proposing a range of actions to randomize amongst can be assessed to have decent worst-case expected utility, since particular downward spikes get smoothed over, and you can rely on your knowledge of “in-distribution” behaviour?
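A quick numerical sketch of that smoothing story (a toy construction of my own, not anything from the post): here the adversary's only power is to spike one action's utility downward by a fixed amount, so a point distribution eats the whole spike while a spread-out distribution only eats a fraction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical numbers): 100 actions with base utility U,
# and an adversary allowed to spike any single action's utility
# downward by 5 -- a crude stand-in for the convex set of utility
# functions that the infrafunction minimizes over.
n_actions = 100
U = rng.normal(0.0, 1.0, size=n_actions)
spike = 5.0

def worst_case_value(dist):
    # The adversary places the spike where this distribution puts the
    # most mass: worst case = E[U] - spike * (max probability).
    return dist @ U - spike * dist.max()

# A point mass on the argmax action eats the full spike...
point = np.zeros(n_actions)
point[U.argmax()] = 1.0

# ...while randomizing over the top 20 actions smooths it out.
top20 = np.zeros(n_actions)
top20[np.argsort(U)[-20:]] = 1.0 / 20

assert worst_case_value(top20) > worst_case_value(point)
```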

Edited to add: fwiw it seems awesome to see quantilization formalized as popping out of an adversarial robustness setup! I haven’t seen something like this before, and didn’t notice if the infrabayes tools were building to these kinds of results. I’m very much wanting to understand why this works in my own native-ontology-pieces.

If that’s correct, here are some places this conflicts with my intuition about how things should be done:

I feel awkward about the randomness being treated as essential. I’d rather be able to do something other than randomness in order to get my mild optimization, and something feels unstable/non-compositional about needing randomness in place for your evaluations… (Not that I have an alternative that springs to mind!)

I also feel like “worst case” is perhaps problematic, since it’s bringing maximization in, and you’re then needing to rely on your convex set being smooth in some sense in order to get good outcomes. If I have a distribution over potential utility functions, and quantilize for the worst 10% of possibilities, does that do the same sort of work that “worst case” is doing for mild optimization?
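One way to poke at the “worst 10%” question (a toy of my own, not an answer): sample many utility functions, each equal to the same smooth base plus a sparse downward spike, and evaluate an action distribution either by the minimum over samples or by the mean of the worst 10% of samples. The worst-case criterion robustly prefers the spread-out distribution; whether the CVaR-style criterion agrees depends on how much of the tail the spikes occupy, which this setup lets you vary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 500 sampled utility functions over 50 actions,
# each equal to a smooth base utility plus one downward spike at a
# random action.
n_utils, n_actions = 500, 50
base = rng.normal(0.0, 1.0, size=n_actions)
utils = np.tile(base, (n_utils, 1))
utils[np.arange(n_utils), rng.integers(0, n_actions, n_utils)] -= 5.0

def eval_dist(dist, mode):
    vals = utils @ dist  # expected utility under each sampled V
    if mode == "worst":
        return vals.min()
    # "cvar": mean over the worst 10% of sampled utility functions
    k = n_utils // 10
    return np.sort(vals)[:k].mean()

point = np.zeros(n_actions)
point[base.argmax()] = 1.0  # deterministic argmax action

spread = np.zeros(n_actions)
spread[np.argsort(base)[-10:]] = 0.1  # uniform over the top 10 actions

# The worst-case criterion prefers the spread-out distribution.
assert eval_dist(spread, "worst") > eval_dist(point, "worst")

# The CVaR numbers are computed but not compared: with spikes this
# sparse, the two criteria need not agree.
cvar_point, cvar_spread = eval_dist(point, "cvar"), eval_dist(spread, "cvar")
```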

For the “Crappy Optimizer Theorem”, I don’t understand why condition 4, that if f≤0 then Q(s)(f)≤0, isn’t just a tautology^{[1]}. Surely if ∀x∈X, f(x)≤c, then no matter what s:(X→R)→X is being used, since Q(s)(f):=f(s(f)), letting x=s(f) gives f(x)≤c, and so Q(s)(f)=f(s(f))=f(x)≤c.

I guess if the 4 conditions are seen as conditions on a function F:(X→R)→R (where they are written for F=Q(s)), then it is no longer automatic, and it is only when specifying that F=Q(s) for some s that condition 4 becomes automatic?
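For what it’s worth, the “automatic” claim is easy to machine-check (a sketch under my reading of condition 4, with Q(s)(f):=f(s(f))): whatever selection rule s is used, Q(s)(f) is one of f’s values, so f≤0 pointwise forces Q(s)(f)≤0.

```python
import random

random.seed(0)
X = list(range(10))

def Q(s):
    # Q(s)(f) := f(s(f)), where s picks a point of X given f.
    return lambda f: f(s(f))

# Arbitrary selection rules, including deliberately bad ones.
selectors = [
    lambda f: 0,                 # constant
    lambda f: max(X, key=f),     # genuine argmax
    lambda f: min(X, key=f),     # worst possible pick
    lambda f: random.choice(X),  # random pick
]

# Condition 4: f <= 0 pointwise implies Q(s)(f) <= 0.  Automatic,
# since Q(s)(f) = f(s(f)) is just one of f's values.
for s in selectors:
    for _ in range(100):
        c = tuple(random.uniform(-3, 0) for _ in X)
        f = lambda x, c=c: c[x]
        assert Q(s)(f) <= 0
```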

______________

[start of section spitballing stuff based on the crappy optimizer theorem]

Spitball 1:

What if instead of s:(X→R)→X, we had s:(X→R)→ΔX? Would we still get the results of the crappy optimizer theorem?

If s(f) is now a distribution over X, then I suppose instead of writing Q(s)(f)=f(s(f)) we should write Q(s)(f)=s(f)(f) (i.e., the expectation of f under the distribution s(f)), and in this case the first two and the fourth conditions seem just as reasonable. The third condition… seems like it should also be satisfied?
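To make that reading concrete (my notation; the post’s conditions 1–3 aren’t restated here, so this only checks the stated condition 4): with s:(X→R)→ΔX, read Q(s)(f)=s(f)(f) as the expectation of f under s(f). Condition 4 still goes through, since an average of nonpositive values is nonpositive.

```python
import math
import random

random.seed(0)
X = list(range(10))

def Q(s):
    # Q(s)(f) = s(f)(f): the expectation of f under the distribution
    # s(f), the randomized analogue of f(s(f)).
    def Qs(f):
        dist = s(f)  # list of probabilities over X
        return sum(p * f(x) for x, p in zip(X, dist))
    return Qs

def softmax_selector(f):
    # A hypothetical randomized selector: softmax over f's values.
    ws = [math.exp(f(x)) for x in X]
    z = sum(ws)
    return [w / z for w in ws]

# Condition 4 survives randomization: f <= 0 pointwise implies the
# expectation Q(s)(f) <= 0.
for _ in range(100):
    c = tuple(random.uniform(-3, 0) for _ in X)
    f = lambda x, c=c: c[x]
    assert Q(softmax_selector)(f) <= 0
```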

Spitball 2:

While I would expect that the 4 conditions might not be *exactly* satisfied by, e.g., gradient descent, I would kind of expect basically any reasonable deterministic optimization process to at least “almost” satisfy them (like, maybe gradient-descent-in-practice would fail condition 1 due to floating point errors, but not too badly in reasonable cases).

Do you think that a modification of this theorem for functions Q(s) which only approximately satisfy conditions 1-3 would be reasonably achievable?

______________

I might be stretching the meaning of “tautology” here. I mean something provable in our usual background mathematics, which therefore, when added as an additional hypothesis to a theorem, doesn’t let us show anything that we couldn’t show without it being an explicit hypothesis.

I really like infrafunctions as a way of describing the goals of mild optimizers. But I don’t think you’ve described the correct reasons why infrafunctions help with reflective stability. The main reason is that you’ve hidden most of the difficulty of reflective stability in the ∫|U−V|dν≤ϵ bound.

My core argument is that a normal quantilizer is reflectively stable^{[1]} if you have such a bound. In the single-action setting, where it chooses a policy once at the beginning and then follows that policy, it must be reflectively stable, because if the chosen policy constructs another optimizer that leads to low true utility, then that policy must have very low base probability (or the bound can’t have been true). In a multiple-action setting, we can sample each action conditional on the previous actions, according to the quantilizer distribution, and this will be reflectively stable in the same way (given the bound).

Adding in observations doesn’t change anything here if we treat U and V as expectations over environments.

The way you’ve described reflective stability in the dynamic consistency section is an incentive to keep the same utility infrafunction no matter what observations are made. I don’t see how this is necessary or even strongly related to reflective stability. Can’t we have a reflectively stable CDT agent?

**Two core difficulties of reflective stability**

I think the two core difficulties of reflective stability are 1) getting the ∫|U−V|dν≤ϵ bound (or similar) and 2) describing an algorithm that lazily does a ~minimal amount of computation for choosing the next few actions. I expect realistic agents need 2 for efficiency. I think utility infrafunctions do help with both of these, to some extent.

The key difficulty of getting a tight ∫|U−V|dν≤ϵ bound with normal quantilizers is that simple priors over policies don’t clearly distinguish policies that create optimizers. So there’s always a region at the top where “create an optimizer” makes up most of the mass. My best guess for a workaround for this is to draw simple conservative OOD boundaries in state-space and policy-space (the base distribution is usually just over policy space, and is predefined). When a boundary is crossed, it lowers the lower bound on the utility (gives Murphy more power). These boundaries need to be simple so that they can be learned from relatively few (mostly in-distribution) examples, or maybe from abstract descriptions. Being simple and conservative makes them more robust to adversarial pressure.

Your utility infrafunction is a nice way to represent lots of simple out-of-distribution boundaries in policy-space and state-space. This is much nicer than storing this information in the base distribution of a quantilizer, and it also allows us to modulate how much optimization pressure can be applied to different regions of state or policy-space.

With 2, an infrafunction allows on-the-fly calculation that the consequences of creating a particular optimizer are bad. It can do this as long as the infrafunction treats the agent’s own actions and the actions of child-agents as similar, or if it mostly relies on OOD states as the signal that the infrafunction should be uncertain (have lots of low spikes), or some combination of these. Since the max-min calculation is the motivation for randomizing in the first place, an agent that uses this will create other agents that randomize in the same way. If the utility infrafunction is only defined over policies, then it doesn’t really give us an efficiency advantage because we already had to calculate the consequences of most policies when we proved the bound.

One disadvantage, which I think can’t be avoided, is that an infrafunction over histories is incentivized to stop humans from doing actions that lead to out-of-distribution worlds, whereas an infrafunction over policies is not (to the extent that stopping humans doesn’t itself cross boundaries). This seems necessary because it needs to consider the consequences of the actions of optimizers it creates, and this generalizes easily to all consequences since it needs to be robust.

Where I’m defining reflective stability as: If you have an anti-Goodhart modification in your decision process (e.g. randomization), ~never follow a plan that indirectly avoids the anti-Goodhart modification (e.g. making a non-randomized optimizer).

The key difficulty here being that the default pathway for achieving a difficult task involves creating new optimization procedures, and by default these won’t have the same anti-Goodhart properties as the original.

I thought CDT was considered not reflectively-consistent because it fails Newcomb’s problem?

(Well, not if you define reflective stability as meaning preservation of anti-Goodhart features, but, CDT doesn’t have an anti-Goodhart feature (compared to some base thing) to preserve, so I assume you meant something a little broader?)

Like, isn’t it true that a CDT agent who anticipates being in Newcomb-like scenarios would, given the opportunity to do so, modify itself to be not a CDT agent? (Well, assuming that the Newcomb-like scenarios are of the form “at some point in the future, you will be measured, and based on this measurement, your future response will be predicted, and based on this the boxes will be filled”)

My understanding of reflective stability was “the agent would not want to modify its method of reasoning”. (E.g., a person with an addiction is not reflectively stable, because they want the thing (and pursue the thing), but would rather not want (or pursue) the thing.)

The idea being that, any ideal way of reasoning, should be reflectively stable.

And, I thought that what was being described in the part of this article about recovering quantilizers was not saying “here’s how you can use this framework to make quantilizers better”, so much as “quantilizers fit within this framework, and can be described within it, where the infrafunction that produces quantilizer-behavior is this one: [the (convex) set of utility functions which differ (in absolute value) from the given one by, in expectation under the reference policy, at most epsilon]”.

So, I think the idea is that, a quantilizer for a given utility function U and reference distribution ν is, in effect, optimizing for an infrafunction that is/corresponds-to the set of utility functions V satisfying the bound in question,

and, therefore, any quantilizer, in a sense, is as if it “has this bound” (or, “believes this bound”)

And that therefore, any quantilizer should -

- wait.. that doesn’t seem right..? I was going to say that any quantilizer should therefore be reflectively stable, but that seems like it must be wrong? What if the reference distribution includes always taking actions to modify oneself in a way that would result in not being a quantilizer? uhhhhhh

Ah, hm, it seems to me like the way I was imagining the distribution ν and the context in which you were considering it, are rather different. I was thinking of ν as being an accurate distribution of behaviors of some known-to-be-acceptably-safe agent, whereas it seems like you were considering it as having a much larger support, being much more spread out in what behaviors it has as comparably likely to other behaviors, with things being more ruled-out rather than ruled-in ?
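For what it’s worth, the worst case over that set has a simple closed form in a discrete toy model (my reconstruction, assuming V is unconstrained apart from the weighted L1 bound): the adversary spends its entire budget lowering V at the action where q(x)/ν(x) is largest, so min over V of E_q[V] equals E_q[U] − ϵ·max_x q(x)/ν(x). This also makes vivid why actions with tiny ν-mass are dangerous: there the adversary gets a huge spike per unit of budget.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical discrete setup: base utility U, reference distribution
# nu over n actions, and slack eps.  The infrafunction is the set
#   { V : sum_x nu(x) * |U(x) - V(x)| <= eps },
# evaluated at an action distribution q via min over V of E_{x~q}[V(x)].
n = 20
U = rng.normal(size=n)
nu = rng.dirichlet(np.ones(n))
eps = 0.5

def worst_case(q):
    # Closed form: the adversary spends the whole L1 budget where
    # q(x)/nu(x) is largest.
    return q @ U - eps * (q / nu).max()

# Sanity check: every feasible single-spike V stays above the closed form.
q = rng.dirichlet(np.ones(n))
for x in range(n):
    V = U.copy()
    V[x] -= eps / nu[x]  # spend the entire budget at one action
    assert nu @ np.abs(U - V) <= eps + 1e-9  # V is in the set
    assert q @ V >= worst_case(q) - 1e-9     # and no worse than the bound

# Taking q = nu gives the clean special case E_nu[U] - eps.
assert abs(worst_case(nu) - (nu @ U - eps)) < 1e-9
```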

Good point on CDT, I forgot about this. I was using a more specific version of reflective stability.

> - wait.. that doesn’t seem right..?

Yeah this is also my reaction. Assuming that bound seems wrong.

I think there is a problem with thinking of ν as a known-to-be-acceptably-safe agent, because how can you get this information in the first place, without running that agent in the world? To construct a useful estimate of the expected value of the “safe” agent, you’d have to run it lots of times, necessarily sampling from its most dangerous behaviours.

Unless there is some other non-empirical way of knowing an agent is safe?

Yeah I was thinking of having large support of the base distribution. If you just rule-in behaviours, this seems like it’d restrict capabilities too much.

Well, I was kinda thinking of ν as being, say, a distribution of human behaviors in a certain context (as filtered through a particular user interface), though I guess that way of doing it would only make sense within limited contexts, not general contexts where it matters whether the agent is physically a human or something else. And in this sort of situation, the action of “modify yourself to no longer be a quantilizer” would not be in the human distribution, because the actions to do that are not applicable to humans (as humans are, presumably, not quantilizers, and the types of self-modification actions that would be available are not the same). Though, “create a successor agent” could still be in the human distribution.

Of course, one doesn’t have practical access to “the true probability distribution of human behaviors in context M”, so I guess I was imagining a trained approximation to this distribution.

Hm, well, suppose that the distribution over human-like behaviors includes both making an agent which is a quantilizer and making one which isn’t, both of equal probability. Hm. I don’t see why a general quantilizer in this case would pick the quantilizer over the plain optimizer, as the utility...

Hm...

I get the idea that the “quantilizers correspond to optimizing an infra-function of form [...]” thing is maybe dealing with a distribution over a single act?

Or.. if we have a utility function over histories until the end of the episode, then, if one has a model of how the environment will be and how one is likely to act in all future steps, given each of one’s potential actions in the current step, one gets an expected utility conditioned on each of the potential actions in the current step, and this works as a utility function over actions for the current step,

and if one acts as a quantilizer over that, each step.. does that give the same behavior as an agent optimizing an infra-function defined using the condition with the L1 norm described in the post, in terms of the utility function over histories for an entire episode, and reference distributions for the whole episode?

argh, seems difficult...