I just became aware of the good regulator theorem through John Wentworth’s post.
Then I tried to read the paper he talked about, Every Good Regulator Of A System Must Be A Model Of That System. This is among the worst papers I’ve ever skimmed. It’s barely comprehensible. Reading this paper is not like reading a regular text. I had to try to read their intentions instead.

This post is my attempt to make sense of the setup and the results of the paper. Summed up, my conclusions are:

Their main theorem is correct but trivial and not very interesting. (It can be strengthened too; see the proposition below.) Their proof is a mess I haven’t bothered looking at; it’s way too complex.

Their interesting conclusion, that an optimal action can only be made if you have a model of the system (i.e., everything outside your control), is false.

Some thoughts about terminology:

Its definition of “most simple” probably doesn’t make sense.

The definition of “isomorphisms” at least makes sense, but shouldn’t have been called “isomorphism”.

## Setup of the paper

I won’t follow the notation or terminology of the paper, as it can be replaced with more familiar notation from decision theory.

Define the sets (the paper’s name in parentheses):

Ω (D): the domain of inputs, which can be thought of as a probability space,
X (S): everything outside our control,
A (R): the set of actions we can take,
Y (Z): the final outcome, combining X and a, defined below.

Define the functions:

X (ϕ): Ω → X, a random variable we don’t control,
a (ρ): Ω → A, our action, chosen by us,
ψ (ψ): X × A → Y, the outcome function.
For each uncontrollable outcome x we have a non-empty set G(x) describing the good outcomes. Two examples are:

Goal sets. Let G ⊆ Y be a goal set, a set of outcomes that are good independently of the value of x. Assume that for every x there is an action a with ψ(x, a) ∈ G. Then G(x) = {ψ(x, a) ∣ a ∈ A, ψ(x, a) ∈ G} is non-empty for all x. This is what they do in the beginning of their paper.

Utility maximization. Let u be a utility function and assume that max_{a∈A} u(ψ(x, a)) exists for all x. Define Gu(x) = {ψ(x, a) ∣ a ∈ argmax_{a∈A} u(ψ(x, a))}. In the paper they talk about minimizing entropy specifically, i.e., G(x) = {ψ(x, a) ∣ a ∈ argmin_{a∈A} Entropy(ψ(x, a))}.

Note: The authors don’t use G(x); they only use the entropy.

Our goal is to find an optimal action function a that satisfies ψ(X(ω), a(ω)) ∈ G(X(ω)) for all ω. In other words, we want to make sure that every outcome is good. This is the regulation problem.
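To make the setup concrete, here is a tiny brute-force sketch of the regulation problem in Python. All concrete names and values (the weather/umbrella story) are my own illustration, not from the paper.

```python
from itertools import product

Omega = [0, 1, 2]              # the domain of inputs
A = ["umbrella", "none"]       # the actions we can take

def X_var(omega):              # the random variable X: Omega -> X
    return "rain" if omega == 2 else "sun"

def psi(x, a):                 # outcome function psi: X x A -> Y
    return "wet" if (x == "rain" and a == "none") else "dry"

G = {"dry"}                    # goal set: outcomes that count as good

def G_of(x):                   # good outcomes achievable given x
    return {psi(x, a) for a in A} & G

# Brute-force search over all action functions a: Omega -> A for those
# satisfying psi(X(omega), a(omega)) in G(X(omega)) for every omega.
solutions = []
for choice in product(A, repeat=len(Omega)):
    a_fn = dict(zip(Omega, choice))
    if all(psi(X_var(w), a_fn[w]) in G_of(X_var(w)) for w in Omega):
        solutions.append(a_fn)

print(len(solutions))  # 4: the action only matters when it rains
```

Note that there are several solutions: when the input can’t make us miss the goal, any action qualifies, which is exactly why choice functions show up later.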

The regulation problem is equivalent to a utility-maximization problem with no randomness. Let u: Y → R+ be an arbitrary utility function. An optimal action a(ω) for the regulation problem with goal sets Gu satisfies

ψ(X(ω), a(ω)) ∈ Gu(X(ω)) = {ψ(X(ω), a) ∣ a ∈ argmax_{a∈A} u(ψ(X(ω), a))},

but that is equivalent to the optimal action as defined by ordinary decision theory, a(ω) ∈ argmax_{a∈A} u(ψ(X(ω), a)). The other direction is similar.
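The equivalence is easy to check on a toy example: take u to be the indicator of a goal set, and the good actions of the regulation problem coincide with the decision-theoretically optimal ones. A sketch (the weather/umbrella names are my own illustration):

```python
# Toy check of the regulation <-> utility-maximization equivalence.
X_set = ["sun", "rain"]
A = ["umbrella", "none"]

def psi(x, a):
    return "wet" if (x == "rain" and a == "none") else "dry"

G = {"dry"}
def u(y):                      # utility: 1 on the goal set, 0 elsewhere
    return 1.0 if y in G else 0.0

for x in X_set:
    good = {a for a in A if psi(x, a) in G}           # regulation problem
    best = max(u(psi(x, a)) for a in A)
    argmax = {a for a in A if u(psi(x, a)) == best}   # decision theory
    assert good == argmax                             # same action sets
```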

## The theorem

The point of the paper is to prove that any simple action function a has to be isomorphic to the mapping X:Ω→X. Their main theorem states

Theorem: The simplest optimal regulator R of a reguland S produces events R which are related to the events S by a mapping h:S→R.

In my notation this is

Theorem: The “simplest” optimal action function a can be written as a(ω)=h(X(ω)) for some function h.

In the abstract they claim that “making a model is necessary,” where “making a model” presumably means that you have to reconstruct X(ω) in order to calculate the optimal action a(ω). This conclusion is false. I guess they made the mistake due to their misapplication of the word isomorphism.

Their notion of isomorphism (used elsewhere in the paper, including in the abstract, but not in the main theorem) is quite straightforward, but not similar to any other use of the word that I’ve seen. They say (equation 8) that two functions f, g are isomorphic to one another if there is any h so that f(x) = h(g(x)). This is not even close to the spirit of isomorphism, as the fundamental property of isomorphisms is to preserve structure perfectly. For instance, isomorphism should be a symmetric relation, i.e., if A ∼ B then B ∼ A. This is obviously not the case here, as it’s easy to see that the constant function is “isomorphic” to any function, while only constant functions are “isomorphic” to a constant function. I could agree to f and g being homomorphic, as homomorphisms do not require invertibility, but isomorphism is simply the wrong word. (The point here is not to argue over definitions, just that you shouldn’t use words in a way that confuses yourself and other people.)

That said, their notion of isomorphism isn’t useless. It allows you to calculate the value of f(x) from the value of g(x), and that is a nice property. But it doesn’t tell you that you need g(x) to calculate f(x), as claimed in the abstract. If we were talking about real isomorphisms, on the other hand, you would always be able to recover g(x) from f(x) by the inverse mapping g(x) = h⁻¹(f(x)), and I think that’s closer to what they actually wanted to show.
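The constant-function example can be checked directly. A quick sketch (the particular functions are arbitrary illustrations):

```python
# Their "isomorphism" (equation 8): f ~ g iff f(x) = h(g(x)) for some h.
# A constant f is "isomorphic" to every g via a constant h, so f ~ g tells
# you nothing about g, and g cannot be recovered from f.
g1 = lambda x: x * x
g2 = lambda x: x + 7
f = lambda x: 42        # constant function
h = lambda y: 42        # the same constant h works for any g

assert all(f(x) == h(g1(x)) == h(g2(x)) for x in range(10))
```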

I haven’t made much of an effort to understand their usage of the word “simple”. But it turns out it doesn’t matter, because their result holds for all action functions a, not only the “simple” ones.

What is the spirit of their claims? I think the point is that you should expect X(ω) to be either

(a) directly used in the computation of a, or

(b) recoverable from a, at least provided we know ψ.

Both claims are easy to disprove: you only need the outcome ψ(x, a) to be independent of x. Then we don’t need to use X(ω) to compute an optimal action, and we can’t recover X(ω) from a if the space X contains more than one element. I care about claim (b) because it’s one of the few ways to formalize that a “uses” or “knows” X. Recall that a is just a function, a collection of ordered pairs (ω, a(ω)). This function doesn’t know anything. (I think this might be relevant for the ELK problem too. A function doesn’t have any latent knowledge. It’s just a function.)
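The counterexample fits in a few lines. A sketch in Python, where the outcome ignores x entirely (all concrete names are mine):

```python
# When the outcome ignores x, the optimal action neither uses X(omega)
# nor lets us recover it.
Omega = [0, 1]
A = ["left", "right"]

def X_var(omega):                 # X contains two elements
    return ["sun", "rain"][omega]

def psi(x, a):                    # outcome is independent of x
    return "good" if a == "left" else "bad"

G = {"good"}

def a_opt(omega):                 # an optimal action function: constant
    return "left"

# a_opt solves the regulation problem for every omega ...
assert all(psi(X_var(w), a_opt(w)) in G for w in Omega)
# ... yet its values cannot distinguish X_var(0) from X_var(1):
assert a_opt(0) == a_opt(1) and X_var(0) != X_var(1)
```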

The following proposition is, as far as I understand their paper, an extension of their main theorem. There they claim there is a mapping h: X → A that produces the optimal action in the sense that a(ω) = h(X(ω)). This is true, and in fact an even stronger statement is true. But it doesn’t mean what they think it means.

## Proposition

Let f_x, x ∈ X, be a parameterized family of choice functions on A, and define h_f: X → A by h_f(x) = f_x({a ∣ ψ(x, a) ∈ G(x)}). Then

(a) the function a(ω) = h_f(X(ω)) is an optimal action function (a solution to the regulation problem);

(b) if a is an optimal action function (a solution to the regulation problem), there is a family of choice functions f_x so that a(ω) = h_f(X(ω));

(c) even if we know ψ and one of the families of choice functions f_x, x ∈ X, with a(ω) = h_f(X(ω)), we will not always be able to reconstruct X(ω).

## Proof

The first two claims are trivial. The third was explained above. The point of the parameterized choice functions is to make it possible to construct functions that choose any element from {a∣ψ(x,a)∈G(x)}. There are probably more elegant ways to formulate it.
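Parts (a) and (b) can be checked by brute force on a small instance. A sketch, with X taken injective for simplicity (all concrete names are mine):

```python
# Optimal action functions correspond to choices from the good-action
# sets {a | psi(x, a) in G(x)}, one choice per x.
from itertools import product

Omega = [0, 1]
X_set = ["sun", "rain"]
A = ["umbrella", "none"]

def X_var(omega):
    return X_set[omega]           # injective for simplicity

def psi(x, a):
    return "wet" if (x == "rain" and a == "none") else "dry"

G = {"dry"}
good = {x: [a for a in A if psi(x, a) in G] for x in X_set}

# (a) every family of choices h(x) from good[x] gives a solution a(w) = h(X(w))
hs = [dict(zip(X_set, pick)) for pick in product(*(good[x] for x in X_set))]
for h in hs:
    assert all(psi(X_var(w), h[X_var(w)]) in G for w in Omega)

# (b) conversely, every solution arises from some such h
all_fns = [dict(zip(Omega, c)) for c in product(A, repeat=len(Omega))]
solutions = [a for a in all_fns
             if all(psi(X_var(w), a[w]) in G for w in Omega)]
for a in solutions:
    assert any(all(a[w] == h[X_var(w)] for w in Omega) for h in hs)
```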

## Rough explanation

The regulation problem is equivalent to a utility-maximization problem as already explained. A function is an optimal action function iff it satisfies
a(ω) ∈ argmax_{a∈A} u(ψ(X(ω), a)). Here argmax_{a∈A} u(ψ(X(ω), a)) can be a set with more than one element; that’s why I use the ∈ symbol. There are potentially many optimal action functions. For instance, if |Ω| = 10 and each set argmax_{a∈A} u(ψ(X(ω), a)) contains two elements, there are 2¹⁰ = 1024 optimal action functions. This should explain the role of the choice functions. You can see from the defining property of a(ω) that the set of optimal actions at ω depends on ω only through X(ω); this explains the existence of the h functions.
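The counting argument can be checked directly. A sketch where every ω admits exactly two optimal actions (the setup is my own, not from the paper):

```python
from itertools import product

# If |Omega| = 10 and exactly two actions are optimal at each omega,
# the optimal action functions are exactly the elements of a 10-fold
# product of two-element sets: 2**10 of them.
Omega = range(10)
optimal_sets = [("a1", "a2")] * len(Omega)   # two optimal actions per omega

n_enumerated = sum(1 for _ in product(*optimal_sets))
assert n_enumerated == 2 ** 10               # 1024
```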
