Proofs of Theorems 2 and 3

Theorem 2: Acausal Commutative Square: The following diagram commutes for any belief function . Any infradistribution where also makes this diagram commute and induces a belief function.

Here’s how the proof proceeds. In order to show that the full diagram commutes, we first select properties on the four corners such that for each morphism, if the source fulfills the relevant property, then the target fulfills the relevant property. Obviously, the property for the bottom-right corner must be “is a belief function”. There are eight phases for this, some of which are quite easy, but the two most difficult are at the start, corresponding to the two morphisms on the bottom side, because we must exhaustively verify the belief function conditions and infradistribution conditions. As usual, lower-semicontinuity is the trickiest part to show. Then, since we have four sides of the square to show are isomorphisms, we have another eight phases where we verify going forward to a different corner and back is identity. Finally, there’s an easy step where we rule out that going around in a loop produces a nontrivial automorphism, by showing that both paths from first-person static to third-person dynamic are equal.
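Since the diagram itself did not survive formatting, here is a sketch of the square under discussion, with corner names taken from the text (the placement of corners on the page is an assumption; each edge stands for the pair of translations verified below, and the two bottom-side morphisms referenced above are the third-person static / first-person static pair):

```latex
\begin{array}{ccc}
\text{third-person static} & \longleftrightarrow & \text{third-person dynamic} \\
\big\updownarrow & & \big\updownarrow \\
\text{first-person static} & \longleftrightarrow & \text{first-person dynamic}
\end{array}
```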

First, let’s define our properties.

The property for third-person static is , and is an infradistribution.

The property for third-person dynamic is , and , and is an infradistribution.

The property for first-person static is that is a belief function.

The property for first-person dynamic is , and when , and when , and is an infradistribution.

T2.1 Our first step is showing that all 8 morphisms induce the relevant property in the target if the source has the property.

T2.1.1 3-static to 1-static. Our definition is

and we must check the five conditions of a belief function for , assuming is an infradistribution on and it has the property

T2.1.1.1 First, bounded Lipschitz constant. Our proof goal is

Let be the Lipschitz constant of , which is finite because is an infradistribution. Then let be arbitrary (they must be bounded-continuous, though). We have

and then unpack the projection to get

and unpack the update to get

and then by Lipschitzness for ,

Now, when , the two functions are equal, so we have

We’ll leave lower-semicontinuity for the end; it’s the hard part.

T2.1.1.2 Now, normalization, ie To do this, we have

Then this can be rewritten as

and then as a semidirect product.

and then, from our defining property for , and normalization for since it’s an infradistribution, we have

The same proof works for 0 as well.

T2.1.1.3 Now, sensible supports. Pick a where and are identical on histories compatible with . We want to show

To start,

and undoing the projection and update, we have

Remember that actually, is supported over , ie, the subset of where the destiny is compatible with the policy, so when , and so and behave identically. Making this substitution in, we have

which then packs up as and we’re done.

T2.1.1.4 Now for agreement on max value. For the type signature on inframeasures, we want to show

Let be arbitrary. We have

and then the projection and update unpack to

and then, working in reverse from there, we get and we’re done. For the type signature, we want to show

Let be arbitrary. We have

which then unpacks as

and then, working in reverse, we get and we’re done.

T2.1.1.5 Time to revisit lower-semicontinuity. We want to show that

This is going to take some rather intricate work with Propositions 2 and 3, and Theorem 1 from Inframeasures and Domain Theory. Namely, the supremum of continuous functions is lower-semicontinuous, any lower-semicontinuous lower-bounded function has an ascending sequence of lower-bounded continuous functions which limits to it pointwise, and for any ascending sequence which limits to a lower-semicontinuous function pointwise, you can shuffle the limit outside of an inframeasure. Fix some continuous bounded , either in or in , and a sequence of policies which converges to .
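In symbols (notation ours, since the post's own symbols were stripped in formatting; $\theta$ an inframeasure, $f, f_n$ functions on its domain), the three tools just listed read:

```latex
\text{(1)}\quad f = \sup_n f_n,\ \ f_n \text{ continuous}
  \;\Longrightarrow\; f \text{ lower-semicontinuous}

\text{(2)}\quad f \text{ lower-semicontinuous, bounded below}
  \;\Longrightarrow\; \exists\, f_n \text{ continuous, bounded, } f_n \uparrow f \text{ pointwise}

\text{(3)}\quad f_n \uparrow f \text{ pointwise, } f \text{ lower-semicontinuous}
  \;\Longrightarrow\; \lim_{n\to\infty} \theta(f_n) = \theta(f)
```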

Our first task is to establish that the subset of where or or … or is closed. This can be viewed as the following subset of :

is compact, and is compact as well, since it limits to a point and the limit point is included. So that product is a compact set, and must be closed. The latter set is itself. And so, since this set is the intersection of a closed set with , it’s closed in with the subspace topology.

Now, define the functions as follows. If has or or or… or , then return . Otherwise, return 1 (or for the other type signature).

is the function where, if , return , otherwise return 1 (or ).

Our first task is to show that all the , and are lower-semicontinuous. This is pretty easy to do. They are 1 (or ) outside of a closed set, and less than that inside the closed set.

Lower-semicontinuity happens because we have one of three cases holding. In the first case, the limit point is outside of the closed set (in the complement open set). Then, at some finite stage and forever afterwards, the sequence is outside closed set of interest, and the sequence becomes just a constant value of 1 (or ), and so liminf equals the value of the limit.

In the second case, the limit point is inside the closed set, and there’s a subsequence of policy-tagged destinies which remains within the closed set of interest. The liminf of the original sequence of policy-tagged destinies evaluated by equals the liminf of the subsequence which stays in the “safe” closed set, because all other points in the sequence get maximal value according to (because they’re outside the closed set), so they’re irrelevant for evaluating the liminf. Then, inside the closed set, acts like , which is continuous, so the liminf of inside the closed set equals the limit of inside the closed set, which equals applied to the limit point (in the closed set).

In the third case, the limit point is inside the closed set, but there’s no subsequence of policy-tagged destinies which remains within the closed set of interest. So, after some finite stage, the value of is 1 or infinity, so the liminf is high and the value of the limit point is low, and again we get lower-semicontinuity.
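All three cases are instances of the standard sequential criterion for lower-semicontinuity (notation ours): a function $g$ is lower-semicontinuous at a point $x$ iff

```latex
\liminf_{k \to \infty} g(x_k) \;\geq\; g(x)
\qquad \text{for every sequence } x_k \to x.
```

Cases one and two establish equality of the liminf with the limit value; case three makes the liminf maximal, which trivially dominates the value at the limit point.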

Finally, we need to show that is an ascending sequence of functions which has as the pointwise limit. The “ascending sequence” part is easy, because as m increases, maps a smaller and smaller set (fewer policies) to the value reports, and more points to 1 (or infinity).

If has , then regardless of n, , so that works out. If has , then the sequence can’t hit infinitely often, otherwise , so after some finite n, drops out of the closed set associated with , and we have .

Just one more piece of setup. For each , since it’s lower-semicontinuous, we can apply our Proposition 3 from the domain theory post to construct an ascending sequence of bounded continuous functions where pointwise.

NOW we can begin showing lower-semicontinuity of .

by how was defined. And then we unpack the projection and update to get

First, we observe that the function inside is greater than , because copies when or or … and is maximal otherwise, while the interior function just copies when and is maximal otherwise. So, by monotonicity, we can go

And then, we can use monotonicity again to get

Wait, what’s this? Well, if , then , because was constructed to be an ascending sequence of functions (in n) limiting to . And is an ascending sequence of functions (in m), so it’s below . Thus, the function lies below , and we just applied monotonicity.

Our next claim is that is an ascending sequence of continuous functions which limits pointwise to . Here’s how to show it. The functions are continuous because they’re all the supremum of finitely many continuous functions . They ascend because

The first inequality comes from considering one less function in the sup. The second inequality is because, for all m, is an ascending sequence of functions in n.

For showing that the sequence limits (in n) pointwise to , we fix an arbitrary and do

The inequality is because eventually n goes past the fixed number , and then we’re selecting a single function out of the supremum of functions. The equality is because limits pointwise in to . Now, with this, we can go:

Let’s break that down. The first equality was because we showed that is the pointwise limit of the . The second equality is because makes an ascending sequence of functions, so the supremum of the functions would just be itself. The first inequality is because , it’s one of the approximating continuous functions. The second inequality was the thing we showed earlier, about how the limit exceeds regardless of . Finally, we use again that is the pointwise limit of the .
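The construction in this stretch of the proof is a standard diagonal argument; in notation we introduce here (the $f_{m,n}$ are the continuous approximators, ascending in $n$ to the lower-semicontinuous $f_m$), it can be summarized as:

```latex
g_m \;:=\; \sup_{m' \le m} f_{m',\,m}
\quad\Longrightarrow\quad
g_m \text{ continuous},
\qquad g_m \le g_{m+1},
\qquad \sup_m g_m = \sup_m f_m \text{ pointwise}.
```

The ascent of the $g_m$ uses both that the sup is over more functions as $m$ grows and that each $f_{m',n}$ ascends in $n$; the pointwise-limit claim is the two-sided sandwich argued above.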

We just showed a quantity is greater than or equal to itself, so all these inequalities must be equalities, and we have

So, the ascending sequence of continuous functions limits pointwise to . Now that we have that, let’s recap what we’ve got so far. We have

Since is an ascending sequence of continuous functions in , then by monotonicity of , that sequence must be increasing, so we have

and then, we can apply the monotone convergence theorem for infradistributions, that a sequence of continuous functions ascending pointwise to a lower-semicontinuous function has the limit of the infradistribution expectations equaling the expectation of the lower-semicontinuous supremum, so we have

and then we pack this back up as an update and projection

and back up to get

and lower-semicontinuity of has been shown. So translating over an infradistribution makes a belief function.

T2.1.2 1-static to 3-static.

Now, we’ll show the five infradistribution conditions on , along with the sixth condition on that it equals .

First, to show it’s even well-defined, we have

That inner function is lower-semicontinuous in , because, starting with

then we have that, because is continuous and is compact, is uniformly continuous. So, as n increases, the function uniformly limits to . Since all the have a uniform upper bound on the Lipschitz constant (from being a belief function) they converge to agreeing that has similar value as , so we get

and then, by lower-semicontinuity for ,

So, since the inner function is lower-semicontinuous, it can be evaluated.

T2.1.2.1 Now for the condition that

We start with and unpack the definition of , to get

And unpack the definition of semidirect product and what means, to get

This step happens because, if , then will be assessing the value of the constant-1 function (or constant-infinity function), and will return a maximal value (by condition 5 on belief functions, that they all agree on the value of 1 (or )). So the only way to minimize that line via a choice of is to make it equal to , as that means that we’re evaluating instead, which may be less than the maximal value.
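Schematically, the minimization step can be written as follows (notation ours: $\Theta$ the belief function, $\pi$ the tagged policy, $f \le 1$ the function being evaluated, and $\Theta(\pi')(1)$ maximal for every $\pi'$ by belief-function condition 5):

```latex
\inf_{\pi'} \Theta(\pi')\!\left( h \mapsto
\begin{cases} f(h) & \pi' = \pi \\ 1 & \pi' \neq \pi \end{cases}
\right)
\;=\; \Theta(\pi)(f),
```

since every mismatched $\pi'$ contributes the maximal value $\Theta(\pi')(1)$, and $\Theta(\pi)(f) \le \Theta(\pi)(1)$ by monotonicity.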

Now we pack up what means, and the semidirect product, to get

Pack up the definition of to get

Pack up the definition of “1-update on ” to get

Pack up how projection mappings work, to get:

Pack up what means and the semidirect product to get

So we have

as desired, since was arbitrary.

T2.1.2.2 Now to verify the infradistribution conditions on . First, monotonicity. Let

And unpack the definition of semidirect product and what means, to get

Then, by monotonicity for all the , we have

which packs back up in reverse as .

T2.1.2.3 Now for concavity. Start with and unpack in the usual way to get

Then, by concavity for the , as they’re inframeasures, we have

and distribute the inf

Pulling the constants out, we have

and packing back up in the usual way, we have

Concavity is shown.

T2.1.2.4 For Lipschitzness, we start with and then partially unpack in the usual way to get

And, since is a sharp infradistribution, it has a Lipschitz constant of 1, so we have

Since all the have a uniform bound on their Lipschitz constants (one of the belief function conditions), we then have

which rearranges as

and we’re done, we got a Lipschitz constant.

T2.1.2.5 The CAS property for is trivial because is a closed subset of which is a compact space, and so you can just take all of as a compact support.

T2.1.2.6 That just leaves normalization. We have

by unpacking and applying normalization for a belief function.

So now, is an infradistribution.
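For reference, the conditions just verified (in the order T2.1.2.2 through T2.1.2.6), stated schematically for an infradistribution $\theta$ with Lipschitz constant $\lambda$ over a compact space, in notation we adopt here:

```latex
\text{monotonicity:}\quad f \le g \;\Rightarrow\; \theta(f) \le \theta(g)

\text{concavity:}\quad \theta\big(p f + (1-p) g\big) \;\ge\; p\,\theta(f) + (1-p)\,\theta(g)

\text{Lipschitzness:}\quad |\theta(f) - \theta(g)| \;\le\; \lambda\,\|f - g\|_\infty

\text{CAS:}\quad \text{some compact set is an almost-support (automatic here: the space is compact)}

\text{normalization:}\quad \theta(1) = 1,\qquad \theta(0) = 0
```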

T2.1.3 For 3-static to 3-dynamic, we have

(which fulfills one of the 3-dynamic conditions), and

Since is an infradistribution, is too, and for the condition on , we have:

and we’re done, since had the relevant condition.

T2.1.4 For 3-dynamic to 3-static, we have

First, we must show that the niceness conditions are fulfilled for the infinite semidirect product to be defined at all. maps to . This is clearly a continuous mapping, so in particular, it’s lower-semicontinuous. Dirac-delta distributions are 1-Lipschitz when interpreted as inframeasures (all probability distributions have that property, actually). The compact-shared-CAS condition is redundant because is already a compact space. And all probability distributions (dirac-delta distributions are probability distributions) map constants to the same constant, so we get the increase-in-constants property and the 1 maps to 1 property. So we can indeed take the infinite semidirect product of the .

Now, we’ll show that unrolling like this just produces exactly. We’ll use for the initial state (pair of policy and destiny, also written as ), and for the successive unrolled history of states, actions, and observations. First, by unpacking ’s definition,

Then we unpack the projection and semidirect product

And pack things up into a projection

And then we can observe that unrolling the initial state forever via “advance the destiny 1 step, popping actions and observations off, and advance the policy”, when you project down to just the actions and observations (not the intermediate states), yields just the point distribution on that destiny, so we get:

And then substitute the value of that dirac-delta in to get

So, our . Now we can go

and we’re done, showing had the relevant condition from having it. Since is an infradistribution, is too.
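The key step in this argument — that deterministically unrolling a policy-destiny pair via the transition kernel and projecting to the action-observation coordinates recovers exactly the destiny — can be condensed as (notation ours: $\delta$ for the Dirac-delta, $K$ the transition kernel, $\mathrm{pr}$ the projection, $\pi$ the policy, $h$ the destiny):

```latex
\mathrm{pr}_{(A \times O)^{\omega}}\!\left( \delta_{(\pi,\,h)} \ltimes K^{\omega} \right)
\;=\; \delta_{h},
```

since each step pops one action-observation pair off $h$ with certainty and advances the policy, so projecting away the intermediate states leaves the point distribution on $h$ itself.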

T2.1.5 For 3-dynamic to 1-dynamic, we have

For the action (which is compatible with the start of the history), we have

Applying our definition, we have

Unpacking the projection, we have (where is a state)

Unpacking the update, we have

And then we consider that is just the dirac-delta distribution on , so we have

Substitute the values in, and we have

and then pack up the dirac-delta and we get

So, is just the dirac-delta distribution on and the pair of and the policy advanced one step, as it should be. For that aren’t as it should be, we have

Applying our definition, we have

Unpacking the projection, we have (where is a state)

Unpacking the update, we have

And then we consider that is just the dirac-delta distribution on , so we have

Substitute the values in, and remembering that and are assumed to be different, and we have (regardless of )

Or infinity for the other type signature. Therefore,

So our transition kernel works out. For the other conditions on , just observe that , and use the extreme similarity of their defining conditions.

T2.1.6 For 1-dynamic to 3-dynamic, we have , so we can use our defining property of again and clearly show that has the relevant defining properties; that just leaves cleaning up the defining property for the infrakernel. We remember that

Therefore, we have:

This occurs because the history must be compatible with the policy. Then, we can unpack as:

We unpack our , getting

Then, we observe that for any , . So, must be , and we have

Now, is the dirac-delta distribution on , so making that substitution in, we have

I.e.

And this holds for all , so we get our desired result that

T2.1.7 For 1-static to 1-dynamic, we just have

So this automatically makes the infrakernel have the desired properties. For showing the relevant properties of , we just copy the proof at the start of getting the properties for from being a belief function, since and are both defined in the same way.

T2.1.8 For 1-dynamic to 1-static, we have

We’ll work on getting that latter quantity into a better form, but first we have to verify that
is even well-defined at all; we need the to fulfill the niceness conditions. They’re defined as

For lower-semicontinuity, is lower-semicontinuous, because it returns on every action but one (this is a clopen set), and for the action that pairs up with the initial state, the policy-tagged history just has the history and policy advanced, which is continuous. behaves similarly, continuous variation in input leads to continuous variation in output, because eventually the behavior of the policy settles down to a fixed action as the history of actions and observations stabilizes amongst one of the finitely many options.

For making inframeasures, always returns a dirac-delta distribution or , so we’re good there. Similarly, for 1-Lipschitzness, both and dirac-delta distributions are 1-Lipschitz. For compact-shared CAS, as the target space is , it’s compact and you don’t need to worry about that. Finally, both dirac-delta distributions and map constants to either the same constant or higher, and in the type signature, both and dirac-delta distributions map 1 to 1. So all niceness conditions are fulfilled.
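Schematically, the niceness conditions checked in the previous two paragraphs, for an infrakernel $K$ (symbols ours), are:

```latex
\text{(i)}\;\; x \mapsto K(x)(f) \text{ is lower-semicontinuous for lower-semicontinuous } f

\text{(ii)}\;\; \text{each } K(x) \text{ is an inframeasure, and is 1-Lipschitz}

\text{(iii)}\;\; \text{compact-shared CAS (automatic here: the target space is compact)}

\text{(iv)}\;\; K(x)(c) \ge c \text{ for constants } c,
\qquad K(x)(1) = 1 \text{ in the relevant type signature}
```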

Let’s work on unpacking the value

unpacking the projection and semidirect product, we get

And unpacking the update, we have

We can make a substitution which changes nothing (in the scope of the indicator function where ), that

and then we can write this as a projection in the inside

And then we can realize that when we unroll , it just always deterministically unrolls the history, along with the policy, since for to be an input state, must be compatible with , so picking actions from means there are never any action mismatches. Projecting this down to the actions and observations just yields the history exactly. So we have

Substituting the dirac-delta value in, we have

And we pack the update and projection back up to get

Since was arbitrary, we have

And so, an alternate way to define is as

We can run through the same proof that is indeed a belief function, back from the 3-static to 1-static case, because fulfills all the same properties that did.

Alright, now we have, for all four corners, a condition that’s basically “came from an acausal belief function” that is preserved under morphisms. Now we need to show that all 8 back-and-forth directions are identity.

T2.2.1 For 1-static to 3-static back to 1-static, we want to show

This is (we assume the hard version first)

Then unpack the projection and update

Then unpack the semidirect product and to get

Then realize that the minimizer is picking exactly, otherwise you’d just get maximal value and all the agree on what a maximal input maps to.

And we’re done.

T2.2.2 For 3-static to 1-static to 3-static being identity, we want to show

This is just exactly the condition we’re assuming on , so we trivially fulfill this (the nontrivial part was shoved into showing that going belief-function to infradistribution over policy-tagged states produced this property).

T2.2.3 For 3-static to 3-dynamic back to 3-static, we need to show that

For this, we abbreviate states beyond the first one as , so is the initial state, and is an element of . Taking the complicated part, it’s

And then we unpack the semidirect product a bit, to get

And then we can write this as a projection, to get

Now, we observe that because , when repeatedly unrolled, is just emitting the actions and observations from the starting destiny/​history, this projection is just the point distribution on the action-observation sequence that is h.

Then we evaluate the expectation, yielding

And we’re done.

T2.2.4 For going 3-dynamic to 3-static to 3-dynamic, we need that

To show this, just reuse the exact same proof from above, just with instead of . Also, works out.

At this point, we’ve established isomorphism for two sides of the square. We’ve just got two more sides left to address, then showing that going around the square results in identity instead of a nontrivial automorphism.

T2.2.5,6 For translating back and forth between 3-dynamic and 1-dynamic, we observe that both translations keep the initial infradistribution over policy-tagged destinies the same, and our 3-dynamic to 1-dynamic, and 1-dynamic to 3-dynamic proofs verified that the and behave as they should when they go back and forth, so we don’t need to worry about this.

T2.2.7 Next is 1-dynamic to 1-static to 1-dynamic. The infrakernel is guaranteed to have the right form, we just need to show that is unchanged. So, we must show

But, since our condition on 1-dynamic (that we showed is a property of applying the morphism to go from any belief function to first-person dynamic) was

The only thing we need to show is that, regardless of ,

For if we could show that, then we could go:

and we’d be done. So, we’ll show that

instead, as that’s all we need. Again, we’ll massage the more complicated side until we get it into the simple form for the other side.

We undo the projection, and abbreviate states as , and action-observation-state sequences as , to yield

Then we reexpress the semidirect product, to yield

We unpack the initial state a fuzz to yield

We apply the update to yield

Then, we realize that if , we can swap out for in the relevant function associated with it.

Now, we can partially pack this up as a projection, to get

At this point, we can realize something interesting. is “start with an initial policy of and h compatible with , then repeatedly feed in actions as if they were created by forever, and forget about the states, leaving just the action-observation sequence”. Now, since the action (being produced by ) always lines up with what the encoded in the state would do, this process never hits the infradistribution, it keeps going on and on and advancing the history with no issue. In particular, the history unrolls to completion, and the resulting action-observation sequence you get would just be the original destiny packed up in the state. So, this infradistribution is just . Making this substitution in, we get

Now we pack up the update again, to get

And then realize that this is a projection, to get

And we’re done, we showed our desired result to establish that going first-person dynamic to static back to dynamic is identity.

T2.2.8 Time for the last leg of the square, that going first-person static to first-person dynamic and first-person static is identity. We want to show that, regardless of ,

Again, like usual, we’ll take the complicated thing and repeatedly rewrite it to reach the simpler thing, for an arbitrary function.

Using for states, we can rewrite the projection, to get

Then we rewrite the semidirect product, and unpack the initial state into a policy and destiny h compatible with , to get


We can then rewrite the interior as a projection to get

Now, it’s time to rewrite the update. It rewrites as:

Now, since inside the scope of the indicator function, we can rewrite as

And use our usual argument from before that is just the dirac-delta distribution on h, to get

Now, we can unpack the semidirect product as well as what means, to get:

Now, if , then that inner function turns into 1 (or infinity) which is assigned maximum value by all , so it’s minimized when , so we get:

And we’re done with the last “doing this path is identity” result. All that remains in our proof is just showing that taking both paths from one corner to another corner makes the same result, to show that going around in a loop is identity, ruling out nontrivial automorphisms.

T2.3 The easiest pair of corners for this is going from first-person static to third-person dynamic in both ways. Obviously, the transition kernel would be the same no matter which path you took, which just leaves verifying that the starting infradistribution is the same. Down one path, we have

Down the other path, we have

Obviously, both these paths produce the same result when you try to define in both ways from . And we’re done!

Theorem 3: Pseudocausal Commutative Square: The following diagram commutes for any choice of pseudocausal belief function . Any choice of infradistribution where also makes this diagram commute.

Again, the work we need to put in is getting conditions in the four corners that are “came from a pseudocausal belief function”; then, for phase 1, verifying that all 8 morphisms preserve the relevant property in the target if the source had it; then, for phase 2, verifying that all 8 ways of going from a corner to an adjacent corner and back result in identity, to get 4 isomorphisms; then finally showing that starting in one corner and going to the other corner by two different paths produces the same result, to rule out nontrivial automorphisms.

The property for third-person static is

The property for third-person dynamic is and

The property for first-person static is pseudocausality,

The property for first-person dynamic is and when , and when .

Time to show preservation of these properties by all the morphisms, as well as that the translations make a belief function/​infradistribution.

Our first step is showing that all 8 morphisms induce the relevant property in the target if the source has the property.

T3.1.1 3-static to 1-static. Our definition is

and we must check the five conditions of a belief function, as well as pseudocausality.

T3.1.1.1 Checking the five conditions of a belief function proceeds in almost exactly the same way as checking the five conditions of a belief function in the “Acausal Commutative Square” theorem. We leave it as an exercise to the reader, and will just elaborate on the only nontrivial new argument needed.

The only nontrivial modification is in our proof of lower-semicontinuity. In order for it to work, we need that for any m, the subset of where or or … or is closed.

We can rephrase the set as the projection down to the coordinate of the set

is compact, and is compact as well, since it limits to a point and the limit point is included. So that product is a compact set. The set is closed. So is the intersection of a compact and a closed set, and so is compact. And projections of compact sets are compact, and compactness implies closure. So, the set of histories is closed regardless of m, even if .
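The topological argument here compresses to a standard chain (notation ours: $K$, $L$ compact, $F$ closed, $\mathrm{pr}$ a continuous projection; the last implication uses that the ambient space is Hausdorff):

```latex
(K \times L) \cap F \text{ compact}
\;\Longrightarrow\; \mathrm{pr}\big((K \times L) \cap F\big) \text{ compact}
\;\Longrightarrow\; \mathrm{pr}\big((K \times L) \cap F\big) \text{ closed}.
```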

The rest of the proof works out with no issue, so translating over an infradistribution makes a belief function.

T3.1.1.2 We still have to check pseudocausality however. Our translation is

and we want to show

Reexpressing this desired statement in terms of , we have

Let and be arbitrary. Then the left side is

We unpack the update

And unpack the update again, to get

Looking at this, the function is when h is compatible with both and , and 1 otherwise. This is a greater function than when h is compatible with just , and 1 otherwise, so by monotonicity for infradistributions, we can get:

and pack up the update to get

And we’re done, we get pseudocausality.

T3.1.2 1-static to 3-static. Our translation is

We’ll show the infradistribution conditions on . First, to show it’s even well-defined, we have

That inner function is lower-semicontinuous in , because was assumed to have that property in its first argument. Since the inner function is lower-semicontinuous, it can be evaluated.

Now for the condition that

We start with and unpack the definition of , to get

And then undo the projection and semidirect product and what means, to get

And then, by pseudocausality for (for all , and also because is supported entirely on histories compatible with ), we can swap out for

And then unpack the update, to get

Now we pack up what means, and the semidirect product, to get

Write this as a projection

Pack up the definition of to get

Pack up the definition of “1-update on ” to get

Pack up what means and the semidirect product and the projection to get

So we have

as desired, since was arbitrary.

Now to verify the infradistribution conditions on . This proof is pretty much identical to the proof in the “Acausal Commutative Square” theorem, interested readers can fill it in.

T3.1.3 For 3-static to 3-dynamic, we have (which fulfills one of the 3-dynamic conditions), and

Since is an infradistribution, is too, and for the condition on , we have:

and we’re done, since had the relevant condition.

T3.1.4 For 3-dynamic to 3-static, we have

First, we must show that the niceness conditions are fulfilled for the infinite semidirect product to be defined. maps to . This is clearly a continuous mapping, so in particular, it’s lower-semicontinuous. Dirac-delta distributions are 1-Lipschitz when interpreted as inframeasures (all probability distributions have that property, actually). The compact-shared-CAS condition is redundant because is already a compact space. And all probability distributions (dirac-delta distributions are probability distributions) map constants to the same constant, so we get the increase-in-constants property and the 1 maps to 1 property. So we can indeed take the infinite semidirect product of the .

Now, we’ll show that unrolling like this just produces exactly. We’ll use for the initial destiny state, and for the successive unrolled history of states, actions, and observations. First, by unpacking ’s definition,

Then we unpack the projection and semidirect product

And pack things up into a projection

And then we can observe that unrolling the initial destiny forever via “advance the destiny 1 step, popping actions and observations off”, when you project down to just the actions and observations, yields just the point distribution on that destiny, so we get:

And then substitute the value of that dirac-delta in to get

So, our . Now we can go

and we’re done, showing had the relevant condition from having it. Since is an infradistribution, is too.

T3.1.5 For 3-dynamic to 1-dynamic, we have

Let’s write as (unpacking the destiny a bit). Now, for the action (which is compatible with the start of the history), we have

Applying our definition, we have

Unpacking the projection, we have

Unpacking the update, we have

And then we consider that is just the dirac-delta distribution on , so we have

Substitute the values in, and we have

So, is just the dirac-delta distribution on and , as it should be.

For that aren’t as it should be, we have

Applying our definition, we have

Unpacking the projection, we have

Unpacking the update, we have

And then we consider that is just the dirac-delta distribution on , so we have

and because , substituting the dirac-delta in produces 1 (or infinity), ie, . Therefore,

So our transition kernel works out. For the other conditions, just observe that , and use the extreme similarity of their defining conditions.

T3.1.6 For 1-dynamic to 3-dynamic, we have , so we can use our defining property again and clearly show that the resulting 3-dynamic starting infradistribution has the relevant property; that just leaves cleaning up the defining property for the infrakernel. We remember that

Therefore, we have:

Then, we can unpack the semidirect product as:

We unpack our , getting

Then, we observe that for any , . So, must be , and we have

Now, is the dirac-delta distribution on , so making that substitution in, we have

And this holds for all , so we get our desired result that

T3.1.7: For 1-static to 1-dynamic, we just have

So this automatically makes the infrakernel have the desired properties. For showing the relevant property of , we just copy the proof at the start of getting the properties for from being a belief function, since and are both defined in the same way.

T3.1.8 For 1-dynamic to 1-static, we have

We’ll work on getting that latter quantity into a better form, but first we have to verify that
has the fulfilling the niceness conditions. It’s defined as

For lower-semicontinuity, is lower-semicontinuous, because it returns on every action but one (this is a clopen set), and for the action that pairs up with the initial destiny, it just advances the destiny, which is continuous. behaves similarly, continuous variation in input leads to continuous variation in output, because eventually the behavior of the policy settles down to a fixed action as the history of actions and observations stabilizes amongst one of the finitely many options.

For making inframeasures, always returns a dirac-delta distribution or , so we’re good there. Similarly, for 1-Lipschitzness, both and dirac-delta distributions are 1-Lipschitz. For compact-shared CAS, as the target space is , it’s compact and you don’t need to worry about that. Finally, both dirac-delta distributions and map constants to either the same constant or higher, and in the type signature, both and dirac-delta distributions map 1 to 1. So all niceness conditions are fulfilled.
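As a toy illustration of the kernel’s behavior described above, here is a minimal Python sketch. The representation of destinies as finite lists of (action, observation) pairs, and `BOTTOM` as a stand-in for the maximal point, are assumptions for illustration only, not the actual construction:

```python
# Toy sketch: a destiny is a finite list of (action, observation) pairs.
# BOTTOM stands in for the point that maps everything to maximal value.
BOTTOM = object()

def kernel(destiny, action):
    """Return BOTTOM on every action but one; for the action that pairs up
    with the initial destiny, just advance the destiny."""
    if not destiny:
        return BOTTOM
    next_action, observation = destiny[0]
    if action != next_action:
        return BOTTOM  # every other action is sent to the maximal point
    # advance the destiny: emit the observation, keep the tail
    return (observation, destiny[1:])
```

Note that the map is a dirac-delta-or-BOTTOM in each case, matching the niceness argument above.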

Let’s work on unpacking the value

unpacking the projection and semidirect product, we get

And then we can write this as a projection on the inside

One of two things can happen. In the first case, h is compatible with , so playing against it never hits , and it unrolls to completion, and projecting down just yields the dirac-delta distribution on h itself. In the second case, h is incompatible with , so eventually we hit , which maps everything to maximal value (either 1 or infinity). Thus, we can write this as an indicator function
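The case split above can be made concrete with a small sketch, under the same toy assumptions (destinies as finite action-observation lists, policies as functions from histories to actions; names are illustrative only):

```python
def compatible(destiny, policy):
    """A destiny h is compatible with a policy if, at every step, the
    policy's chosen action matches the action h plays; otherwise unrolling
    eventually hits the maximal point."""
    history = []
    for action, observation in destiny:
        if policy(history) != action:
            return False  # incompatible: unrolling hits the maximal point
        history.append((action, observation))
    return True  # compatible: unrolls to completion, dirac-delta on h itself

def expectation(destiny, policy, f, maximal_value=1.0):
    """Indicator-function form of the expectation: f(h) when h is compatible
    with the policy, the maximal value (1 or infinity) otherwise."""
    return f(destiny) if compatible(destiny, policy) else maximal_value
```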

Substituting the dirac-delta value in, we have

And we pack the update back up to get

Since was arbitrary, we have

And so, an alternate way to define is as

We can then run through the same proof that is indeed a belief function, back from the 3-static to 1-static case, because fulfills all the same properties that did.

Alright, now we have, for all four corners, a condition that’s basically “came from a pseudocausal belief function” that is preserved under morphisms. Now we need to show that all 8 back-and-forth directions are identity.

T3.2.1 For 1-static to 3-static back to 1-static, we want to show

This is (we assume the hard version first)

Then unpack the update

And the projection

Then unpack the semidirect product and to get

then rewrite this as an update

And then we can apply pseudocausality of the belief function to get that the minimizer must pick exactly , because , and so we get

and we’re done.

T3.2.2 For 3-static to 1-static to 3-static, we must show

Well, the property for a third-person static is exactly that.

T3.2.3 For 3-static to 3-dynamic to 3-static, we want that

Taking the complicated part, it’s

We unpack the projection

And then we unpack the semidirect product a bit, to get

And then we can write this as a projection, to get

Now, we observe that because , when repeatedly unrolled, is just emitting the actions and observations from the starting destiny, this projection is just the point distribution on the action-observation sequence that is .

Then we evaluate the expectation, yielding

And we’re done.
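The observation that repeated unrolling just re-emits the starting destiny’s own actions and observations can be checked in the same toy representation (a sketch under the assumed encoding of destinies as finite action-observation lists, not the actual construction):

```python
def unroll(destiny):
    """Repeatedly step the kernel that, from a remaining destiny, emits its
    own next action-observation pair; projecting down, we recover exactly
    the starting destiny as a point (dirac-delta) outcome."""
    emitted, remaining = [], list(destiny)
    while remaining:
        action, observation = remaining[0]  # the kernel emits the destiny's own pair
        emitted.append((action, observation))
        remaining = remaining[1:]
    return emitted  # the point outcome concentrated on the destiny itself
```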

T3.2.4 For going 3-dynamic to 3-static to 3-dynamic, we need that

To show this, just reuse the exact same proof from above, with instead of . Also, works out as it should.

At this point, we’ve established isomorphism for two sides of the square. We’ve just got two more sides left to address, then showing that going around the square results in identity instead of a nontrivial automorphism.

T3.2.5,6 For translating back and forth between 3-dynamic and 1-dynamic, we observe that both translations keep the initial infradistribution over destinies the same, and our 3-dynamic to 1-dynamic and 1-dynamic to 3-dynamic proofs already verified that the and behave as they should when going back and forth, so we don’t need to worry about this.

T3.2.7 Next is 1-dynamic to 1-static to 1-dynamic. The infrakernel is guaranteed to have the right form; we just need to show that is unchanged. So, we must show

But, since our condition on 1-dynamic (that we showed is a property of applying the morphism to go from any belief function to first-person dynamic) was

The only thing we need to show is that, regardless of ,

For if we could show that, then we could go:

and we’d be done. However, from the 1-dynamic to 1-static proof of property preservation, we already showed that

back then, so we’re just done.

T3.2.8 Time for the last leg of the square, that going first-person static to dynamic and back to static is identity. We want to show that, regardless of ,

To do this, we use the result from the 1-dynamic to 1-static proof of property preservation, that

Applying this fact with as an abbreviation for , we can rewrite our proof goal equivalently as

But we showed this exact result when we showed that 1-static to 3-static to 1-static was identity, so we’re done.

And we’re done with the last “doing this path is identity” result. All that remains in our proof is showing that taking both paths from one corner to another produces the same result, which shows that going around in a loop is identity, ruling out nontrivial automorphisms.

T3.3 The easiest pair of corners for this is going from first-person static to third-person dynamic both ways. Obviously, the transition kernel would be the same no matter which path you took, which just leaves verifying that the starting infradistribution is the same. Down one path, we have

Down the other path, we have

Obviously, both these paths produce the same result when you try to define in both ways from . And we’re done!
