Infra-Bayesian physicalism: proofs part II

This post is an appendix to “Infra-Bayesian physicalism: a formal theory of naturalized induction”.

Proposition 2.16: Consider some Γ0, Γ1, Φ and Θ∈□c(Γ0×Γ1×Φ). Define π:elΓ0×Γ1→elΓ0 by π(y,z,α):=(y,{y′∈Γ0∣(y′,z)∈α}). Then, (π×idΦ)∗BrΓ0×Γ1(Θ)⊆prelΓ0×ΦBrΓ0(Θ)⊆BrΓ0(prΓ0×ΦΘ)
We can prove the second subset inclusion directly as a corollary of Proposition 2.10, just let the t of Proposition 2.10 be the projection function Γ1×Φ→Φ, so that just leaves the first subset inclusion direction. If you’ve seen the proofs so far, you know we do a thing where we try to show subset inclusion with expectations of functions and inequalities instead. And that the proofs all proceed by transforming the expectations until we get a maximum over contribution expectation values, and that’s always where the hard part of proving the inequalities shows up. So, let’s just get that part over with, an interested reader can work it out with previous proofs as a guide. Unusually, we’ll be keeping track of identity functions here.
Plugging in some f, and doing our usual activities to get every term into the appropriate form, we can get this result if we manage to show that
maxθ′∈BrΓ0×Γ1(Θ)(π×idΦ)∗θ′(λy0α0x.f(y0,x,α0))≤maxθ∈BrΓ0(Θ)prelΓ0×Φθ(λy0α0x.f(y0,x,α0))
So, to establish this, we’ll show that, given some θ′∈BrΓ0×Γ1(Θ), we have
(π×idΓ1×Φ)∗(θ′)∈BrΓ0(Θ), and that
prelΓ0×Φ((π×idΓ1×Φ)∗θ′)=(π×idΦ)∗θ′
because, if we show that, then it means that BrΓ0(Θ) is a rich enough set for the right-hand side of the equation to match anything the left-hand-side can put out.
First off,
prelΓ0×Φ((π×idΓ1×Φ)∗θ′)=(π×idΦ)∗θ′
is pretty trivial to show. The only difference between the two processes is that the Γ1 coordinate of θ′ is discarded immediately on the right-hand-side, and it’s preserved for one step and then discarded on the second step for the left-hand-side.
Now for our inequality of interest. Let θ′∈BrΓ0×Γ1(Θ), and we’re trying to show that
(π×idΓ1×Φ)∗(θ′)∈BrΓ0(Θ)
First off, we show the support condition for (π×idΓ1×Φ)∗(θ′), which is somewhat nontrivial this time around. We start off with a guarantee that (y0,y1)∈α. This happens iff
y0∈{y′0∣(y′0,y1)∈α}
and this set is exactly the 2Γ0 component of π(y0,y1,α), which is what the α0 coordinate becomes after the pushforward.
And so, we get that y0∈α0 is guaranteed for that pushforward, support condition established.
Endofunction condition time. It’s important to remember that we want to treat elΓ0 as the computation side of things, and Γ1×Φ as the environment side of things, for the bridge transform we’re working with. Fix s:Γ0→Γ0 and g:Γ0×Γ1×Φ→[0,1]. Begin.
(π×idΓ1×Φ)∗θ′(λy0y1α0x.χs(y0)∈α0g(s(y0),y1,x))=θ′(λy0y1αx.χs(y0)∈π(y0,y1,α)2Γ0g(s(y0),y1,x))
Let’s unpack precisely what that set is.
=θ′(λy0y1αx.χs(y0)∈{y′0|(y′0,y1)∈α}g(s(y0),y1,x))=θ′(λy0y1αx.χ(s(y0),y1)∈αg(s(y0),y1,x))
And we can rewrite the endofunction a little bit
=θ′(λy0y1αx.χ(s×idΓ1)(y0,y1)∈αg((s×idΓ1)(y0,y1),x))
And finally apply our endofunction condition, since we’ve now got the function in a form that’s treating y0,y1 as part of the computational universe…
≤Θ(λy0y1x.g(y0,y1,x))
And we’re done, this establishes our desired result. ■
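Since every proof in this appendix reduces to checking the same two conditions, here is a minimal computational sketch (my own framing, not from the post) of what membership in Br(Θ) amounts to over finite sets, for the single-Γ version of the bridge transform used below. A contribution is represented as a dict keyed by (y, α, x) with α a frozenset; `Theta_expect` is assumed to compute Θ's expectation of a function g(y, x); and `test_functions` is a finite family standing in for the quantifier over all g:Γ×Φ→[0,1].

```python
from itertools import product

def in_bridge_transform(theta, Theta_expect, Gamma, test_functions):
    """Check the support and endofunction conditions for one contribution theta."""
    # Support condition: theta puts no mass on points (y, alpha, x) with y not in alpha.
    if any(m > 0 and y not in alpha for (y, alpha, x), m in theta.items()):
        return False
    # Endofunction condition: for every s: Gamma -> Gamma and every test g,
    # theta(chi_{s(y) in alpha} * g(s(y), x)) <= Theta(g).
    for values in product(Gamma, repeat=len(Gamma)):
        s = dict(zip(Gamma, values))          # one endofunction s: Gamma -> Gamma
        for g in test_functions:
            lhs = sum(m * g(s[y], x)
                      for (y, alpha, x), m in theta.items() if s[y] in alpha)
            if lhs > Theta_expect(g):
                return False
    return True
```

On a genuinely finite Γ this enumerates every endofunction s exactly; the only cheat is that a finite `test_functions` family stands in for all [0,1]-valued g.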
Proposition 2.17: Br(Θ) is a continuous function of Θ.
The way this proof will work is by describing a composition of functions that makes Br(Θ) from Θ, and then showing that each of these functions is continuous, if elΓ×Φ is a finite set.
Claim: The bridge transform of some Θ is equal to (using χelΓ to denote restricting an ultradistribution to the event y∈α and χ−1elΓ to denote the inverse of said function, mapping an ultradistribution on elΓ to the largest ultradistribution that could have produced it via restriction)
χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ)))))
Breaking down the unfamiliar notation, the type of pr is elΓ×Φ→Γ×Φ, just the usual projection. That asterisk up top is pullback along that function. The type of ι is elΓ×Φ→Γ×2Γ×Φ. And s∗ is pullback along the function Γ×2Γ×Φ→Γ×2Γ×Φ given by (s,id2Γ,idΦ).
Let’s unpack the exact conditions that cause a θ to lie in the set
χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ)))))
First off, a θ is in this set iff it is supported over the event y∈α, and it lies in the set ⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ))))
Which occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, θ lies in the set
s∗(χ−1elΓ(ι∗(pr∗(Θ))))
Which occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, s∗(θ) lies in the set
χ−1elΓ(ι∗(pr∗(Θ)))
Which occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, χelΓ(s∗(θ)) lies in the set ι∗(pr∗(Θ))
Now, ι is just doing a little bit of type conversion, so we’re justified in ignoring it. Anyway, the previous thing occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, pr∗(χelΓ(s∗(θ)))∈Θ.
Which happens iff θ is supported over the event y∈α and for all s:Γ→Γ and g:Γ×Φ→[0,1],
pr∗(χelΓ(s∗(θ)))(λyx.g(y,x))≤Θ(λyx.g(y,x))
However, unpacking the left-hand side, we get
pr∗(χelΓ(s∗(θ)))(λyx.g(y,x))=χelΓ(s∗(θ))(λyαx.g(y,x))=s∗(θ)(λyαx.χy∈αg(y,x))=θ(λyαx.χs(y)∈αg(s(y),x))
Which is the exact condition for θ to lie in the bridge transform. So, we have an equivalence.
Now, since we’ve phrased the bridge transform as
χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ)))))
We just need to establish that when all the sets are finite, pullbacks are continuous, pushforwards are continuous, un-restrictions are continuous, intersections are continuous, and restrictions are continuous. Then, this would just be a particularly fancy continuous function, and accordingly, if Θn limits to Θ, then Br(Θn) limits to Br(Θ).
Let’s establish that when the sets are finite, pullbacks are continuous. Let g:X→Y, and Y and X be finite sets, and ψ∈□Y. Then, we have
g∗(ψ)(λx.f(x)):=ψ(λy.maxx∈g−1(y)f(x))
With the convention that maximizing over the empty set produces a value of 0. That is an alternate phrasing of pullback. We can then go
limn→∞d(g∗(ψn),g∗(ψ))=limn→∞supf:X→[0,1]|g∗(ψn)(f)−g∗(ψ)(f)|=limn→∞supf:X→[0,1]|ψn(λy.maxx∈g−1(y)f(x))−ψ(λy.maxx∈g−1(y)f(x))|≤limn→∞suph:Y→[0,1]|ψn(h)−ψ(h)|=limn→∞d(ψn,ψ)=0
Admittedly, this isn’t quite what our usual modified KR metric looks like. The reason we can do this is because we’re just dealing with functions in [0,1], so the norm part of the modified KR metric doesn’t matter, and since our sets are finite, we can say that all points are distance 1 from each other, so all functions are 1-Lipschitz, and then the two metrics coincide. So, pullback along any function is continuous.
For pushforward, it’s easy because, if ψ∈□X, then we’ve got
limn→∞d(g∗(ψn),g∗(ψ))=limn→∞suph:Y→[0,1]|g∗(ψn)(h)−g∗(ψ)(h)|=limn→∞suph:Y→[0,1]|ψn(λx.h(g(x)))−ψ(λx.h(g(x)))|≤limn→∞supf:X→[0,1]|ψn(f)−ψ(f)|=limn→∞d(ψn,ψ)=0
For showing restrictions continuous, for the set E⊆X that we’re updating on,
limn→∞d(χE(ψn),χE(ψ))=limn→∞supf:X→[0,1]|χE(ψn)(f)−χE(ψ)(f)|=limn→∞supf:X→[0,1]|ψn(λx.χx∈Ef(x))−ψ(λx.χx∈Ef(x))|≤limn→∞supf:X→[0,1]|ψn(f)−ψ(f)|=limn→∞d(ψn,ψ)=0
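Before moving on to intersections, here is a small sketch (my own representation, not from the post) of the three operations just used, over finite sets: an ultracontribution ψ is a finite list of contributions, each a dict from points to mass, and ψ(f) is the max over contributions of the integral of f.

```python
def expect(psi, f):
    """psi(f) = max over contributions theta of sum_x theta[x] * f(x)."""
    return max(sum(m * f(x) for x, m in theta.items()) for theta in psi)

def pushforward(psi, g):
    """g_*(psi): move each contribution along g, so g_*(psi)(h) = psi(h . g)."""
    out = []
    for theta in psi:
        pushed = {}
        for x, m in theta.items():
            pushed[g(x)] = pushed.get(g(x), 0.0) + m
        out.append(pushed)
    return out

def pullback_expect(psi, g, X, f):
    """(g^* psi)(f) = psi(lambda y: max over x in g^{-1}(y) of f(x)), empty max = 0."""
    return expect(psi, lambda y: max((f(x) for x in X if g(x) == y), default=0.0))

def restrict(psi, E):
    """chi_E(psi): zero out mass outside E, so chi_E(psi)(f) = psi(chi_E * f)."""
    return [{x: (m if x in E else 0.0) for x, m in theta.items()} for theta in psi]
```

Written this way, each of the three continuity computations above is just the observation that the operation evaluates ψ (or ψn) on some fixed test function built from f, so the sup-difference of expectations can't grow.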
For intersections… that will take a bit more work. We’ll have to use the equivalent formulation of closeness, that ψn limits to ψ iff the Hausdorff distance between the corresponding sets (according to the generalized KR metric) limits to 0. So, our task is to assume that ψn limits to ψ, and ϕn limits to ϕ, and show that ψn∩ϕn limits to ψ∩ϕ. The bound we’ll manage to prove is that
d(ψn∩ϕn,ψ∩ϕ)≤|X|max(d(ψn,ψ),d(ϕn,ϕ))
Where |X| is the number of elements in the finite set X. Here’s the basic argument. For any particular point in the set ψn, there’s a nearby point in ψ (since the Hausdorff distance is low) with only ϵ measure moved around or deleted. So, in particular, if all the measure moved or deleted was just deleted from ψn instead, then that resulting contribution would be below the nearby contribution in ψ that we picked, and so it would lie in ψ as well due to downwards closure.
So, in particular, if ψn and ψ only have a Hausdorff distance of ϵ, then, taking any contribution in ψn and subtracting ϵ measure from all points (if possible, if not, just remove measure till you’re at 0) is guaranteed to make a point in ψ, and vice-versa.
And a corollary of that is that, given any contribution in ψn∩ϕn, the “subtract max(d(ψn,ψ),d(ϕn,ϕ)) measure from each point” contribution is in ψ, also in ϕ, and at a maximum distance of |X|max(d(ψn,ψ),d(ϕn,ϕ)) from the original contribution. And this argument can be reversed to show that the limit of the intersections is the intersection of the limits (because the Hausdorff distance between the two goes to 0), so we do in fact have intersection being continuous.
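Here is the bookkeeping behind that |X|max(...) bound, with made-up numbers: clipping every point's mass down by ϵ deletes at most |X|·ϵ total mass, and (with [0,1]-valued functions and the metric used here) the deleted mass bounds how far the clipped contribution can be from the original.

```python
eps = 0.05
theta = {"x1": 0.30, "x2": 0.02, "x3": 0.40}    # a contribution in psi_n (toy numbers)

# Subtract eps from every point, stopping at zero, as in the argument above.
theta_eps = {x: max(m - eps, 0.0) for x, m in theta.items()}

deleted = sum(theta[x] - theta_eps[x] for x in theta)
assert deleted <= len(theta) * eps              # 0.05 + 0.02 + 0.05 <= 3 * 0.05
```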
And that just leaves un-restricting. Again, this will take a Hausdorff-distance argument. Fixing some contribution in χ−1E(ψn), it can be broken down into an on-E part θn,E, and an off-E part θn,¬E. When you restrict to E, then θn,E∈ψn. Since ψn is within ϵ of ψ, there’s some θE∈ψ that’s within ϵ of θn,E. Then, let your point in χ−1E(ψ) be θE+θn,¬E (if there’s slightly more than 1 measure there, delete ϵ measure from θn,¬E, or all the measure if there’s less than ϵ present). It’s close to θn,E+θn,¬E because θn,E is close to θE, the other component of it is unchanged, and maybe we deleted a little bit of excess measure which didn’t do much.
This line of argument shows that ψn being close to the limit ψ is sufficient to establish that the un-restriction of the two of them are comparably close together. So we have continuity for that, which is the last thing we needed.
Since we wrote the bridge transform as a composition of continuous functions, we know it’s continuous (as long as all the involved sets are finite). ■
Proposition 3.1: Let X be a finite poset, f:X→R and Θ∈□cX downward closed. Define fmax:X→R by fmax(x):=maxy≤xf(y). Observe that fmax is always non-decreasing. Then, Θ(f)=Θ(fmax).
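A tiny worked example (my own numbers) before the proof: on the chain a ≤ b ≤ c with a non-monotone f, fmax just propagates the running maximum up the order, making it the pointwise-smallest monotone function above f.

```python
# The poset a <= b <= c, encoded as the set of (lower, upper) pairs.
leq = {("a", "a"), ("a", "b"), ("a", "c"), ("b", "b"), ("b", "c"), ("c", "c")}
f = {"a": 0.7, "b": 0.2, "c": 0.5}              # not monotone: dips at b

def f_max(x):
    return max(f[y] for y in f if (y, x) in leq)

# f_max gives {"a": 0.7, "b": 0.7, "c": 0.7}: non-decreasing along the order.
```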
Proof: Pick a θ′∈Θ s.t. θ′(fmax)=Θ(fmax). I.e., a maximizing contribution. Let k:X→X be defined as k:=λx.argmaxy≤xf(y). I.e., it moves a point down to somewhere below it where it can attain the highest value according to f. Now, consider k∗(θ′). It’s present in Θ because Θ was, by assumption, downwards closed, and we just moved all the measure down.
Now, we have
Θ(f)=maxθ∈Θθ(f)≥k∗(θ′)(f)=θ′(λx.f(k(x)))=θ′(λx.f(argmaxy≤xf(y)))=θ′(λx.maxy≤xf(y))=θ′(fmax)=Θ(fmax)≥Θ(f)
And so, all inequalities must be equalities, proving that Θ(f)=Θ(fmax). In order, the steps were: unpacking definitions, using downward closure to conclude that k∗(θ′)∈Θ, unpacking pushforwards, unpacking the definition of k, using that evaluating f at the argmax over y≤x yields the max of f over y≤x, folding the definition of fmax back up, using that θ′ was selected to maximize fmax, and applying monotonicity. Done! ■
Proposition 4.1: Consider some Γ, Φ, a relation Q⊆Γ×Φ and a PUCK Ξ over Q. Let Θ:=⊤Γ⋉Ξ. Then, Br(Θ)=[⊤Γ⋉(susΘ⋊Ξ)]↓=[⊤Γ⋉(Q−1⋊Ξ)]↓
First off, I’m not terribly picky about variable ordering, so I’ll just write our desired proof target as
Br(Θ)=[⊤Γ⋉Ξ⋉susΘ]↓=[⊤Γ⋉Ξ⋉Q−1]↓
The way we’ll do this is by establishing the following result. For all monotone f′:elΓ×Φ→[0,1], we have
Br(Θ)(f′)≤[⊤Γ⋉Ξ⋉susΘ](f′)≤[⊤Γ⋉Ξ⋉Q−1](f′)≤Br(Θ)(f′)
Why does that suffice? Well, assume hypothetically that the result held. Since the inequalities go in a circle, we have equality for all monotone functions. And then, for some non-monotone function f, we can go
Br(Θ)(f)=Br(Θ)(fmax)=[⊤Γ⋉Ξ⋉susΘ](fmax)=[⊤Γ⋉Ξ⋉susΘ]↓(fmax)=[⊤Γ⋉Ξ⋉susΘ]↓(f)
and swap out susΘ for Q−1 to show the other equality, and then we’d have equality of the three ultradistributions on all functions, so they’re equal.
For the equalities in the above equation, the first one arose because of Proposition 2.4 (bridge transforms are always downwards closed) and Proposition 3.1 (downwards-closed things let you swap out f for fmax and it doesn’t affect the value). The second equality arose because fmax is a monotone function and by assumption, we have equality for monotone functions. The third equality would arise because taking the downwards closure doesn’t affect the expectation value of monotone functions. If you add a bunch of contributions made by measure flowing down, that’s just strictly worse from the perspective of a monotone function and doesn’t change expectation value. And the fourth equality arises from Proposition 3.1 again.
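As a sanity check on the third of those equalities, here is a toy computation (invented numbers): moving mass downward, i.e. shrinking the α coordinate, can only lower the value a monotone f sees, so taking the downwards closure adds nothing to the maximum.

```python
# f is monotone in alpha: a bigger alpha gives a bigger value.
f = lambda y, alpha, x: len(alpha) / 3

theta      = {("y", frozenset({"p", "q"}), "x"): 0.6}   # original contribution
theta_down = {("y", frozenset({"p"}), "x"): 0.6}        # same mass, pushed down

ev = lambda t: sum(m * f(*key) for key, m in t.items())
assert ev(theta_down) <= ev(theta)                      # 0.2 <= 0.4
```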
So, we just need to prove the following three inequalities, for monotone functions f.
Br(Θ)(f)≤[⊤Γ⋉Ξ⋉susΘ](f)≤[⊤Γ⋉Ξ⋉Q−1](f)≤Br(Θ)(f)
The first one is easily addressable by Proposition 2.7. By Proposition 2.7 and the definition of Θ, we have
Br(Θ)⊆(Θ⋉susΘ)↓=[⊤Γ⋉Ξ⋉susΘ]↓
And so, for monotone functions f, we have
Br(Θ)(f)≤[⊤Γ⋉Ξ⋉susΘ](f)
Done.
Now to show our second inequality.
(⊤Γ⋉Ξ⋉susΘ)(λyαx.f(y,x,α))=(⊤Γ⋉Ξ)(λyx.δsusΘ(x)(λα.f(y,x,α)))=(⊤Γ⋉Ξ)(λyx.f(y,x,susΘ(x)))
Unpack the definition of the set
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(y′,x)∈supp Θ}))
Unpack the definition of Θ
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(y′,x)∈supp ⊤Γ⋉Ξ}))
The condition (y′,x)∈supp ⊤Γ⋉Ξ is equivalent to x∈supp Ξ(y′). After all, if x∈supp Ξ(y′), the distribution δy′ lies in ⊤Γ, so δy′⋉Ξ would certify that (y′,x)∈supp ⊤Γ⋉Ξ. And if x∉supp Ξ(y′), then no matter the distribution in ⊤Γ or kernel selected from Ξ, if y′ gets picked, then the kernel selected from Ξ isn’t going to be making x along with it. Since we have that iff characterization, we have
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈supp Ξ(y′)}))
Ξ(y′) is the union of a bunch of k(y′) for k∈Π (and convex hull), so its support is equal to the union of the supports for the k(y′).
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈⋃k∈Πsupp k(y′)}))
Then, since each k is a PoCK over Q, k(y′) is the restriction of some measure ϖk to the set Q(y′), which will be written as χQ(y′)ϖk.
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈⋃k∈Πsupp (χQ(y′)ϖk)}))
And now we’re about to get an inequality. f is monotone in the set argument, and supp(χQ(y′)ϖk)⊆Q(y′), so the defining condition becomes easier to fulfill and the associated set gets bigger; that can only increase the value of f, and hence, by monotonicity of expectations, the expectation value. So we get
≤(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈Q(y′)}))
Then restate
=(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(y′,x)∈Q}))=(⊤Γ⋉Ξ)(λyx.f(y,x,Q−1(x)))
And pack back up as a semidirect product.
=(⊤Γ⋉Ξ)(λyx.δQ−1(x)(λα.f(y,x,α)))=(⊤Γ⋉Ξ⋉Q−1)(λyαx.f(y,x,α))
And we have our second ≤ inequality established!
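The driving fact in that chain is the inclusion susΘ(x)⊆Q−1(x): the support of a measure restricted to Q(y′) can only be a subset of Q(y′). A toy illustration (the relation and supports are invented):

```python
# Q as a set of (y, x) pairs, and the support of Theta = Top_Gamma |x Xi,
# which by the PoCK property sits inside Q.
Q          = {("y1", "x1"), ("y1", "x2"), ("y2", "x2")}
supp_Theta = {("y1", "x1"), ("y2", "x2")}

def sus(x):                     # sus_Theta(x) = {y : (y, x) in supp Theta}
    return {y for (y, xx) in supp_Theta if xx == x}

def Q_inv(x):                   # Q^{-1}(x) = {y : (y, x) in Q}
    return {y for (y, xx) in Q if xx == x}

assert all(sus(x) <= Q_inv(x) for x in ("x1", "x2"))
```

Swapping the α argument from susΘ(x) to the larger Q−1(x) then only raises the value of the monotone f, which is exactly the ≤ step above.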
Now, onto the third inequality.
(⊤Γ⋉Ξ⋉Q−1)(λyαx.f(y,x,α))
Unpack the semidirect products
=⊤Γ(λy.Ξ(y)(λx.δQ−1(x)(λα.f(y,x,α))))
And what top means
=maxy∈ΓΞ(y)(λx.δQ−1(x)(λα.f(y,x,α)))
And as for Ξ… well, each Ξ(y) is the convex hull of the various k(y), for k∈Π. So, the expectation for Ξ(y) is the maximum expectation for the various k(y), so we can rewrite as
=maxy∈Γmaxk∈Πk(y)(λx.δQ−1(x)(λα.f(y,x,α)))
Pick a particular y∗ and k∗ that attain the maximal value
=k∗(y∗)(λx.δQ−1(x)(λα.f(y∗,x,α)))
Reexpress a little bit
=δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.f(y,x,α))))
And pack this back up as a semidirect product
=(δy∗⋉k∗⋉Q−1)(λyαx.f(y,x,α))
And then we’ll be showing that this contribution lies in Br(Θ). Once we’ve done that, we can go
≤maxθ′∈Br(Θ)θ′(λyαx.f(y,x,α))=Br(Θ)(λyαx.f(y,x,α))
And we’d be done, having proven the third inequality and the last one to finish up the proof. So, now our proof target switches to showing that (δy∗⋉k∗⋉Q−1)∈Br(Θ). We can show this if we show the support condition and the endofunction condition.
For the support condition, we have
(δy∗⋉k∗⋉Q−1)(λyαx.χy∉α)=δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.χy∉α)))=δy∗(λy.k∗(y)(λx.χy∉Q−1(x)))=k∗(y∗)(λx.χy∗∉Q−1(x))
And then we use that the k∗(y∗) are all of the form “take this measure, restrict it to Q(y∗)”, to get
=(χQ(y∗)ϖk∗)(λx.χy∗∉Q−1(x))=ϖk∗(λx.χx∈Q(y∗)χy∗∉Q−1(x))
Unpacking the definitions, we get
=ϖk∗(λx.χ(y∗,x)∈Qχ(y∗,x)∉Q)=0
And so, this contribution is indeed supported on (y,α) pairs s.t. y∈α.
Now for the endofunction condition. As usual, fix an s and a g.
(δy∗⋉k∗⋉Q−1)(λyαx.χs(y)∈αg(s(y),x))
Unpack the semidirect product
=δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.χs(y)∈αg(s(y),x))))
Plug in the dirac-deltas
=k∗(y∗)(λx.χs(y∗)∈Q−1(x)g(s(y∗),x))
Reexpress the set membership criterion a bit
=k∗(y∗)(λx.χx∈Q(s(y∗))g(s(y∗),x))
And the contribution at the start
=(χQ(y∗)ϖk∗)(λx.χx∈Q(s(y∗))g(s(y∗),x))
Distribute it in as an indicator function.
=ϖk∗(λx.χx∈Q(y∗)χx∈Q(s(y∗))g(s(y∗),x))
Pull the other indicator function out.
=(χQ(s(y∗))ϖk∗)(λx.χx∈Q(y∗)g(s(y∗),x))
Rewrite with k∗
=k∗(s(y∗))(λx.χx∈Q(y∗)g(s(y∗),x))
Use an inequality to get rid of the indicator function
≤k∗(s(y∗))(λx.g(s(y∗),x))
Rewrite it a bit
=δs(y∗)(λy.k∗(y)(λx.g(y,x)))
Swap out k∗(y) for Ξ(y), the latter is larger
≤δs(y∗)(λy.Ξ(y)(λx.g(y,x)))
Swap out δs(y∗) for ⊤Γ, the latter is larger
≤⊤Γ(λy.Ξ(y)(λx.g(y,x)))=(⊤Γ⋉Ξ)(λyx.g(y,x))
Abbreviate
=Θ(λyx.g(y,x))
And bam, endofunction condition is shown, the entire proof goes through now. ■
Corollary 4.3: Suppose that for any d∈D and π:H→A s.t. d∈supp W(π), it holds that dCπ. That is, the observations W predicts to receive from the computer are consistent with the chosen policy. Let L:D→R be a Cartesian loss function and π:H→A a policy. Then, (prelΓBr(ΘW)∩Cπfair)(Lphys)=W(π;L)
I’m going to be proceeding very cautiously here. First off, make our π value visually distinct by swapping it out for π∗
(prelΓBr(ΘW)∩Cπ∗fair)(Lphys)
Now, by the identifications we made earlier, we can identify Γ with AH, the space of policies. Using that to unpack the function a little bit, we have
=(prelΓBr(ΘW)∩Cπ∗fair)(λπα.Lphys(π,α))
Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack Cπ∗fair, we get
=(prelΓBr(ΘW))(λπα.χ∀h∈Hπ,α:Gπ(h)=π∗(h)Lphys(π,α))
Apply that Gπ(h) is “what would the agent do on h if the agent is copying the behavior of π”, so we can rephrase as:
=(prelΓBr(ΘW))(λπα.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α))
Pull off the projection, and use d for a destiny in D.
=Br(ΘW)(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α))
At this point, we use that ΘW:=⊤Γ⋉W, that W is a PUCK over Q0, and Proposition 4.1 to go
=[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α))
Before we can remove the downwards closure, we’ll want to verify the function is monotone. So, we’ll want to start unpacking the physicalist loss next. Applying definition 3.1, and using d′ instead of g to remember it’s a destiny, we have
=[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Xπ,αmaxd′:ha⊑d′L(d′))
Next up is unpacking Xπ,α. Using definition 3.1, it’s
=[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Hπ,α×A∧(∀π′∈α:Gπ′(h)=a)maxd′:ha⊑d′L(d′))
At this point, we can, again, treat Gπ′(h) the same as π′(h).
=[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′))
And now we need to take a moment to show that Hπ,α gets smaller when α gets larger. Applying definition 1.5, the event h∈Hπ,α unpacks as
(∀h′a⊏h,π′∈α:Gπ′(h′)=a)∧(∃d′:h⊏d′∧d′Cπ)
Now, if α becomes a larger set, then it gets harder for the first condition to be fulfilled, so the set Hπ,α shrinks. Since this happens, a bigger α makes it more difficult for the prerequisite of the implication in the indicator function to be fulfilled, so the implication is more likely to hold. Further, the minimization is taking place over a smaller set, so the loss goes up. So our function is monotone in α, and we can remove the downwards closure.
=(⊤Γ⋉W⋉Q−10)(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′))
Unpacking the semidirect product, it is
=⊤Γ(λπ.W(π)(λd.δQ−10(d)(λα.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′))))
Substituting in the dirac-delta everywhere that α is, we get
=⊤Γ(λπ.W(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h)minha:ha∈Hπ,Q−10(d)×A∧(∀π′∈Q−10(d):π′(h)=a)maxd′:ha⊑d′L(d′)))
Now, Q−10(d) is the set of policies π′ s.t. π′Q0d, i.e. the “this policy is consistent with this destiny” relation. Also, let’s swap out ⊤Γ for maximization over π.
=maxπW(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h)minha:ha∈Hπ,Q−10(d)×A∧(∀π′Q0d:π′(h)=a)maxd′:ha⊑d′L(d′))
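To fix intuitions about Q−10(d), here is a toy encoding (the whole representation is invented for this sketch) of “the set of policies consistent with a destiny”:

```python
# A destiny is an alternating list of (action, observation) pairs; a policy
# maps each history (a tuple of such pairs) to the action it takes next.
d = [("a1", "o1"), ("a2", "o2")]

def consistent(policy, destiny):            # the relation pi' Q0 d
    hist = ()
    for a, o in destiny:
        if policy.get(hist) != a:
            return False
        hist = hist + ((a, o),)
    return True

pi_good = {(): "a1", (("a1", "o1"),): "a2"}   # plays d's actions along d
pi_bad  = {(): "a1", (("a1", "o1"),): "a3"}   # deviates at the second action

Q0_inv_d = [p for p in (pi_good, pi_bad) if consistent(p, d)]
assert Q0_inv_d == [pi_good]
```

The point of the next few paragraphs is that such policies agree with d along d but are unconstrained off d, which is what pins the minimum down to the prefixes ha⊑d.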
Now, we’re going to try to address that minimum, and show that the only ha that fulfill the conditions are exactly those ha⊑d. This requires showing that ha⊑d is a sufficient condition to fulfill the relevant properties, and then to show that ha⋢d implies a failure of one of the properties.
So, first up. Assume ha⊑d. Then, for any π′, π′Q0d and ha⊑d must imply that π′(h)=a, that’s what policy consistency means. Also, h∈Hπ,Q−10(d) unpacks as the two conditions
∀h′a′,π′:h′a′⊏h∧π′Q0d→π′(h′)=a′
∃d′:h⊏d′∧d′Cπ
As for the first condition, clearly, if π′ is consistent with d, it’s consistent with ha because ha⊑d, and so it must be consistent with any prefix of ha, so the first condition holds.
For the second condition, d is a valid choice, because we assumed ha⊑d, and dCπ always holds, because W(π) always being supported on d s.t. dCπ was one of our problem assumptions.
So, we have one implication direction down. Now for the reverse implication direction. Assume ha⋢d. Then there are two possibilities. The first possibility is that ha first diverges from d on an observation. The second possibility is that ha first diverges from d on an action.
For the first possibility, it’s possible to make two policies which are consistent with d but also differ in their actions on history h, because h isn’t a prefix of d if ha first differs from d on an observation. So the condition ∀π′:π′Q0d→π′(h)=a can’t hold, and such an ha doesn’t make it into the minimization.
For the second possibility, it’s ruled out by either the condition for h∈Hπ,Q−10(d) that goes
∀h′a′,π′:h′a′⊏h∧π′Q0d→π′(h′)=a′
or the extra condition that
∀π′:π′Q0d→π′(h)=a
applied to the first a-history prefix that deviates from d, because π′Q0d implies that π′ plays whatever action d dictates at that history, which contradicts the deviating action.