## Proofs Section 1.1 (Initial results to LF-duality)

Fair upfront warning: This is not a particularly readable proof section (though much better than Section 2 about belief functions). There’s dense notation, logical leaps due to illusion of transparency since I’ve spent a month getting fluent with these concepts, and a relative lack of editing since it’s long. If you really want to read this, I’d suggest PM-ing me to get a link to MIRIxDiscord, where I’d be able to guide you through it and answer questions.

Proposition 1: If f∈C(X,[0,1]), then f+:(m,b)↦m(f)+b is a positive functional on Msa(X).

Proof Sketch: We just check three conditions: linearity, nonnegativity on Msa(X), and continuity.

Linearity proof: Using a,a′ for constants,

f+(a(m,b)+a′(m′,b′))=f+(am+a′m′,ab+a′b′)=(am+a′m′)(f)+ab+a′b′

=a(m(f)+b)+a′(m′(f)+b′)=af+(m,b)+a′f+(m′,b′)

So we have verified that f+(aM+a′M′)=af+(M)+a′f+(M′) and we have linearity.

Positivity proof: An sa-measure M, writeable as (m,b), has m uniquely writeable as the sum of a positive measure m+ (all the positive regions) and a negative measure m− (all the negative regions) by the Jordan Decomposition Theorem, and b+m−(1)≥0. So,

f+(M)=m(f)+b=m+(f)+m−(f)+b≥0+m−(1)+b≥0

The first ≥ holds because 0≤f≤1: m+(f)≥0, and since m− is a negative measure and f≤1, m−(f)≥m−(1). The second ≥ is the defining condition on how m− relates to b.
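(If you want to poke at this chain numerically: here's a tiny sanity check with X a three-point space, so measures are just vectors. The helper names `f_plus`, `m_plus`, `m_minus` and the specific numbers are made up for this sketch; only the inequality chain is from the proof above.)

```python
# Sanity check of the positivity chain on a finite X = {0,1,2}:
# measures are vectors, and an sa-measure (m,b) satisfies b >= 0 and b + m-(1) >= 0.

def m_plus(m):   # positive part of the Jordan decomposition
    return [max(x, 0.0) for x in m]

def m_minus(m):  # negative part (a negative measure)
    return [min(x, 0.0) for x in m]

def f_plus(f, M):  # the functional (m,b) -> m(f) + b
    m, b = M
    return sum(mi * fi for mi, fi in zip(m, f)) + b

M = ([0.4, -0.7, 0.3], 1.5)          # sa-measure: 1.5 + (-0.7) >= 0
f = [0.2, 0.9, 0.5]                  # f in C(X,[0,1])

m, b = M
# the chain from the proof: m+(f) >= 0, m-(f) >= m-(1), so m(f)+b >= m-(1)+b >= 0
assert sum(x * y for x, y in zip(m_plus(m), f)) >= 0
assert sum(x * y for x, y in zip(m_minus(m), f)) >= sum(m_minus(m))
assert f_plus(f, M) >= sum(m_minus(m)) + b >= 0
```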

Continuity proof: Fix a sequence (mn,bn) converging to (m,b). Obviously the b part converges, so now we just need to show that mn(f) converges to m(f). The metric we have on the space of finite signed measures is the KR-metric, under which mn→m implies mn(f)→m(f) for bounded continuous f, which is exactly what we want. This only works for continuous f, not general f.

Theorem 1: Every positive functional on Msa(X) can be written as (m,b)↦c(m(f)+b), where c≥0 and f∈C(X,[0,1]).

Proof Sketch: The first part is showing that it’s impossible to have a positive functional where the b term doesn’t matter, without the positive functional being the one that maps everything to 0. The second part of the proof is recovering our f by applying the positive functional to Dirac-delta measures δx, to see what the function must be on point x.

Part 1: Let’s say f+ isn’t 0, ie there’s some nonzero (m,b) pair where f+(m,b)>0, and yet f+(0,1)=0 (which, by linearity, means that f+(0,b)=0 for all b). We’ll show that this situation is impossible.

Then, 0<f+(m,b)=f+(m+,0)+f+(m−,b) by our starting assumption, and Jordan decomposition of m, along with linearity of positive functionals. Now, f+(m−,b)+f+(−2(m−),0)=f+(−(m−),b) because positive functionals are linear, and everything in that above equation is an sa-measure (flipping a negative measure makes a positive measure, which doesn’t impose restrictions on the b term except that it be ≥0). And so, by nonnegativity of positive functionals on sa-measures, f+(m−,b)≤f+(−(m−),b). Using this, we get

f+(m+,0)+f+(m−,b)≤f+(m+,0)+f+(−(m−),b)

=f+(m+,0)+f+(−(m−),0)+f+(0,b)=f+(m+,0)+f+(−(m−),0)

Another use of linearity was invoked for the first = in the second line, and then the second = made use of our assumption that f+(0,b)=0 for all b.

At this point, we have derived that 0<f+(m+,0)+f+(−(m−),0). Both m+ and −(m−) are positive measures, so there exists some positive measure m′ where f+(m′,0)>0.

Now, observe that, for all b, 0=f+(0,b)=f+(m′,0)+f+(−(m′),b)

Let b be sufficiently huge to make (−(m′),b) into an sa-measure. Also, since f+(m′,0)>0, f+(−(m′),b)<0, which is impossible because positive functionals are nonnegative on all sa-measures. Contradiction. Due to the contradiction, if there’s a nonzero positive functional, it must assign f+(0,1)>0, so let f+(0,1) be our c term.

Proof part 2: Let’s try to extract our f. Let f(x):=f+(δx,0)/f+(0,1). This is just recovering the value of the hypothesized f on x by feeding our positive functional the measure δx that assigns measure 1 to x and nothing else, and scaling. Now, we just have to verify that this f is continuous and lies in [0,1].

For continuity, let xn limit to x. By the KR-metric we’re using, (δxn,0) limits to (δx,0). By continuity of f+, f+(δxn,0) limits to f+(δx,0). Therefore, f(xn) limits to f(x) and we have continuity.

For a lower bound, f≥0, because f(x) is a ratio of two nonnegative numbers, and the denominator isn’t 0.

Now we just have to show that f≤1. For contradiction, assume there’s an x where f(x)>1. Then f+(δx,0)/f+(0,1)>1, so f+(δx,0)>f+(0,1), and in particular, f+(0,1)−f+(δx,0)<0.

But then, f+(−(δx),1)+f+(δx,0)=f+(0,1), so f+(−(δx),1)=f+(0,1)−f+(δx,0)<0

However, (−(δx),1) is an sa-measure, because −δx(1)+1=0, and must have nonnegative value, so we get a contradiction. Therefore, f∈C(X,[0,1]).

To wrap up, we can go:

f+(m,b)=f+(m,0)+f+(0,b)=(f+(0,1)/f+(0,1))(∫X(f+(δx,0))dm+f+(0,b))

=f+(0,1)(∫X(f+(δx,0)/f+(0,1))dm+f+(0,b)/f+(0,1))=c(∫Xf(x)dm+b)=c(m(f)+b)

And c≥0, and f∈C(X,[0,1]), so we’re done.
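As a concrete finite-X illustration of what Theorem 1 claims: if we hand-build a functional of the form c(m(f)+b), then c is recoverable as the value on (0,1) and f is recoverable from Dirac measures, exactly as in the proof. (The names `phi`, `delta` and the specific numbers are made up for this sketch.)

```python
# Finite-X illustration of Theorem 1: recovering c and f from a positive
# functional phi(m,b) = c*(m(f)+b), with X a three-point space.

c = 2.5
f = [0.1, 0.6, 1.0]                       # hidden f in C(X,[0,1])

def phi(m, b):
    return c * (sum(mi * fi for mi, fi in zip(m, f)) + b)

def delta(x, n=3):                        # Dirac measure at point x
    return [1.0 if i == x else 0.0 for i in range(n)]

c_recovered = phi([0.0, 0.0, 0.0], 1.0)   # phi(0,1) = c
f_recovered = [phi(delta(x), 0.0) / c_recovered for x in range(3)]

assert abs(c_recovered - c) < 1e-12
assert all(abs(fr - fx) < 1e-12 for fr, fx in zip(f_recovered, f))
```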

Lemma 1 (Compactness Lemma): Fixing some nonnegative constants λ◯ and b◯, the set of sa-measures where m+(1)∈[0,λ◯] and b∈[0,b◯] is compact. Further, if a set lacks an upper bound on m+(1) or on b, it’s not compact.

Proof Sketch: We fix an arbitrary sequence of sa-measures, and then use the fact that closed intervals are compact and the space ΔX is a compact complete metric space (since X is) to isolate a suitable convergent subsequence. Since all sequences have a limit point, the set is compact. Then, we go in the other direction, and get a sequence with no limit points assuming either a lack of upper bounds on m+(1), or a lack of upper bounds on b.

Proof: Fix some arbitrary sequence Mn wandering about within this space, which breaks down into (m+n,0)+(m−n,bn), and then, since all measures are just a probability distribution scaled by the constant m(1), it further breaks down into (m+n(1)⋅μn,0)+(m−n(1)⋅μ′n,bn). Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b◯,0].

Now, what we can do is extract a subsequence where bn, m+n(1), m−n(1), μn, and μ′n all converge, by Tychonoff’s Theorem (finite product, no axiom of choice required). Our three number sequences are all confined to a bounded interval, and our two probability sequences are wandering around within ΔX, which is a compact complete metric space if X is. The limit of this subsequence is a limit point of the original sequence, since all its components are arbitrarily close to the components that make up Mn for large enough n in our subsequence.

The limiting value of m+(1) and b both obey their respective bounds, and the cone of sa-measures is closed, so the limit point is an sa-measure and respects the bounds too. Therefore the set is compact, because all sequences of points in it have a limit point.

In the other direction, assume a set B has unbounded b values. Then we can fix a sequence (mn,bn)∈B where bn increases without bound, so the sa-measures can’t converge. The same applies to all subsequences, so there’s no limit point, so B isn’t compact.

Now, assume a set B has bounded b values, call the least upper bound b⊙, but the value of m+(1) is unbounded. Fix a sequence (mn,bn)∈B where m+n(1) increases without bound. Assume a convergent subsequence exists. Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b⊙,0]. Then, because mn(1)=m+n(1)+m−n(1)≥m+n(1)−b⊙, and m+n(1) increases without bound, mn(1) must be unbounded above. However, in order for a subsequence of the mn to limit to some m, we’d need limn→∞mn(1)=m(1)<∞ along it, which results in a contradiction. Therefore, said convergent subsequence doesn’t exist, and B is not compact.

Put together, we have a necessary-and-sufficient condition for a closed subset of Msa(X) to be compact: there must be upper bounds on both m+(1) and b.

Lemma 2: The upper completion of a closed set of sa-measures is closed.

Proof sketch: We’ll take a convergent sequence (mn,bn) in the upper completion of B that limits to (m,b), and show that, in order for it to converge, the same sorts of bounds as the Compactness Lemma uses must apply. Then, breaking down (mn,bn) into (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure, we’ll transfer these Compactness-Lemma-enabling bounds to the sequences (mBn,bBn) and (m∗n,b∗n), to get that they’re both wandering around in a compact set. Then, we just take a convergent subsequence of both, add the two limit points together, and get our limit point (m,b), witnessing that it’s in the upper completion of B.

Proof: Let (mn,bn)∈B+Msa(X) limit to some (m,b). A convergent sequence (plus its one limit point) is a compact set of points, so, by the Compactness Lemma, there must be a b◯ and λ◯ that are upper bounds on the bn and m+n(1) values, respectively.

Now, for all n, break down (mn,bn) as (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure.

Because bBn+b∗n=bn≤b◯, we can bound the bBn and b∗n quantities by b◯. This transfers into a −b◯ lower bound on mB−n(1) and m∗−n(1), respectively.

Now, we can go:

mB+n(1)+mB−n(1)+m∗+n(1)+m∗−n(1)=mBn(1)+m∗n(1)=mn(1)

=m+n(1)+m−n(1)≤m+n(1)≤λ◯

Using worst-case values for mB−n(1) and m∗−n(1), we get:

mB+n(1)+m∗+n(1)−2b◯≤λ◯

mB+n(1)+m∗+n(1)≤λ◯+2b◯

So, mB+n(1) and m∗+n(1) are each bounded above by λ◯+2b◯.

Due to the sequences (mBn,bBn) and (m∗n,b∗n) respecting bounds on b and m+(1) (b◯ and λ◯+2b◯ respectively), and wandering around within the closed sets B and Msa(X) respectively, we can use the Compactness Lemma and Tychonoff’s theorem (finite product, no axiom of choice needed) to go “hey, there’s a subsequence where both (mBn,bBn) and (m∗n,b∗n) converge, call the limit points (mB,bB) and (m∗,b∗). Since B and Msa(X) are closed, (mB,bB)∈B, and (m∗,b∗)∈Msa(X).”

Now, does (mB,bB)+(m∗,b∗)=(m,b)? Well, for any ϵ, there’s some really large n where d((mBn,bBn),(mB,bB))<ϵ, d((m∗n,b∗n),(m∗,b∗))<ϵ, and d((mn,bn),(m,b))<ϵ. Then, we can go:

d((m,b),(mB,bB)+(m∗,b∗))≤d((m,b),(mn,bn))+d((mn,bn),(mB,bB)+(m∗,b∗))

=d((m,b),(mn,bn))+d((mBn,bBn)+(m∗n,b∗n),(mB,bB)+(m∗,b∗))

=d((m,b),(mn,bn))+||((mBn,bBn)+(m∗n,b∗n))−((mB,bB)+(m∗,b∗))||

=d((m,b),(mn,bn))+||((mBn,bBn)−(mB,bB))+((m∗n,b∗n)−(m∗,b∗))||

≤d((m,b),(mn,bn))+||(mBn,bBn)−(mB,bB)||+||(m∗n,b∗n)−(m∗,b∗)||

=d((m,b),(mn,bn))+d((mBn,bBn),(mB,bB))+d((m∗n,b∗n),(m∗,b∗))<3ϵ

So, regardless of ϵ, d((m,b),(mB,bB)+(m∗,b∗))<3ϵ, so (mB,bB)+(m∗,b∗)=(m,b). So, we’ve written (m,b) as a sum of an sa-measure in B and an sa-measure, certifying that (m,b)∈B+Msa(X), so B+Msa(X) is closed.

Proposition 2: For closed convex nonempty B, B+Msa(X)={M|∀f+∃M′∈B:f+(M)≥f+(M′)}

Proof sketch: Show both subset inclusion directions. One is very easy; then we assume the second direction is false, and invoke the Hahn-Banach theorem to separate a point in the latter set from the former set. Then we show that the separating functional is a positive functional, so we have a positive functional where the additional point underperforms everything in B+Msa(X), which is impossible by the definition of the latter set.

Easy direction: We will show that B+Msa(X)⊆{M|∀f+∃M′∈B:f+(M)≥f+(M′)}

This is because any M∈B+Msa(X) can be written as M=MB+M∗ with MB∈B and M∗∈Msa(X). Let MB be our M′ of interest. Then it is indeed true that, for all f+, f+(M)=f+(MB)+f+(M∗)≥f+(MB)

Hard direction: Assume by contradiction that

B+Msa(X)⊂{M|∀f+∃M′∈B:f+(M)≥f+(M′)}

Then there’s some M where ∀f+∃M′∈B:f+(M)≥f+(M′) and M∉B+Msa(X). B+Msa(X) is the upper completion of a closed set, so by Lemma 2, it’s closed, and since it’s the Minkowski sum of convex sets, it’s convex.

Now, we can use the variant of the Hahn-Banach theorem from the Wikipedia article on “Hahn-Banach theorem”, in the “separation of a closed and compact set” section. Our single point M forms a compact, convex, nonempty set disjoint from the closed convex set B+Msa(X). Banach spaces are locally convex, so we can invoke Hahn-Banach separation.

Therefore, there’s some continuous linear functional ϕ s.t. ϕ(M)<infM′∈(B+Msa(X))ϕ(M′)

We will show that this linear functional is actually a positive functional!

Assume there’s some sa-measure M∗ where ϕ(M∗)<0. Then we can pick an arbitrary MB∈B, and consider ϕ(MB+cM∗), where c is extremely large. MB+cM∗ lies in B+Msa(X), but ϕ(MB+cM∗)=ϕ(MB)+cϕ(M∗) is extremely negative for large enough c, undershooting ϕ(M), which is impossible. So ϕ is a positive functional.

However, ϕ(M)<infM′∈(B+Msa(X))ϕ(M′), so ϕ(M)<infM′∈Bϕ(M′). But also, M fulfills the condition ∀f+∃M′∈B:f+(M)≥f+(M′), because of the set it came from. So, there must exist some M′∈B where ϕ(M)≥ϕ(M′). But, we have a contradiction, because ϕ(M)<infM′∈Bϕ(M′).

So, there cannot be any point in {M|∀f+∃M′∈B:f+(M)≥f+(M′)} that isn’t in B+Msa(X). This establishes equality.
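The easy direction of Proposition 2 can be watched numerically on a two-point X: adding an sa-measure to a point MB can only increase the value of every positive functional. (The particular MB and M∗ below are made-up example values; B itself plays no role in this check.)

```python
# Easy direction of Proposition 2 on a two-point X: if M = MB + M*, with M* an
# sa-measure, then f+(M) >= f+(MB) for every f in C(X,[0,1]).
def f_plus(f, M):
    m, b = M
    return sum(mi * fi for mi, fi in zip(m, f)) + b

MB = ([0.3, 0.5], 0.2)                       # a hypothetical point of B
Mstar = ([0.4, -0.2], 0.3)                   # sa-measure: 0.3 + (-0.2) >= 0
M = ([x + y for x, y in zip(MB[0], Mstar[0])], MB[1] + Mstar[1])

grid = [0.0, 0.25, 0.5, 0.75, 1.0]           # sample functions f in [0,1]^X
for f0 in grid:
    for f1 in grid:
        assert f_plus([f0, f1], M) >= f_plus([f0, f1], MB)
```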

Lemma 3: For any closed set B⊆Msa(X) and point M∈B, the set ({M}−Msa(X))∩B is nonempty and compact.

Proof: It’s easy to verify nonemptiness, because M is in the set. Also, it’s closed because it’s the intersection of two closed sets. B was assumed closed, and the other part is the Minkowski sum of {M} and −Msa(X), which is closed because it’s just a shift of −Msa(X) (via a single point), and −Msa(X) is closed because it’s −1 times a closed set.

We will establish a bound on the m+(1) and b values of anything in the set, which lets us invoke the Compactness Lemma to show compactness, because it’s a closed subset of a compact set.

Note that if M′∈({M}−Msa(X))∩B, then M′=M−M∗, so M′+M∗=M. Rewrite this as (m′,b′)+(m∗,b∗)=(m,b)

Because b′+b∗=b, we can bound b′ and b∗ by b. This transfers into a −b lower bound on m′−(1) and m∗−(1). Now, we can go:

m′+(1)+m′−(1)+m∗+(1)+m∗−(1)=m′(1)+m∗(1)=m(1)

=m+(1)+m−(1)≤m+(1)

Using worst-case values for m′−(1) and m∗−(1), we get:

m′+(1)+m∗+(1)−2b≤m+(1)

m′+(1)≤m′+(1)+m∗+(1)≤m+(1)+2b

So, we have an upper bound of m+(1)+2b on m′+(1), and an upper bound of b on b′. Further, (m′,b′) was arbitrary in ({M}−Msa(X))∩B, so we have our bounds. This lets us invoke the Compactness Lemma, and conclude that said closed set is compact.

Lemma 4: If ≥ is a partial order on B where M′≥M iff there’s some sa-measure M∗ where M=M′+M∗, then

∃M′>M↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)↔M is not minimal in B

Proof: ∃M′>M↔∃M′≠M:M′≥M

Also, M′≥M↔(M′,M∈B∧∃M∗:M=M′+M∗)

Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X))

Putting all this together, we get

(∃M′>M)↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)

And we’re halfway there. Now for the second half.

M is not minimal in B↔M∈B∧(∃M′∈B:M′≠M∧(∃M∗:M=M′+M∗))

Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X))

Putting this together, we get

M is not minimal in B↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)

And the result has been proved.

Theorem 2: Given a nonempty closed set B, the set of minimal points Bmin is nonempty and all points in B are above a minimal point.

Proof sketch: First, we establish a partial order that’s closely tied to the ordering on B, but flipped around, so minimal points in B are maximal elements. We show that it is indeed a partial order, letting us leverage Lemma 4 to translate between the partial order and the set B. Then, we show that every chain in the partial order has an upper bound via Lemma 3 and compactness arguments, letting us invoke Zorn’s lemma to show that everything in the partial order is below a maximal element. Then, we just do one last translation to show that minimal points in B perfectly correspond to maximal elements in our partial order.

Proof: First, impose a partial order on B, where M′≥M iff there’s some sa-measure M∗ where M=M′+M∗. Notice that this flips the order. If an sa-measure is “below” another sa-measure in the sa-measure addition sense, it’s above that sa-measure in this ordering. So a minimal point in B would be maximal in the partial order. We will show that it’s indeed a partial order.

Reflexivity is immediate. M=M+(0,0), so M≥M.

For transitivity, assume M′′≥M′≥M. Then there’s some M∗ and M′∗ s.t. M=M′+M∗, and M′=M′′+M′∗. Putting these together, we get M=M′′+(M∗+M′∗), and adding sa-measures gets you an sa-measure, so M′′≥M.

For antisymmetry, assume M′≥M and M≥M′. Then M=M′+M∗, and M′=M+M′∗. By substitution, M=M+(M∗+M′∗), so M′∗=−M∗. For all positive functionals, f+(M′∗)=f+(−M∗)=−f+(M∗), and since positive functionals are always nonnegative on sa-measures, the only way this can happen is if M∗ and M′∗ are 0, showing that M=M′.

Anyways, since we’ve shown that it’s a partial order, all we now have to do is show that every chain has an upper bound in order to invoke Zorn’s lemma to show that every point in B lies below some maximal element.

Fix some ordinal-indexed chain Mγ, and associate each of them with the set Sγ=({Mγ}+(−Msa(X)))∩B, which is compact by Lemma 3 and always contains Mγ.

The collection of Sγ also has the finite intersection property, because, fixing finitely many of them, we can consider a maximal γ∗, and Mγ∗ is in every associated set by:

Case 1: Some other Mγ equals Mγ∗, so Sγ=Sγ∗ and Mγ∗∈Sγ∗=Sγ.

Case 2: Mγ∗>Mγ, and by Lemma 4, Mγ∗∈({Mγ}−Msa(X))∩B.

Anyways, since all the Sγ are compact, and have the finite intersection property, we can intersect them all and get a nonempty set containing some point M∞. M∞ lies in B, because all the sets we intersected were subsets of B. Also, because M∞∈({Mγ}−Msa(X))∩B for all γ in our chain, then if M∞≠Mγ, Lemma 4 lets us get M∞>Mγ, and if M∞=Mγ, then M∞≥Mγ. Thus, M∞ is an upper bound for our chain.

By Zorn’s Lemma, because every chain has an upper bound, there are maximal elements in B, and every point in B has a maximal element above it.

To finish up, use Lemma 4 to get: M is maximal↔¬∃M′>M↔M is minimal in B
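Theorem 2 can be sanity-checked by brute force when B is a finite set of sa-measures on a finite X: compute the minimal points directly from the order, and check that every point of B is above one of them. (The set B below is an arbitrary made-up example on a two-point X.)

```python
# Brute-forcing Theorem 2 for a finite B on a two-point X: minimal points of B
# under the order "M is above M' when M - M' is an sa-measure".
def is_sa(m, b):
    return b >= 0 and b + sum(min(x, 0.0) for x in m) >= 0

def below(M1, M2):   # M2 = M1 + (an sa-measure)
    d = [x - y for x, y in zip(M2[0], M1[0])]
    return is_sa(d, M2[1] - M1[1])

B = [([1.0, 0.0], 0.0), ([1.0, 0.0], 0.5), ([1.2, 0.3], 0.1), ([0.5, 0.5], 0.0)]
Bmin = [M for M in B if not any(M2 != M and below(M2, M) for M2 in B)]

# the two points with nothing strictly below them survive...
assert Bmin == [([1.0, 0.0], 0.0), ([0.5, 0.5], 0.0)]
# ...and every point of B is above some minimal point, as Theorem 2 says
assert all(any(below(Mm, M) for Mm in Bmin) for M in B)
```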

Proposition 3: Given an f∈C(X,[0,1]) and a nonempty closed B, inf(m,b)∈B(m(f)+b)=inf(m,b)∈Bmin(m(f)+b)

Direction 1: since Bmin is a subset of B, we get one direction easily, that

inf(m,b)∈B(m(f)+b)≤inf(m,b)∈Bmin(m(f)+b)

Direction 2: Take an M∈B. By Theorem 2, there is an Mmin∈Bmin s.t. M=Mmin+M∗. Applying the functional (m,b)↦m(f)+b (a positive functional by Proposition 1), we get that m(f)+b≥mmin(f)+bmin. Because every point in B has a point in Bmin which scores as low or lower according to this positive functional,

inf(m,b)∈B(m(f)+b)≥inf(m,b)∈Bmin(m(f)+b)

And this gives us our desired equality.
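The same brute-force setup checks Proposition 3 numerically: the inf over B and the inf over Bmin agree for a few choices of f. (B and the test functions are made-up examples on a two-point X.)

```python
# Proposition 3 on a small finite example: the inf of m(f)+b over B matches
# the inf over B's minimal points alone.
def is_sa(m, b):
    return b >= 0 and b + sum(min(x, 0.0) for x in m) >= 0

def below(M1, M2):   # M2 = M1 + (an sa-measure)
    d = [x - y for x, y in zip(M2[0], M1[0])]
    return is_sa(d, M2[1] - M1[1])

def ev(f, M):        # the positive functional m(f)+b from Proposition 1
    return sum(mi * fi for mi, fi in zip(M[0], f)) + M[1]

B = [([1.0, 0.0], 0.0), ([1.0, 0.0], 0.5), ([1.2, 0.3], 0.1), ([0.5, 0.5], 0.0)]
Bmin = [M for M in B if not any(M2 != M and below(M2, M) for M2 in B)]

for f in ([0.3, 0.8], [1.0, 0.0], [0.5, 0.5]):
    assert min(ev(f, M) for M in B) == min(ev(f, M) for M in Bmin)
```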

Proposition 4: Given a nonempty closed convex B, Bmin=(Buc)min and (Bmin)uc=Buc

Proof: First, we’ll show Bmin=(Buc)min. We’ll use the characterization in terms of the partial order ≤ we used for the Zorn’s Lemma proof of Theorem 2. If a point M is in Buc, then it can be written as M=MB+M∗, so M≤MB. Since all points added in Buc lie below a preexisting point in B (according to the partial order from Theorem 2) the set of maximals (ie, set of minimal points) is completely unchanged when we add all the new points to the partial order via upper completion, so Bmin=(Buc)min.

For the second part, one direction is immediate. Bmin⊆B, so (Bmin)uc⊆Buc. For the reverse direction, take a point M∈Buc. It can be decomposed as MB+M∗, and then by Theorem 2, MB can be decomposed as Mmin+M′∗, so M=Mmin+(M∗+M′∗), so it lies in (Bmin)uc, and we’re done.

Theorem 3: If the nonempty closed convex sets A and B have Amin≠Bmin, then there is some f∈C(X,[0,1]) where EA(f)≠EB(f)

Proof sketch: We show that upper completion is idempotent, and then use that to show that the upper completions of A and B are different. Then, we can use Hahn-Banach to separate a point of A from Buc (or vice-versa), and show that the separating functional is a positive functional. Finally, we use Theorem 1 to translate from a separating positive functional to different expectation values of some f∈C(X,[0,1])

Proof: Phase 1 is showing that upper completion is idempotent. (Buc)uc=Buc. One direction of this is easy, Buc⊆(Buc)uc. In the other direction, let M∈(Buc)uc. Then we can decompose M into M′+M∗, where M′∈Buc, and decompose that into MB+M′∗ where MB∈B, so M=MB+(M∗+M′∗) and M∈Buc.

Now for phase 2, we’ll show that the minimal points of one set aren’t in the upper completion of the other set. Assume, for contradiction, that this is false, so Amin⊆Buc and Bmin⊆Auc. Then, by idempotence, Proposition 4, and our subset assumption,

Auc=(Amin)uc⊆(Buc)uc=Buc

Swapping the A and B, the same argument holds, so Auc=Buc, so (Buc)min=(Auc)min.

Now, using this and Proposition 4, Bmin=(Buc)min=(Auc)min=Amin.

But wait, this is a contradiction: we assumed that the minimal points of B and A weren’t the same! Therefore, either Bmin⊈Auc, or vice-versa. Without loss of generality, assume that Bmin⊈Auc.

Now for phase 3, Hahn-Banach separation to get a positive functional with different inf values. Take a point MB in Bmin that lies outside Auc. Now, use the Hahn-Banach separation of {MB} and Auc from the proof of Proposition 2, to get a linear functional ϕ (which can be demonstrated to be a positive functional by the same argument as in the proof of Proposition 2) where ϕ(MB)<infM∈Aucϕ(M). Since MB∈B and A⊆Auc, infM∈Bϕ(M)≤ϕ(MB)<infM∈Aucϕ(M)≤infM∈Aϕ(M), so infM∈Bϕ(M)≠infM∈Aϕ(M)

Said positive functional can’t be 0, otherwise both sides would be 0. Thus, by Theorem 1, ϕ((m,b))=a(m(f)+b) where a>0, and f∈C(X,[0,1]). Swapping this out, we get:

inf(m,b)∈Ba(m(f)+b)≠inf(m′,b′)∈Aa(m′(f)+b′)

inf(m,b)∈B(m(f)+b)≠inf(m′,b′)∈A(m′(f)+b′)

and then this is EB(f)≠EA(f). So, we have crafted our f∈C(X,[0,1]) which distinguishes the two sets, and we’re done.

Corollary 1: If two nonempty closed convex upper-complete sets A and B are different, then there is some f∈C(X,[0,1]) where EA(f)≠EB(f)

Proof: Either Amin≠Bmin, in which case we can apply Theorem 3 to separate them, or their sets of minimal points are the same. In the latter case, by Proposition 4 and upper-completeness, A=Auc=(Amin)uc=(Bmin)uc=Buc=B, contradicting the assumption that the two sets are different.

Theorem 4: If H is an infradistribution/bounded infradistribution, then h:f↦EH(f) is concave in f, monotone, uniformly continuous/Lipschitz, h(0)=0, h(1)=1, and if range(f)⊈[0,1], then h(f)=−∞

Proof sketch: h(0)=0, h(1)=1 is trivial, as is uniform continuity from the weak bounded-minimal condition. For concavity and monotonicity, it’s just some inequality shuffling, and for h(f)=−∞ if f∈C(X), f∉C(X,[0,1]), we use upper completion to have its worst-case value be arbitrarily negative. Lipschitzness is much more difficult, and comprises the bulk of the proof. We get a duality between minimal points and hyperplanes in C(X)⊕R, show that all the hyperplanes we got from minimal points have the same Lipschitz constant upper bound, and then show that the chunk of space below the graph of h itself is the same as the chunk of space below all the hyperplanes we got from minimal points. Thus, h has the same (or lesser) Lipschitz constant as all the hyperplanes chopping out stuff above the graph of h.

Proof: For normalization, h(1)=EH(1)=1 and h(0)=EH(0)=0 by normalization for H. Getting the uniform continuity condition from the weak-bounded-minimal condition on an infradistribution H is also trivial, because the condition just says f↦EH(f) is uniformly continuous, and that’s just h itself.

Let’s show that h is concave over C(X,[0,1]) first. We’re shooting for h(pf+(1−p)f′)≥ph(f)+(1−p)h(f′). To show this,

h(pf+(1−p)f′)=inf(m,b)∈H(m(pf+(1−p)f′)+b)=inf(m,b)∈H(p(m(f)+b)+(1−p)(m(f′)+b))

≥pinf(m,b)∈H(m(f)+b)+(1−p)inf(m,b)∈H(m(f′)+b)=ph(f)+(1−p)h(f′)

For monotonicity, let f′≥f. By Proposition 3, h(f)=inf(m,b)∈Hmin(m(f)+b), and by positive-minimals, every minimal point (m,b) has m a positive measure, so m(f′)≥m(f). Thus,

h(f′)=inf(m,b)∈Hmin(m(f′)+b)≥inf(m,b)∈Hmin(m(f)+b)=h(f)

And we’re done. The critical inequality in the middle came from all minimal points in an infradistribution having no negative component by positive-minimals, so swapping out a function for a greater function produces an increase in value.
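Both properties are easy to watch numerically for a toy finite H on a two-point X, chosen with all measure components nonnegative to mimic positive-minimals. (H and the test functions are made-up examples.)

```python
# Concavity and monotonicity of h(f) = inf over H of m(f)+b, for a toy finite H
# on a two-point X (all measure components nonnegative).
H = [([1.0, 0.0], 0.0), ([0.4, 0.6], 0.1), ([0.7, 0.3], 0.0)]

def h(f):
    return min(sum(mi * fi for mi, fi in zip(m, f)) + b for m, b in H)

f1, f2, p = [0.9, 0.1], [0.2, 0.8], 0.25
mix = [p * a + (1 - p) * c for a, c in zip(f1, f2)]
assert h(mix) >= p * h(f1) + (1 - p) * h(f2) - 1e-12   # concavity

f_lo, f_hi = [0.2, 0.1], [0.5, 0.6]                    # f_hi >= f_lo pointwise
assert h(f_hi) >= h(f_lo)                              # monotonicity
```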

Time for range(f)⊈[0,1]→h(f)=−∞. Let’s say there exists an x s.t. f(x)>1. We can take an arbitrary sa-measure (m,b)∈H, and consider (m,b)+c(−δx,1), where δx is the point measure that’s 1 on x, and c is extremely huge. The latter part is an sa-measure. But then, (m−cδx)(f)+(b+c)=m(f)+b+c(1−δx(f))=m(f)+b+c(1−f(x)). Since f(x)>1 and c is extremely huge, this is extremely negative. So, since there are sa-measures in H that make the function as negative as we wish, by upper-completeness, inf(m,b)∈H(m(f)+b)=−∞. A very similar argument can be done if there’s an x where f(x)<0; we just add in (cδx,0) to force arbitrarily negative values.
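To see the blow-up concretely: with a hypothetical f taking the value 1.5 at a point, each unit of c added via c(−δx,1) lowers m(f)+b by f(x)−1=0.5, so the inf over the upper completion is −∞. (The measure and numbers below are made up for this sketch.)

```python
# With f(0) = 1.5 > 1, adding c*(-delta_0, 1) to a point of H lowers m(f)+b
# by c*(f(0)-1) = 0.5*c, without bound.
f_bad = [1.5, 0.3]
m, b = [0.6, 0.4], 0.0   # a hypothetical sa-measure in H
x = 0

def value(c):
    mc = list(m)
    mc[x] -= c           # (m,b) + c*(-delta_x, 1) = (m - c*delta_x, b + c)
    return sum(mi * fi for mi, fi in zip(mc, f_bad)) + (b + c)

vals = [value(c) for c in (0.0, 10.0, 100.0, 1000.0)]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))   # strictly decreasing
assert value(1000.0) < -400
```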

Now for Lipschitzness, which is by far the worst of all. A minimal point (m,b) induces an affine function hm,b (kinda like a hyperplane) of the form hm,b(f)=m(f)+b. Regardless of (m,b), as long as it came from a minimal point in H, hm,b≥h for functions with range in [0,1], because (m,b)∈H, so h(f)=inf(m′,b′)∈H(m′(f)+b′)≤m(f)+b=hm,b(f).

## Proofs Section 1.1 (Initial results to LF-duality)

Fair upfront warning: This is not a particularly readable proof section (though much better than Section 2 about belief functions). There’s dense notation, logical leaps due to illusion of transparency since I’ve spent a month getting fluent with these concepts, and a relative lack of editing since it’s long. If you really want to read this, I’d suggest PM-ing me to get a link to MIRIxDiscord, where I’d be able to guide you through it and answer questions.

Proposition 1:Iff∈C(X,[0,1])thenf+:(m,b)↦m(f)+bis a positive functional onMsa(X).Proof Sketch: We just check three conditions. Linearity, being nonnegative on Msa(X), and continuity.

Linearity proof. Using a,a′ for constants,

f+(a(m,b)+a′(m′,b′))=f+(am+a′m′,ab+ab′)=(am+a′m′)(f)+ab+a′b′

=a(m(f)+b)+a′(m′(f)+b′)=af+(m,b)+a′f+(m′,b′)

So we have verified that f+(aM+a′M′)=af+(M)+a′f+(M′) and we have linearity.

Positivity proof: An sa-measure M, writeable as (m,b) has m uniquely writeable as a pair of finite measures m+ (all the positive regions) and a m− (all the negative regions) by the Jordan Decomposition Theorem, and b+m−(1)≥0. So,

f+(M)=m(f)+b=m+(f)+m−(f)+b≥0+m−(1)+b≥0

The first ≥ by 1≥f≥0, so the expectation of f is positive and m− is negative so taking the expectation of 1 is more negative. The second ≥ is by the condition on how m− relates to b.

Continuity proof: Fix a sequence (mn,bn) converging to (m,b). Obviously the b part converges, so now we just need to show that mn(f) converges to m(f). The metric we have on the space of finite signed measures is the KR-metric, which implies the thing we want. This only works for continuous f, not general f.

Theorem 1:Every positive functional onMsa(X)can be written as(m,b)↦c(m(f)+b), wherec≥0, andf∈C(X,[0,1])Proof Sketch: The first part is showing that it’s impossible to have a positive functional where the b term doesn’t matter, without the positive functional being the one that maps everything to 0. The second part of the proof is recovering our f by applying the positive functional to Dirac-delta measures δx, to see what the function must be on point x.

Part 1: Let’s say f+ isn’t 0, ie there’s some nonzero (m,b) pair where f+(m,b)>0, and yet f+(0,1)=0 (which, by linearity, means that f+(0,b)=0 for all b). We’ll show that this situation is impossible.

Then, 0<f+(m,b)=f+(m+,0)+f+(m−,b) by our starting assumption, and Jordan decomposition of m, along with linearity of positive functionals. Now, f+(m−,b)+f+(−2(m−),0)=f+(−(m−),b) because positive functionals are linear, and everything in that above equation is an sa-measure (flipping a negative measure makes a positive measure, which doesn’t impose restrictions on the b term except that it be ≥0). And so, by nonnegativity of positive functionals on sa-measures, f+(m−,b)≤f+(−(m−),b). Using this, we get

f+(m+,0)+f+(m−,b)≤f+(m+,0)+f+(−(m−),b)

=f+(m+,0)+f+(−(m−),0)+f+(0,b)=f+(m+,0)+f+(−(m−),0)

Another use of linearity was invoked for the first = in the second line, and then the second = made use of our assumption that f+(0,b)=0 for all b.

At this point, we have derived that 0<f+(m+,0)+f+(−(m−),0). Both of these are positive measures. So, there exists some positive measure m′ where f+(m′,0)>0.

Now, observe that, for all b, 0=f+(0,b)=f+(m′,0)+f+(−(m′),b)

Let b be sufficiently huge to make (−(m′),b) into an sa-measure. Also, since f+(m′,0)>0, f+(−(m′),b)<0, which is impossible because positive functionals are nonnegative on all sa-measures. Contradiction. Due to the contradiction, if there’s a nonzero positive functional, it must assign f+(0,1)>0, so let f+(0,1) be our c term.

Proof part 2: Let’s try to extract our f. Let f(x):=f+(δx,0)f+(0,1) This is just recovering the value of the hypothesized f on x by feeding our positive functional the measure δx that assigns 1 value to x and nothing else, and scaling. Now, we just have to verify that this f is continuous and in [0,1].

For continuity, let xn limit to x. By the KR-metric we’re using, (δxn,0) limits to (δx,0). By continuity of f+, f+(δxn,0) limits to f+(δx,0). Therefore, f(xn) limits to f(x) and we have continuity.

For a lower bound, f≥0, because f(x) is a ratio of two nonnegative numbers, and the denominator isn’t 0.

Now we just have to show that f≤1. For contradiction, assume there’s an x where f(x)>1. Then f+(δx,0)f+(0,1)>1, so f+(δx,0)>f+(0,1), and in particular, f+(0,1)−f+(δx,0)<0.

But then, f+(−(δx),1)+f+(δx,0)=f+(0,1), so f+(−(δx),1)=f+(0,1)−f+(δx,0)<0

However, (−(δx),1) is an sa-measure, because δx(1)+1=0, and must have nonnegative value, so we get a contradiction. Therefore, f∈C(X,[0,1]).

To wrap up, we can go:

f+(m,b)=f+(m,0)+f+(0,b)=f+(0,1)f+(0,1)(∫X(f+(δx,0))dm+f+(0,b))

=f+(0,1)(∫Xf+(δx,0)f+(0,1)dm+f+(0,b)f+(0,1))=c(∫Xf(x)dm+b)=c(m(f)+b)

And c≥0, and f∈C(X,[0,1]), so we’re done.

Lemma 1: Compactness Lemma:Fixing some nonnegative constantsλ◯andb◯, the set of sa-measures wherem+(1)∈[0,λ◯],b∈[0,b◯], is compact. Further, if a set lacks an upper bound onm+(1)or onb, it’s not compact.Proof Sketch: We fix an arbitrary sequence of sa-measures, and then use the fact that closed intervals are compact-complete and the space ΔX is compact-complete to isolate a suitable convergent subsequence. Since all sequences have a limit point, the set is compact. Then, we go in the other direction, and get a sequence with no limit points assuming either a lack of upper bounds on m+(1), or a lack of upper bounds on b.

Proof: Fix some arbitrary sequence Mn wandering about within this space, which breaks down into (m+n,0)+(m−n,bn), and then, since all measures are just a probability distribution scaled by the constant m(1), it further breaks down into (m+n(1)⋅μn,0)+(m−n(1)⋅μ′n,bn). Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b◯,0].

Now, what we can do is extract a subseqence where bn ,m+n(1), m−n(1), μn, and μ′n all converge, by Tychonoff’s Theorem (finite product, no axiom of choice required) Our three number sequences are all confined to a bounded interval, and our two probability sequences are wandering around within ΔX which is a compact complete metric space if X is. The limit of this subsequence is a limit point of the original sequence, since all its components are arbitrarily close to the components that make up Mn for large enough n in our subsequence.

The limiting value of m+(1) and b both obey their respective bounds, and the cone of sa-measures is closed, so the limit point is an sa-measure and respects the bounds too. Therefore the set is compact, because all sequences of points in it have a limit point.

In the other direction, assume a set B has unbounded b values. Then we can fix a sequence (mn,bn)∈B where bn increases without bound, so the a-measures can’t converge. The same applies to all subsequences, so there’s no limit point, so B isn’t compact.

Now, assume a set B has bounded b values, call the least upper bound b⊙, but the value of m+(1) is unbounded. Fix a sequence (mn,bn)∈B where m+n(1) is unbounded above. Assume a convergent subsequence exists. Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b⊙,0]. Then because mn(1)=m+n(1)+m−n(1)≥m+n(1)−b⊙, and the latter quantity is finite, mn(1) must be unbounded above. However, in order for the mn to limit to some m, limn→∞mn(1)=m(1), which results in a contradiction. Therefore, said convergent subsequence doesn’t exist, and B is not compact.

Put together, we have a necessary-and-sufficient condition for a closed subset of Msa(X) to be compact. There must be an upper bound on b and m+(1), respectively.

Lemma 2:The upper completion of a closed set of sa-measures is closed.Proof sketch: We’ll take a convergent sequence (mn,bn) in the upper completion of B that limits to (m,b), and show that, in order for it to converge, the same sorts of bounds as the Compactness Lemma uses must apply. Then, breaking down (mn,bn) into (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure, we’ll transfer these Compactness-Lemma-enabling bounds to the sequences (mBn,bBn) and (m∗n,b∗n), to get that they’re both wandering around in a compact set. Then, we just take a convergent subsequence of both, add the two limit points together, and get our limit point (m,b), witnessing that it’s in the upper completion of B.

Proof: Let (mn,bn)∈B+Msa(X) limit to some (m,b). A convergent sequence (plus its one limit point) is a compact set of points, so, by the Compactness Lemma, there must be a b◯ and λ◯ that are upper bounds on the bn and m+n(1) values, respectively.

Now, for all n, break down (mn,bn) as (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure.

Because bBn+b∗n=bn≤b◯, we can bound the bBn and b∗n quantities by b◯. This transfers into a −b◯ lower bound on mB−n(1) and m∗−n(1), respectively.

Now, we can go:

mB+n(1)+mB−n(1)+m∗+n(1)+m∗−n(1)=mBn(1)+m∗n(1)=mn(1)

=m+n(1)+m−n(1)≤m+n(1)≤λ◯

Using worst-case values for mB−n(1) and m∗−n(1), we get:

mB+n(1)+m∗+n(1)−2b◯≤λ◯

mB+n(1)+m∗+n(1)≤λ◯+2b◯

So, we have an upper bound of λ◯+2b◯ on both mB+n(1) and m∗+n(1).

Due to the sequences (mBn,bBn) and (m∗n,b∗n) respecting bounds on b and m+(1) (b◯ and λ◯+2b◯ respectively), and wandering around within the closed sets B and Msa(X) respectively, we can use the Compactness Lemma and Tychonoff’s theorem (finite product, no axiom of choice needed) to go “hey, there’s a subsequence where both (mBn,bBn) and (m∗n,b∗n) converge, call the limit points (mB,bB) and (m∗,b∗). Since B and Msa(X) are closed, (mB,bB)∈B, and (m∗,b∗)∈Msa(X).”

Now, does (mB,bB)+(m∗,b∗)=(m,b)? Well, for any ϵ, there’s some really large n where d((mBn,bBn),(mB,bB))<ϵ, d((m∗n,b∗n),(m∗,b∗))<ϵ, and d((mn,bn),(m,b))<ϵ. Then, we can go:

d((m,b),(mB,bB)+(m∗,b∗))≤d((m,b),(mn,bn))+d((mn,bn),(mB,bB)+(m∗,b∗))

=d((m,b),(mn,bn))+d((mBn,bBn)+(m∗n,b∗n),(mB,bB)+(m∗,b∗))

=d((m,b),(mn,bn))+||((mBn,bBn)+(m∗n,b∗n))−((mB,bB)+(m∗,b∗))||

=d((m,b),(mn,bn))+||((mBn,bBn)−(mB,bB))+((m∗n,b∗n)−(m∗,b∗))||

≤d((m,b),(mn,bn))+||(mBn,bBn)−(mB,bB)||+||(m∗n,b∗n)−(m∗,b∗)||

=d((m,b),(mn,bn))+d((mBn,bBn),(mB,bB))+d((m∗n,b∗n),(m∗,b∗))<3ϵ

So, regardless of ϵ, d((m,b),(mB,bB)+(m∗,b∗))<3ϵ, so (mB,bB)+(m∗,b∗)=(m,b). So, we’ve written (m,b) as a sum of an sa-measure in B and an sa-measure, certifying that (m,b)∈B+Msa(X), so B+Msa(X) is closed.
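Here's a quick numeric illustration of what membership in the upper completion means in a toy finite model (my own example, not part of the proof): M∈B+Msa(X) exactly when M−MB is an sa-measure for some MB∈B.

```python
import numpy as np

# Toy finite-dimensional check (my own example): in this model, membership
# in the upper completion B + Msa(X) just means M - M_B is an sa-measure
# for some M_B in B.

def neg_part(m):   # m^-(1), the (nonpositive) mass of the negative regions
    return float(np.clip(m, None, 0).sum())

def is_sa(m, b):
    return b + neg_part(m) >= -1e-12   # small tolerance for float error

B = [(np.array([0.5, 0.5]), 0.0)]

def in_upper_completion(M):
    m, b = M
    return any(is_sa(m - mB, b - bB) for mB, bB in B)

assert in_upper_completion((np.array([0.5, 0.5]), 0.2))       # added (0, 0.2)
assert in_upper_completion((np.array([0.4, 0.7]), 0.2))       # added ([-0.1, 0.2], 0.2)
assert not in_upper_completion((np.array([0.5, 0.5]), -0.1))  # b can't go down
```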

Proposition 2:For closed convex nonemptyB,B+Msa(X)={M|∀f+∃M′∈B:f+(M)≥f+(M′)}Proof sketch: Show both subset inclusion directions. One is very easy, then we assume the second direction is false, and invoke the Hahn-Banach theorem to separate a point in the latter set from the former set. Then we show that the separating functional is a positive functional, so we have a positive functional where the additional point underperforms everything in B+Msa(X), which is impossible by the definition of the latter set.

Easy direction: We will show that B+Msa(X)⊆{M|∀f+∃M′∈B:f+(M)≥f+(M′)}

This is because an M∈(B+Msa(X)) can be written as M=MB+M∗. Let MB be our M′ of interest. Then, it is indeed true that for all f+, f+(M)=f+(MB)+f+(M∗)≥f+(MB)
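The easy direction is just the nonnegativity of positive functionals on sa-measures, which can be checked numerically in a toy finite model (my own made-up numbers, not from the post):

```python
import numpy as np

# Toy numeric check of the easy direction (my own finite example): applying
# the positive functional f^+(m, b) = m(f) + b to M = M_B + M^* can only
# give a value >= f^+(M_B), since f^+ is nonnegative on sa-measures.

def neg_part(m):
    return float(np.clip(m, None, 0).sum())

def fplus(f, m, b):
    return float(m @ f) + b

f = np.array([0.2, 0.9, 0.5])              # some f in C(X, [0,1])
mB, bB = np.array([0.3, 0.1, 0.4]), 0.2    # a point M_B of B
mS, bS = np.array([-0.1, 0.5, 0.0]), 0.3   # the added sa-measure M^*

assert bS + neg_part(mS) >= 0              # M^* really is an sa-measure
assert fplus(f, mS, bS) >= 0               # so f^+(M^*) >= 0
assert fplus(f, mB + mS, bB + bS) >= fplus(f, mB, bB)
```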

Hard direction: Assume by contradiction that

B+Msa(X)⊂{M|∀f+∃M′∈B:f+(M)≥f+(M′)}

Then there’s some M where ∀f+∃M′∈B:f+(M)≥f+(M′) and M∉B+Msa(X). B+Msa(X) is the upper completion of a closed set, so by Lemma 2, it’s closed, and since it’s the Minkowski sum of convex sets, it’s convex.

Now, we can use the variant of the Hahn-Banach theorem from the Wikipedia article on “Hahn-Banach theorem”, in the “separation of a closed and compact set” section. The singleton {M} is compact, convex, nonempty, and disjoint from the closed convex set B+Msa(X). Banach spaces are locally convex, so we can invoke Hahn-Banach separation.

Therefore, there’s some continuous linear functional ϕ s.t. ϕ(M)<infM′∈(B+Msa(X))ϕ(M′)

We will show that this linear functional is actually a positive functional!

Assume there’s some sa-measure M∗ where ϕ(M∗)<0. Then we can pick an arbitrary MB∈B, and consider ϕ(MB+cM∗), where c is extremely large. MB+cM∗ lies in B+Msa(X), but it would also produce an extremely negative value for ϕ, which undershoots ϕ(M), and that’s impossible. So ϕ is a positive functional.

However, ϕ(M)<infM′∈(B+Msa(X))ϕ(M′), so ϕ(M)<infM′∈Bϕ(M′). But also, M fulfills the condition ∀f+∃M′∈B:f+(M)≥f+(M′), because of the set it came from. So, there must exist some M′∈B where ϕ(M)≥ϕ(M′). But, we have a contradiction, because ϕ(M)<infM′∈Bϕ(M′).

So, there cannot be any point in {M|∀f+∃M′∈B:f+(M)≥f+(M′)} that isn’t in B+Msa(X). This establishes equality.

Lemma 3: For any closed set B⊆Msa(X) and point M∈B, the set ({M}−Msa(X))∩B is nonempty and compact.

Proof: It’s easy to verify nonemptiness, because M is in the set. Also, it’s closed because it’s the intersection of two closed sets. B was assumed closed, and the other part is the Minkowski sum of {M} and −Msa(X), which is closed if −Msa(X) is, because it’s just a shift of −Msa(X) by a single point. And −Msa(X) is closed because it’s −1 times a closed set.

We will establish a bound on the m+(1) and b values of anything in the set, which lets us invoke the Compactness Lemma to show compactness, because it’s a closed subset of a compact set.

Note that if M′∈({M}−Msa(X))∩B, then M′=M−M∗, so M′+M∗=M. Rewrite this as (m′,b′)+(m∗,b∗)=(m,b)

Because b′+b∗=b, we can bound b′ and b∗ by b. This transfers into a −b lower bound on m′−(1) and m∗−(1). Now, we can go:

m′+(1)+m′−(1)+m∗+(1)+m∗−(1)=m′(1)+m∗(1)=m(1)

=m+(1)+m−(1)≤m+(1)

Using worst-case values for m′−(1) and m∗−(1), we get:

m′+(1)+m∗+(1)−2b≤m+(1)

m′+(1)≤m′+(1)+m∗+(1)≤m+(1)+2b

So, we have an upper bound of m+(1)+2b on m′+(1), and an upper bound of b on b′. Further, (m′,b′) was arbitrary in ({M}−Msa(X))∩B, so we have our bounds. This lets us invoke the Compactness Lemma, and conclude that said closed set is compact.

Lemma 4: If ≥ is a partial order on B where M′≥M iff there’s some sa-measure M∗ where M=M′+M∗, then ∃M′>M ↔ (M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B) ↔ M is not minimal in B

Proof: ∃M′>M↔∃M′≠M:M′≥M

Also, M′≥M↔(M′,M∈B∧∃M∗:M=M′+M∗)

Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X))

Putting all this together, we get

(∃M′>M)↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)

And we’re halfway there. Now for the second half.

M is not minimal in B↔M∈B∧(∃M′∈B:M′≠M∧(∃M∗:M=M′+M∗))

Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X))

Putting this together, we get

M is not minimal in B↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)

And the result has been proved.

Theorem 2: Given a nonempty closed set B, the set of minimal points Bmin is nonempty and all points in B are above a minimal point.

Proof sketch: First, we establish a partial order that’s closely tied to the ordering on B, but flipped around, so minimal points in B are maximal elements. We show that it is indeed a partial order, letting us leverage Lemma 4 to translate between the partial order and the set B. Then, we show that every chain in the partial order has an upper bound via Lemma 3 and compactness arguments, letting us invoke Zorn’s lemma to show that everything in the partial order is below a maximal element. Then, we just do one last translation to show that minimal points in B perfectly correspond to maximal elements in our partial order.

Proof: first, impose a partial order on B, where M′≥M iff there’s some sa-measure M∗ where M=M′+M∗. Notice that this flips the order. If an sa-measure is “below” another sa-measure in the sa-measure addition sense, it’s above that sa-measure in this ordering. So a minimal point in B would be maximal in the partial order. We will show that it’s indeed a partial order.

Reflexivity is immediate. M=M+(0,0), so M≥M.

For transitivity, assume M′′≥M′≥M. Then there’s some M∗ and M′∗ s.t. M=M′+M∗, and M′=M′′+M′∗. Putting these together, we get M=M′′+(M∗+M′∗), and adding sa-measures gets you an sa-measure, so M′′≥M.

For antisymmetry, assume M′≥M and M≥M′. Then M=M′+M∗, and M′=M+M′∗. By substitution, M=M+(M∗+M′∗), so M′∗=−M∗. For all positive functionals, f+(M′∗)=f+(−M∗)=−f+(M∗), and since positive functionals are always nonnegative on sa-measures, the only way this can happen is if M∗ and M′∗ are 0, showing that M=M′.

Anyways, since we’ve shown that it’s a partial order, all we now have to do is show that every chain has an upper bound in order to invoke Zorn’s lemma to show that every point in B lies below some maximal element.

Fix some ordinal-indexed chain Mγ, and associate each of them with the set Sγ=({Mγ}+(−Msa(X)))∩B, which is compact by Lemma 3 and always contains Mγ.

The collection of Sγ also has the finite intersection property, because, fixing finitely many of them, we can consider a maximal γ∗, and Mγ∗ is in every associated set by:

Case 1: Some other Mγ equals Mγ∗, so Sγ=Sγ∗ and Mγ∗∈Sγ∗=Sγ.

Case 2: Mγ∗>Mγ, and by Lemma 4, Mγ∗∈({Mγ}−Msa(X))∩B.

Anyways, since all the Sγ are compact, and have the finite intersection property, we can intersect them all and get a nonempty set containing some point M∞. M∞ lies in B, because all the sets we intersected were subsets of B. Also, because M∞∈(Mγ−Msa(X))∩B for all γ in our chain, then if M∞≠Mγ, Lemma 4 lets us get M∞>Mγ, and if M∞=Mγ, then M∞≥Mγ. Thus, M∞ is an upper bound for our chain.

By Zorn’s Lemma, because every chain has an upper bound, there are maximal elements in B, and every point in B has a maximal element above it.

To finish up, use Lemma 4 to get: M is maximal↔¬∃M′>M↔M is minimal in B
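For a finite set B, the minimal points can be found by brute force straight from the definition: M′ is below M exactly when M−M′ is a nonzero sa-measure. Here's a toy computation in Python (the set B and all numbers are my own made-up example):

```python
import numpy as np

# Toy computation (finite X, finite B -- my own example): find the minimal
# points of B directly from the definition.  M' is below M in the
# sa-measure-addition sense when M = M' + M* for some sa-measure M*,
# i.e. when M - M' is itself a (nonzero) sa-measure.

def neg_part(m):
    return float(np.clip(m, None, 0).sum())

def is_sa(m, b):
    return b + neg_part(m) >= -1e-12   # tolerance for float error

B = [(np.array([0.5, 0.5]), 0.0),
     (np.array([0.5, 0.5]), 0.3),      # first point plus (0, 0.3): not minimal
     (np.array([0.7, 0.5]), 0.1),      # first point plus an sa-measure: not minimal
     (np.array([0.2, 0.8]), 0.0)]      # incomparable with the first: minimal

def below(Mp, M):                      # Mp strictly below M
    dm, db = M[0] - Mp[0], M[1] - Mp[1]
    return (bool(np.any(dm != 0)) or db != 0) and is_sa(dm, db)

minimal = [i for i, M in enumerate(B)
           if not any(below(Mp, M) for j, Mp in enumerate(B) if j != i)]
assert minimal == [0, 3]
```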

Proposition 3: Given an f∈C(X,[0,1]) and a nonempty closed B, inf(m,b)∈B(m(f)+b)=inf(m,b)∈Bmin(m(f)+b)

Direction 1: Since Bmin is a subset of B, we get one direction easily, that

inf(m,b)∈B(m(f)+b)≤inf(m,b)∈Bmin(m(f)+b)

Direction 2: Take an M∈B. By Theorem 2, there is an Mmin∈Bmin s.t. M=Mmin+M∗. Applying the positive functional (m,b)↦m(f)+b (which is a positive functional by Proposition 1), we get that m(f)+b≥mmin(f)+bmin. Because every point in B has a point in Bmin which scores as low or lower according to the positive functional,

inf(m,b)∈B(m(f)+b)≥inf(m,b)∈Bmin(m(f)+b)

And this gives us our desired equality.
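Proposition 3 can be spot-checked numerically in a toy finite model (my own example set, not from the post): for a finite B, the inf of m(f)+b over all of B is attained on the minimal points alone.

```python
import numpy as np

# Toy check of Proposition 3 (finite X, finite B -- my own example):
# the inf of m(f)+b over B matches the inf over the minimal points of B.

def neg_part(m):
    return float(np.clip(m, None, 0).sum())

def is_sa(m, b):
    return b + neg_part(m) >= -1e-12

B = [(np.array([0.5, 0.5]), 0.0),
     (np.array([0.5, 0.5]), 0.3),
     (np.array([0.7, 0.5]), 0.1),
     (np.array([0.2, 0.8]), 0.0)]

def below(Mp, M):
    dm, db = M[0] - Mp[0], M[1] - Mp[1]
    return (bool(np.any(dm != 0)) or db != 0) and is_sa(dm, db)

B_min = [M for i, M in enumerate(B)
         if not any(below(Mp, M) for j, Mp in enumerate(B) if j != i)]

for f in (np.array([0.1, 0.9]), np.array([1.0, 0.0]), np.array([0.5, 0.5])):
    vals = [float(m @ f) + b for m, b in B]
    vals_min = [float(m @ f) + b for m, b in B_min]
    assert min(vals) == min(vals_min)
```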

Proposition 4: Given a nonempty closed convex B, Bmin=(Buc)min and (Bmin)uc=Buc

Proof: First, we’ll show Bmin=(Buc)min. We’ll use the characterization in terms of the partial order we used for the Zorn’s Lemma proof of Theorem 2. If a point M is in Buc, then it can be written as M=MB+M∗, so M≤MB. Since all points added in Buc lie below a preexisting point in B (according to the partial order from Theorem 2), the set of maximals (ie, the set of minimal points) is completely unchanged when we add all the new points to the partial order via upper completion, so Bmin=(Buc)min.

For the second part, one direction is immediate. Bmin⊆B, so (Bmin)uc⊆Buc. For the reverse direction, take a point M∈Buc. It can be decomposed as MB+M∗, and then by Theorem 2, MB can be decomposed as Mmin+M′∗, so M=Mmin+(M∗+M′∗), so it lies in (Bmin)uc, and we’re done.

Theorem 3: If the nonempty closed convex sets A and B have Amin≠Bmin, then there is some f∈C(X,[0,1]) where EA(f)≠EB(f)

Proof sketch: We show that upper completion is idempotent, and then use that to show that the upper completions of A and B are different. Then, we can use Hahn-Banach to separate a point of A from Buc (or vice-versa), and show that the separating functional is a positive functional. Finally, we use Theorem 1 to translate from a separating positive functional to different expectation values of some f∈C(X,[0,1])

Proof: Phase 1 is showing that upper completion is idempotent. (Buc)uc=Buc. One direction of this is easy, Buc⊆(Buc)uc. In the other direction, let M∈(Buc)uc. Then we can decompose M into M′+M∗, where M′∈Buc, and decompose that into MB+M′∗ where MB∈B, so M=MB+(M∗+M′∗) and M∈Buc.

Now for phase 2, we’ll show that the minimal points of one set aren’t in the upper completion of the other set. Assume, for contradiction, that this is false, so Amin⊆Buc and Bmin⊆Auc. Then, by idempotence, Proposition 4, and our subset assumption,

Auc=(Amin)uc⊆(Buc)uc=Buc

Swapping the A and B, the same argument holds, so Auc=Buc, so (Buc)min=(Auc)min.

Now, using this and Proposition 4, Bmin=(Buc)min=(Auc)min=Amin.

But wait, we have a contradiction: we said that the minimal points of B and A weren’t the same! Therefore, either Bmin⊈Auc, or vice-versa. Without loss of generality, assume that Bmin⊈Auc.

Now for phase 3, Hahn-Banach separation to get a positive functional with different inf values. Take a point MB in Bmin that lies outside Auc. Now, use the Hahn-Banach separation of {MB} and Auc used in the proof of Proposition 2, to get a linear functional ϕ (which can be demonstrated to be a positive functional by the same argument as the proof of Proposition 2) where: ϕ(MB)<infM∈Aucϕ(M). Thus, infM∈Bϕ(M)<infM∈Aϕ(M), so infM∈Bϕ(M)≠infM∈Aϕ(M)

Said positive functional can’t be 0, otherwise both sides would be 0. Thus, by Theorem 1, ϕ((m,b))=a(m(f)+b) where a>0, and f∈C(X,[0,1]). Swapping this out, we get:

inf(m,b)∈Ba(m(f)+b)≠inf(m′,b′)∈Aa(m′(f)+b′)

inf(m,b)∈B(m(f)+b)≠inf(m′,b′)∈A(m′(f)+b′)

and then this is EB(f)≠EA(f). So, we have crafted our f∈C(X,[0,1]) which distinguishes the two sets, and we’re done.
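A tiny concrete instance of Theorem 3's conclusion (my own two-point example, not from the post): two sets with different minimal points are told apart by the expectation of some f∈C(X,[0,1]).

```python
import numpy as np

# Toy instance of Theorem 3's conclusion (my own example): two sets over a
# 2-point X with different minimal points, distinguished by the worst-case
# expectation E_S(f) = inf over S of m(f) + b.

A = [(np.array([0.5, 0.5]), 0.0)]      # one minimal point
Bset = [(np.array([0.2, 0.8]), 0.0)]   # a different minimal point

def E(S, f):
    return min(float(m @ f) + b for m, b in S)

f = np.array([1.0, 0.0])               # an f in C(X,[0,1]) separating them
assert E(A, f) == 0.5
assert E(Bset, f) == 0.2
assert E(A, f) != E(Bset, f)
```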

Corollary 1: If two nonempty closed convex upper-complete sets A and B are different, then there is some f∈C(X,[0,1]) where EA(f)≠EB(f)

Proof: Either Amin≠Bmin, in which case we can apply Theorem 3 to separate them, or their sets of minimal points are the same. In that case, by Proposition 4 and upper completeness, A=Auc=(Amin)uc=(Bmin)uc=Buc=B, and we have a contradiction because the two sets are different.

Theorem 4: If H is an infradistribution/bounded infradistribution, then h:f↦EH(f) is concave in f, monotone, uniformly continuous/Lipschitz, h(0)=0, h(1)=1, and if range(f)⊈[0,1], h(f)=−∞

Proof sketch: h(0)=0, h(1)=1 is trivial, as is uniform continuity from the weak bounded-minimal condition. For concavity and monotonicity, it’s just some inequality shuffling, and for h(f)=−∞ if f∈C(X), f∉C(X,[0,1]), we use upper completion to make the worst-case value arbitrarily negative. Lipschitzness is much more difficult, and comprises the bulk of the proof. We get a duality between minimal points and hyperplanes in C(X)⊕R, show that all the hyperplanes we got from minimal points have the same Lipschitz constant upper bound, and then show that the chunk of space below the graph of h itself is the same as the chunk of space below all the hyperplanes we got from minimal points. Thus, h has the same (or lesser) Lipschitz constant as all the hyperplanes chopping out stuff above the graph of h.

Proof: For normalization, h(1)=EH(1)=1 and h(0)=EH(0)=0 by normalization for H. Getting the uniform continuity condition from the weak-bounded-minimal condition on an infradistribution H is also trivial, because the condition just says f↦EH(f) is uniformly continuous, and that’s just h itself.

Let’s show that h is concave over C(X,[0,1]), first. We’re shooting for h(pf+(1−p)f′)≥ph(f)+(1−p)h(f′). To show this,

h(pf+(1−p)f′)=EH(pf+(1−p)f′)=inf(m,b)∈H(m(pf+(1−p)f′)+b)

=inf(m,b)∈H(p(m(f)+b)+(1−p)(m(f′)+b))

≥pinf(m,b)∈H(m(f)+b)+(1−p)inf(m′,b′)∈H(m′(f′)+b′)

=pEH(f)+(1−p)EH(f′)=ph(f)+(1−p)h(f′)

And concavity has been proved.
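Concavity is just "an infimum of affine functions is concave", which is easy to spot-check numerically in a toy finite model (my own example H, chosen so that h(0)=0 and h(1)=1 as well):

```python
import numpy as np

# Toy numeric illustration (my own finite example, not part of the proof):
# h(f) = inf over H of m(f) + b is an infimum of affine functions of f, so
# it's concave.  H is chosen so that h(0) = 0 and h(1) = 1 hold too.

H = [(np.array([0.6, 0.4]), 0.0),
     (np.array([0.3, 0.7]), 0.1),
     (np.array([0.9, 0.1]), 0.05)]

def h(f):
    return min(float(m @ f) + b for m, b in H)

assert abs(h(np.zeros(2))) < 1e-12          # normalization: h(0) = 0
assert abs(h(np.ones(2)) - 1.0) < 1e-12     # normalization: h(1) = 1

rng = np.random.default_rng(0)
for _ in range(100):
    f, g = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)
    p = float(rng.uniform())
    # h(p f + (1-p) f') >= p h(f) + (1-p) h(f'), up to float noise
    assert h(p * f + (1 - p) * g) >= p * h(f) + (1 - p) * h(g) - 1e-12
```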

Now for monotonicity. By Proposition 3 and Proposition 1,

∀f:inf(m,b)∈H(m(f)+b)=inf(m,b)∈Hmin(m(f)+b)

Now, let’s say f′≥f. Then:

EH(f)=inf(m,b)∈H(m(f)+b)=inf(m,b)∈Hmin(m(f)+b)≤inf(m,b)∈Hmin(m(f′)+b)

=inf(m,b)∈H(m(f′)+b)=EH(f′)

And we’re done. The critical inequality in the middle comes from all minimal points in an infradistribution having no negative component (by positive-minimals), so swapping out a function for a greater one can only make m(f)+b larger or equal.

Time for range(f)⊈[0,1]→h(f)=−∞. Let’s say there exists an x s.t. f(x)>1. We can take an arbitrary sa-measure (m,b)∈H, and consider (m,b)+c(−δx,1), where δx is the point measure that’s 1 on x, and c is extremely huge. The latter part is an sa-measure. But then, (m−cδx)(f)+(b+c)=m(f)+b+c(1−δx(f))=m(f)+b+c(1−f(x)). Since f(x)>1 and c is extremely huge, this is extremely negative. So, since upper-completeness puts sa-measures in H that make the function as negative as we wish, inf(m,b)∈H(m(f)+b)=−∞. A very similar argument works if there’s an x where f(x)<0; we just add in (cδx,0) to force arbitrarily negative values.
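The arithmetic of that unboundedness argument can be watched in action on a toy 2-point example (my own numbers): each unit of c changes the value by 1−f(x), which is negative when f(x)>1.

```python
import numpy as np

# Toy illustration (my own finite example): if f(x) > 1 somewhere, adding
# c * (-delta_x, 1) -- an sa-measure, since c + (-c) = 0 >= 0 -- drives
# m(f) + b down by c * (f(x) - 1), so the inf over the upper completion
# is -infinity.

f = np.array([0.5, 1.2])               # f(x1) = 1.2 > 1
m, b = np.array([0.4, 0.6]), 0.0       # some point of H
delta_x = np.array([0.0, 1.0])         # the point measure delta_{x1}

def value(c):                          # (m - c delta_x)(f) + (b + c)
    return float((m - c * delta_x) @ f) + (b + c)

# each unit of c changes the value by 1 - f(x1) = -0.2
vals = [value(c) for c in (0, 10, 100, 1000)]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))
```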

Now for Lipschitzness, which is by far the worst of all. A minimal point (m,b) induces an affine function hm,b (kinda like a hyperplane) of the form hm,b(f)=m(f)+b. Regardless of (m,b), as long as it came from a minimal point in H, hm,b≥h for functions with range in [0,1], because

hm,b(f)=m(f)