Formulas of arithmetic that behave like decision agents
I wrote this post in the course of working through Vladimir Slepnev’s A model of UDT with a halting oracle. This post contains some of the ideas of Slepnev’s post, with all the proofs written out. The main formal difference is that while Slepnev’s post is about programs with access to a halting oracle, the “decision agents” in this post are formulas in Peano arithmetic. They are generally uncomputable and do not reason under uncertainty.
These ideas are due to Vladimir Slepnev and Vladimir Nesov. (Please let me know if I should credit anyone else.) I’m pretty sure none of this is original material on my part. It is possible that I have misinterpreted Slepnev’s post or introduced errors.
We are going to define a world function U(), a 0-ary function¹ that outputs an ordered pair (a,b)∈Q² of payoff values. There are functions πi such that π0(a,b)=a and π1(a,b)=b for any a,b. In fact πi(a,b) is a function of the three variables i, a, and b.
We are also going to define an agent function A(x,i) that outputs the symbol C or D. The argument x is supposed to be the Gödel number of the world function, and i∈N is some sort of indexical information.
We want to define our agent such that
A(┌χ┐,i) =
 D  if ⊢A(┌χ┐,i–)=C; else
 C  if ⊢A(┌χ┐,i–)=D; else
 C  if ∃a,b(⊢(A(┌χ┐,i–)=C→πi–(χ())=a––)∧(A(┌χ┐,i–)=D→πi–(χ())=b–))∧a>b; else
 D  otherwise
(┌ω┐ denotes the Gödel number of ω. ⊢ω means that ω is provable in Peano arithmetic. i– represents the numeral for i. I don’t care what value A(x,i) has when x isn’t the Gödel number of an appropriate 0-ary function.)
There is some circularity in this tentative definition, because a formula standing for A appears in the definition of A itself. We get around this by using diagonalization. We’ll describe how this works just this once: First define the function ϕ as follows:
ϕ(┌χ┐,i,┌ψ┐) =
 D  if ⊢ψ(┌χ┐,i–)=C; else
 C  if ⊢ψ(┌χ┐,i–)=D; else
 C  if ∃a,b(⊢(ψ(┌χ┐,i–)=C→πi–(χ())=a––)∧(ψ(┌χ┐,i–)=D→πi–(χ())=b–))∧a>b; else
 D  otherwise
This function can be defined by a formula. Then the diagonal lemma gives us a formula A such that ⊢A(x,i)=ϕ(x,i,┌A┐).
This is our (somewhat) rational decision agent. If it can prove it will do one thing, it does another; this is what Slepnev calls “playing chicken with the universe”. If it can prove that C is an optimal strategy, it chooses C; and otherwise it chooses D.
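The clause structure of this definition can be sketched in Python. Real provability in PA is of course undecidable, so the sketch below replaces ⊢ with explicit inputs saying which facts we pretend have been proved; the function name `agent` and its parameters are illustrative conveniences, not part of the formal construction.

```python
# Toy sketch of the agent's three-clause decision rule.
# Provability is *mocked*: the caller states outright which facts count as
# proved, so this shows the shape of the rule, not real PA provability.

def agent(proved_action=None, proved_payoffs=None):
    """proved_action: 'C' or 'D' if the theory proves A takes that action.
    proved_payoffs: (a, b) if the theory proves A=C -> payoff a and
    A=D -> payoff b."""
    if proved_action == "C":      # clause 1: play chicken with the universe
        return "D"
    if proved_action == "D":      # clause 2: likewise
        return "C"
    if proved_payoffs is not None:
        a, b = proved_payoffs
        if a > b:                 # clause 3: C is provably optimal
            return "C"
    return "D"                    # default clause

# Chicken: a proof of either action makes the agent do the opposite.
print(agent(proved_action="C"))       # D
print(agent(proved_action="D"))       # C
# Optimality: C is chosen exactly when the proved C-payoff exceeds the D-payoff.
print(agent(proved_payoffs=(2, 1)))   # C
print(agent(proved_payoffs=(1, 2)))   # D
```

Note the asymmetry of the default clause: absent any proofs, the agent plays D, which is why the chicken clauses for C and D hit different consistency statements in Lemma 1 below.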
First, a lemma about the causes and consequences of playing chicken:
Lemma 1. For any χ,
(1) ⊢Prv(A(┌χ┐,i–)=C)→¬Con(PA)
(2) ⊢¬Con(PA)→A(┌χ┐,i–)=D
(3) ⊢Prv(A(┌χ┐,i–)=D)→¬Con(PA+Con(PA))
(4) ⊢Con(PA)∧¬Con(PA+Con(PA))→A(┌χ┐,i–)=C
(Prv is a binary-valued function such that Prv(┌ω┐) is true exactly when there is a proof of ω in Peano arithmetic. For brevity we write Prv(ω) instead. Con(PA+Con(PA)) is the proposition that Peano arithmetic, plus the axiom that Peano arithmetic is consistent, is a consistent theory.)
Proof. (1) By definition of A,
⊢Prv(A(┌χ┐,i–)=C)→A(┌χ┐,i–)=D
So
⊢Prv(Prv(A(┌χ┐,i–)=C))→Prv(A(┌χ┐,i–)=D)
⊢Prv(A(┌χ┐,i–)=C)
 →Prv(A(┌χ┐,i–)=C)∧Prv(Prv(A(┌χ┐,i–)=C))
 →Prv(A(┌χ┐,i–)=C)∧Prv(A(┌χ┐,i–)=D)
 →¬Con(PA)
(2) By the principle of explosion,
⊢¬Con(PA)
 →Prv(A(┌χ┐,i–)=C)
 →A(┌χ┐,i–)=D
(3) By the definition of A,
⊢¬Prv(A(┌χ┐,i–)=C)∧Prv(A(┌χ┐,i–)=D)→A(┌χ┐,i–)=C
⊢Prv(A(┌χ┐,i–)=D)
 →(Prv(A(┌χ┐,i–)=C)∧Prv(A(┌χ┐,i–)=D))∨A(┌χ┐,i–)=C
 →¬Con(PA)∨A(┌χ┐,i–)=C
⊢Prv(Prv(A(┌χ┐,i–)=D))→Prv(¬Con(PA)∨A(┌χ┐,i–)=C)
⊢Prv(A(┌χ┐,i–)=D)
 →Prv(A(┌χ┐,i–)=D)∧Prv(Prv(A(┌χ┐,i–)=D))
 →Prv(A(┌χ┐,i–)=D)∧Prv(¬Con(PA)∨A(┌χ┐,i–)=C)
 →Prv(A(┌χ┐,i–)=D∧(¬Con(PA)∨A(┌χ┐,i–)=C))
 →Prv(¬Con(PA))
 →¬Con(PA+Con(PA))
(4)
⊢¬Con(PA+Con(PA))
 →Prv(¬Con(PA))
 →Prv(Prv(A(┌χ┐,i–)=C))
 →Prv(A(┌χ┐,i–)=D)
⊢Con(PA)∧¬Con(PA+Con(PA))
 →Con(PA)∧Prv(A(┌χ┐,i–)=D)
 →¬Prv(A(┌χ┐,i–)=C)∧Prv(A(┌χ┐,i–)=D)
 →A(┌χ┐,i–)=C
□
If we assume consistency of PA+Con(PA) (which entails consistency of PA), then parts (1) and (3) of Lemma 1 tell us that for any i, ⊬A(┌χ┐,i–)=C and ⊬A(┌χ┐,i–)=D. So the agent never actually plays chicken.
Now let’s see how our agent fares on a straightforward decision problem:
Proposition 2. Let α,β∈Q and suppose
U() =
 (α,0)  if A(┌U┐,0)=C
 (β,0)  if A(┌U┐,0)=D
Assume consistency of PA+Con(PA). Then A(┌U┐,0)=C if and only if α>β.
Proof. If we assume consistency of PA+Con(PA), then Lemma 1 tells us that the agent doesn’t play chicken. So the agent will choose C if and only if it determines that choosing C is optimal.
We have
⊢(A(┌U┐,0)=C→π0U()=α––)∧(A(┌U┐,0)=D→π0U()=β––)
Suppose α>β. Then clearly
∃a,b⊢(A(┌U┐,0)=C→π0U()=a––)∧(A(┌U┐,0)=D→π0U()=b–)∧a>b
So A(┌U┐,0)=C.
As for the converse: We have ⊢(A(┌U┐,0)=C→π0U()=α––). If also
⊢(A(┌U┐,0)=C→π0U()=a––)
and a≠α, then ⊢(A(┌U┐,0)=D). By Lemma 1(3) and consistency of PA+Con(PA), this cannot happen. So
⊬(A(┌U┐,0)=C→π0U()=a––)
Similarly, we have
⊬(A(┌U┐,0)=D→π0U()=b–)
for all b≠β. So, if α≤β, then
∄a,b⊢(A(┌U┐,0)=C→π0U()=a––)∧(A(┌U┐,0)=D→π0U()=b–)∧a>b
So the agent doesn’t decide that C is optimal, and A(┌U┐,0)=D.
□
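Proposition 2’s conclusion can be mimicked with a small Python sketch in which provability is replaced by the data we are entitled to assume: given Con(PA+Con(PA)), Lemma 1 says neither action is provable, and the only provable payoff pair is (α,β). The helper names are illustrative, not part of the formal construction, and this mocks the proof rather than reproducing it.

```python
# Toy mock of Proposition 2.  "Provability" is replaced by explicit data:
# no action is provable, and (alpha, beta) is the one provable payoff pair.
# Illustrative only; not a PA proof.

def agent(proved_action=None, proved_payoffs=None):
    if proved_action == "C":            # chicken clauses
        return "D"
    if proved_action == "D":
        return "C"
    if proved_payoffs is not None:
        a, b = proved_payoffs
        if a > b:                       # C is provably optimal
            return "C"
    return "D"

def one_shot_choice(alpha, beta):
    # By Lemma 1 (given Con(PA+Con(PA))), no action is provable, so we pass
    # no proved_action; the provable payoffs are exactly (alpha, beta).
    return agent(proved_payoffs=(alpha, beta))

print(one_shot_choice(2, 1))  # C, since 2 > 1
print(one_shot_choice(1, 2))  # D
print(one_shot_choice(1, 1))  # D: ties default to D
```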
Now let’s see how A fares on a symmetric Prisoner’s Dilemma with itself:
Proposition 3. Let
U() =
 (1,1)  if A(┌U┐,0)=C∧A(┌U┐,1)=C
 (−1,2)  if A(┌U┐,0)=C∧A(┌U┐,1)=D
 (2,−1)  if A(┌U┐,0)=D∧A(┌U┐,1)=C
 (0,0)  if A(┌U┐,0)=D∧A(┌U┐,1)=D
Then, assuming consistency of PA+Con(PA), we have A(┌U┐,0)=C=A(┌U┐,1).
Proof.
(This proof uses Löb’s theorem, and that makes it confusing. Vladimir Slepnev points out that Löb’s theorem is not really necessary here; a simpler proof appears in the comments.)
⊢A(┌U┐,0)=A(┌U┐,1)→((A(┌U┐,0)=C→π0U()=1)∧(A(┌U┐,0)=D→π0U()=0))
⊢Prv(A(┌U┐,0)=A(┌U┐,1))→∃a,b.Prv((A(┌U┐,0)=C→π0U()=a––)∧(A(┌U┐,0)=D→π0U()=b–))∧a>b
Looking at the definition of A, we see that
⊢Prv(A(┌U┐,0)=A(┌U┐,1))→Prv(A(┌U┐,0)=C)∨Prv(A(┌U┐,0)=D)∨A(┌U┐,0)=C
By Lemma 1, parts (1) and (3),
⊢Prv(A(┌U┐,0)=A(┌U┐,1))→¬Con(PA)∨¬Con(PA+Con(PA))∨A(┌U┐,0)=C
Similarly,
⊢Prv(A(┌U┐,0)=A(┌U┐,1))→¬Con(PA)∨¬Con(PA+Con(PA))∨A(┌U┐,1)=C
So
(∗) ⊢Prv(A(┌U┐,0)=A(┌U┐,1))→¬Con(PA)∨(Con(PA)∧¬Con(PA+Con(PA)))∨A(┌U┐,0)=C=A(┌U┐,1)
Applying Lemma 1(2) and (4),
⊢Prv(A(┌U┐,0)=A(┌U┐,1))
 →A(┌U┐,0)=D=A(┌U┐,1)∨A(┌U┐,0)=C=A(┌U┐,1)∨A(┌U┐,0)=C=A(┌U┐,1)
 →A(┌U┐,0)=A(┌U┐,1)
By Löb’s theorem (if ⊢Prv(ω)→ω, then ⊢ω),
⊢A(┌U┐,0)=A(┌U┐,1)
⊢Prv(A(┌U┐,0)=A(┌U┐,1))
By (∗), we have
⊢¬Con(PA)∨(Con(PA)∧¬Con(PA+Con(PA)))∨A(┌U┐,0)=C=A(┌U┐,1)
So, assuming Con(PA+Con(PA)), we conclude that A(┌U┐,0)=C=A(┌U┐,1).
□
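The role of the Löb step can be illustrated with a mock-provability sketch in Python: until ⊢A(┌U┐,0)=A(┌U┐,1) is available, no payoff pair of the form required by the third clause of A is provable and the agent falls through to D; once the equality is proved, the pair (1,0) becomes provable and the agent cooperates. Everything here is illustrative stand-in data; the genuine argument is the proof above.

```python
# Toy mock of the Lob step in Proposition 3.  Provability is replaced by
# explicit data, to show why the provable equality of the two copies is
# what unlocks cooperation.  Illustrative only.

def agent(proved_action=None, proved_payoffs=None):
    if proved_action == "C":            # chicken clauses
        return "D"
    if proved_action == "D":
        return "C"
    if proved_payoffs is not None:
        a, b = proved_payoffs
        if a > b:                       # C is provably optimal
            return "C"
    return "D"

def pd_choice(equality_proved):
    if equality_proved:
        # |- A(0)=A(1) makes "A=C -> payoff 1" and "A=D -> payoff 0"
        # provable, so the third clause fires with 1 > 0.
        return agent(proved_payoffs=(1, 0))
    # Without the equality, no single payoff pair of the required form is
    # provable, so the agent falls through to the default.
    return agent()

print(pd_choice(True))   # C: mutual cooperation
print(pd_choice(False))  # D: the default clause
```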
The definition of A treats the choices C and D differently; so it is worth checking that A behaves correctly in the Prisoner’s Dilemma when the effects of C and D are switched:
Proposition 4. Let
U() =
 (0,0)  if A(┌U┐,0)=C∧A(┌U┐,1)=C
 (2,−1)  if A(┌U┐,0)=C∧A(┌U┐,1)=D
 (−1,2)  if A(┌U┐,0)=D∧A(┌U┐,1)=C
 (1,1)  if A(┌U┐,0)=D∧A(┌U┐,1)=D
Then, assuming consistency of PA+Con(PA), we have A(┌U┐,0)=D=A(┌U┐,1).
A proof appears in the comments.
There are a number of questions one can explore with this formalism: What is the correct generalization of A that can choose between n actions, and not just two? How about infinitely many actions? What about theories other than Peano arithmetic? How do we accommodate payoffs that are real numbers? How do we make agents that can reason under uncertainty? How do we make agents that are computable algorithms rather than arithmetic formulas? How does A fare on a Prisoner’s Dilemma with an asymmetric payoff matrix? In a two-person game where the payoff to player A0 is independent of the behavior of A1, can A1 deduce the behavior of A0? What happens when we replace the third line of the definition of A with
⊢∃a,b(A(┌U┐,i–)=C→πi–U()=a)∧(A(┌U┐,i–)=D→πi–U()=b)∧a>b
? What is a (good) definition of “decision problem”? Is there a theorem that says that our agent A is, in a certain sense, optimal?
¹Every k-ary function U in this article is defined by a formula ϕ(x1,…,xk,y) with k+1 free variables such that ⊢∃y.ϕ(x1,…,xk,y) and ⊢ϕ(x1,…,xk,y)∧ϕ(x1,…,xk,y′)→y=y′. By a standard abuse of notation, when the name of a function like U appears in a formula of arithmetic, what we really mean is the formula ϕ that defines it.