Another problem with your argument—WFC says that all leaf nodes are human-verifiable, whereas some leaf nodes in your suggested tree have to be taken on faith (a fact which you mention, but don’t address).
Not sure what you want me to “address”. The leaf nodes that are taken on faith really are true under optimal play, which is what happens at equilibrium.
To focus on this part, because it seems quite tractable --
Let’s grant for the sake of argument that these nodes are true under optimal play. How can the human verify that? Optimal play is quite a computationally complex object.
WFC as you stated it says that these leaf nodes are verifiable:
(Weak version) For any question Q with correct answer A, there exists a tree of decompositions T arguing this such that at every leaf a human can verify that the answer to the question at the leaf is correct, [...]
So the tree you provide doesn’t satisfy this condition. Yet you say:
I claim that this is a tree that satisfies the weak Factored Cognition hypothesis, if the human can take on faith the answers to “What is the best defeater to X”.
To me this reads like “this would satisfy WFC if WFC allowed humans to take leaf nodes on faith, rather than verify them”.
Am I still misunderstanding something big about the kind of argument you are trying to make?
Am I still misunderstanding something big about the kind of argument you are trying to make?
I don’t think so, but to formalize the argument a bit more, let’s define a new version of WFC:
Special-Tree WFC: For any question Q with correct answer A, there exists a tree of decompositions T arguing this such that:
Every internal node has exactly one child leaf of the form “What is the best defeater to X?” whose answer is auto-verified,
For every other leaf node, a human can verify that the answer to the question at that node is correct,
For every internal node, a human can verify that the answer to the question is correct, assuming that the subanswers are correct.
(As before, we assume that the human never verifies something incorrect, unless the subanswers they were given were incorrect.)
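To make the shape of that definition concrete, here is a minimal runnable sketch; `Leaf`, `Node`, `human_verifies`, and the structure check are my illustrative names and framing, not anything from the original construction:

```python
from dataclasses import dataclass

# Illustrative data structures (my names, not from the original construction).
@dataclass
class Leaf:
    question: str
    answer: str

@dataclass
class Node:
    question: str
    answer: str
    children: list

def satisfies_special_tree(node, human_verifies):
    """Check a tree against the Special-Tree WFC shape:
    (1) each internal node has exactly one auto-verified leaf child of the
        form "What is the best defeater to X?";
    (2) a human verifies every other leaf;
    (3) a human verifies each internal node's step, given its subanswers."""
    if isinstance(node, Leaf):
        return human_verifies(node.question, node.answer)  # condition (2)
    defeater_leaves = [c for c in node.children
                       if isinstance(c, Leaf)
                       and c.question.startswith("What is the best defeater to")]
    if len(defeater_leaves) != 1:
        return False  # violates condition (1)
    # The defeater leaf is auto-verified, so it is skipped below.
    others = [c for c in node.children if c is not defeater_leaves[0]]
    return (all(satisfies_special_tree(c, human_verifies) for c in others)
            and human_verifies(node.question, node.answer))  # condition (3)
```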
Claim 1: (What I thought was) your assumption ⇒ Special-Tree WFC, using the construction I gave.
Claim 2: Special-Tree WFC + assumption of optimal play ⇒ honesty is an equilibrium, using the same argument that applies to regular WFC + assumption of optimal play.
Idk whether this is still true under the assumptions you’re using; I think claim 1 in particular is probably not true under your model.
Ah, OK, so you were essentially assuming that humans had access to an oracle which could verify optimal play.
This sort of makes sense, as a human with access to a debate system in equilibrium does have such an oracle. I still don’t yet buy your whole argument, for reasons being discussed in another branch of our conversation, but this part makes enough sense.
Your argument also has some leaf nodes which use the terminology “fully defeat”, in contrast to “defeat”. I assume this means that in the final analysis (after expanding the chain of defeaters) this refutation was a true one, not something ultimately refuted.
If so, it seems you also need an oracle for that, right? Unless you think that can be inferred from some fact about optimal play. E.g., that a player bothered to say it rather than concede.
In any case it seems like you could just make the tree out of the claim “A is never fully defeated”:
Node(Q, A, [Leaf("Is A ever fully defeated?", "No")])
Your argument also has some leaf nodes which use the terminology “fully defeat”, in contrast to “defeat”.
I don’t think I ever use “fully defeat” in a leaf? It’s always in a Node, or in a Tree (which is a recursive call to the procedure that creates the tree).
I assume this means that in the final analysis (after expanding the chain of defeaters) this refutation was a true one, not something ultimately refuted.
Yes, that’s what I mean by “fully defeat”.
Ahhhhh, OK. I missed that that was supposed to be a recursive call, and interpreted it as a leaf node based on the overall structure. So I was still missing an important part of your argument. I thought you were trying to offer a static tree in that last part, rather than a procedure.
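To spell out the reading I should have had: here is a sketch of the tree-building *procedure*, where the “fully defeat” question is a recursive call rather than a leaf. Everything here (`best_defeater`, the alternating claim, the depth cutoff) is my guess at the intended structure, not the original code:

```python
from dataclasses import dataclass

# Illustrative data structures; all names are my guesses at the intended
# structure, not the original code.
@dataclass
class Leaf:
    question: str
    answer: str

@dataclass
class Node:
    question: str
    answer: str
    children: list

def Tree(X, best_defeater, claim="No", depth=0, max_depth=4):
    """Build the tree for "Is X fully defeated?" as a procedure: the best
    defeater D to X sits at an auto-verified leaf, but whether D itself is
    fully defeated is expanded by a recursive Tree call, not a static leaf."""
    D = best_defeater(X)
    children = [Leaf(f"What is the best defeater to {X}?", str(D))]
    if D is not None and depth < max_depth:
        # The claim alternates: X is not fully defeated iff its best
        # defeater D *is* fully defeated (by some further defeater).
        flipped = "Yes" if claim == "No" else "No"
        children.append(Tree(D, best_defeater, flipped, depth + 1, max_depth))
    return Node(f"Is {X} fully defeated?", claim, children)
```

Read `Tree(D, ...)` as a leaf, as I did, and the construction looks like a static tree with on-faith leaves; read it as the recursive call above, and the “fully defeat” question never sits at a leaf.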