Your assertions about when x is free aren’t right. The variable x is bound by its first appearance, and it is already bound (and so not “completely free”) in its second and third appearances.
The third appearance of x assumes that x is already bound, so that you can evaluate whether something is a member of it. But when processing the second appearance of x, you are testing candidate values of x, to find out whether they satisfy the predicate not(member(X,X)). Within that predicate, you assume that the second argument is already bound, and look for instantiations of the first argument for which the predicate is true.
To evaluate this, you would have to write it as
Find X1, X2, X3 such that not(member(X1,X2)), equals(X1,X2), member(X2,X3), equals(X2,X3). (Finding a binding asserts that the set of all sets that are not members of themselves is a member of itself.)
But if you try to evaluate this, it’s not going to run into any paradoxes; what happens is going to depend on how you define the predicate “not”.
You can see a problem in writing it down if you think about how, computationally, it would be evaluated. If you start by asking for the set of all pairs X,Y such that not(member(X,Y)), that’s not a computable set; the query keeps generating members forever. I don’t think there’s any way to write Russell’s paradox in Prolog such that it would be possible to compute an answer.
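To make the looping behavior concrete, here is a sketch in Python standing in for a logic-programming evaluator (all of the names are mine, and Python’s frozenset is just one arbitrary encoding of sets): it lazily enumerates hereditarily finite sets and filters them with the test for not(member(X,X)). Every candidate passes the test, so the “Russell set” query never finishes listing its members.

```python
# A sketch (not Prolog) of why the generate-mode query loops: enumerate
# hereditarily finite sets and filter with not(member(X, X)). Every set
# generated passes the test, so the "Russell set" query never finishes
# enumerating its members; we have to cut it off ourselves.
from itertools import islice

def hereditarily_finite_sets():
    """Lazily enumerate hereditarily finite sets, tier by tier."""
    tier = [frozenset()]          # tier 0: just the empty set
    seen = set(tier)
    while True:
        yield from tier
        # next tier: all new subsets of everything seen so far
        universe = list(seen)
        next_tier = []
        for i in range(2 ** len(universe)):
            s = frozenset(x for b, x in enumerate(universe) if i >> b & 1)
            if s not in seen:
                seen.add(s)
                next_tier.append(s)
        tier = next_tier

def russell_candidates():
    """Generate-mode query: all X with not(member(X, X))."""
    return (x for x in hereditarily_finite_sets() if x not in x)

# The query happily produces members forever; islice imposes the cutoff.
first_five = list(islice(russell_candidates(), 5))
print(len(first_five))  # 5 -- and it would keep going
```

Nothing paradoxical ever happens inside the loop; the query is simply never done.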
Your assertions about when x is free aren’t right. The variable x is bound by its first appearance, and it is already bound (and so not “completely free”) in its second and third appearances.
The third appearance of x assumes that x is already bound, so that you can evaluate whether something is a member of it. But when processing the second appearance of x, you are testing candidate values of x, to find out whether they satisfy the predicate not(member(X,X)). Within that predicate, you assume that the second argument is already bound, and look for instantiations of the first argument for which the predicate is true.
You are not using the word “bound” in the way that it is used in formal logic. The two occurrences of ‘X’ in not(member(X,X)) are either both bound or both free. You cannot have one lying within the scope of a quantifier while the other is not.
You aren’t disagreeing with me! I’m just additionally pointing out that the semantics of member(X,Y) require either X or Y to be bound in order for the statement to be unified, and for the entire quantified statement to be meaningful. To find X that satisfy Russell’s paradox, you would have to violate the rule you just stated.
To find X that satisfy Russell’s paradox, you would have to violate the rule you just stated.
Do you mean that I can’t verify that a set is not an element of itself? But the empty set, for example, is not an element of itself. I can confirm this by noting that no thing is an element of the empty set (which is a finite check). Hence, the empty set, in particular, is not an element of the empty set. Therefore, by setting X = the empty set, I have an X such that X is not an element of X.
Note that in the above use of the predicate not(member(X,X)), both occurrences of ‘X’ were bound. So I’m not sure what you mean when you say that I “would have to violate the rule [I] just stated.”
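The finite check described above is easy to carry out mechanically. A minimal sketch in Python, with frozenset standing in as the set encoding (an assumption of mine, not a claim about any particular system):

```python
# Testing-mode check of not(member(X, X)) for one bound candidate,
# X = the empty set. frozenset stands in for a set encoding.
X = frozenset()

# The finite check: nothing is an element of the empty set...
assert all(False for _ in X)   # vacuously true; there is nothing to check

# ...so in particular the empty set is not an element of itself.
assert X not in X
print("not(member(X, X)) holds for X = the empty set")
```

With both occurrences bound to a concrete candidate, the test terminates and no paradox arises.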
Oops, you’re right. I was thinking of cases where you are trying to generate answers to a query, which is different from testing one candidate answer.
Let me start over:
My suspicion, which I haven’t proved, is that Russell’s Paradox is a result of specifying a logic without specifying the algorithms used to evaluate statements in that logic or to answer queries. In any complete specification of a logic, including both the rules and the computational steps taken to evaluate statements or answer queries, you will probably find that you can prove what the outcome of the Russell’s-paradox query is.
In other words, “formal logic” is too sloppy; my intuition is that in Russell’s paradox, it hides problems with order of execution.
Not really. In most standard axiomatic descriptions of set theory, like, say, ZFC, one doesn’t need to think at all about the computability of set relations. This is a good thing. We want, for example, the Busy Beaver function to be well-defined. And we want to be able to talk about things like the behavior of Turing machines when connected to non-computable oracles. If we insist that our set-theoretic relationships be computable, this sort of thing becomes very hard to talk about.
Huh? The Busy Beaver function is all about computability. The function itself is not computable; but you know that only because you’ve spelled out how the computation works, and can show that there is an answer but that it is not computable.
ZFC was designed to avoid Russell’s Paradox; it does this by getting closer to a computable description of set theory. The definition itself is actually computational; axioms stated as implications are just rewrite rules. It is ambiguous only in that it fails to specify which rewrite rule applies first.
I have no problem with using non-computable oracles. But the way Turing machines interact with those oracles is computationally explicit. The view I’m expressing suggests you could run into troubles if you posed an oracle a question that hid ambiguities that would be resolved by a computational specification of the oracle. But I’ve never seen any use of an oracle that did that. People don’t use oracles for Russell’s paradox-like problems; they typically use them for non-computable functions.
How are you defining computable? Naively I would think it would be something like describable by a Turing machine that outputs the result after finite time. But Busy Beaver won’t do that. There may be definitional issues at work here.
You aren’t disagreeing with me! I’m just additionally pointing out that the semantics of member(X,Y) require either X or Y to be bound in order for the statement to be unified, and for the entire quantified statement to be meaningful.
I’m not sure how you’re using these words. In formal logic, boundedness has nothing to do with semantics. Boundedness is a purely syntactic notion. And I don’t know what “unified” means in this context.
The “member” predicate is not a predicate like “shaves”; it is a primitive predicate, like “and”, with special semantics. If you want to generate the set of X for which member(X,Y) is true, you need Y to be bound. To generate the set of Y for which member(X,Y) is true, you need X to be bound. That’s what I meant.
Asking for the set of all X,Y for which member(X,Y) is true is going to get stuck in an infinite loop without ever generating any results. There is some connection between Russell’s paradox and queries that generate a useless infinite loop in a query language. You can look at a statement of the paradox and feel that it leads to two contradictory proofs; but if you define set theory and the algorithms used to answer queries, I think (but am not sure) you will find that you can prove what the algorithm will produce when run.
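The mode-dependence of member can be sketched with Python generators mimicking a Prolog-style answer enumeration (the function names are mine, not any real Prolog API): with Y bound, the answer set is finite and complete; with both arguments unbound over an open-ended universe, enumeration never returns a final answer.

```python
# Mode-dependence of member(X, Y), mimicked with Python generators.
# With Y bound, member enumerates finitely many answers for X; with
# both arguments unbound, the query space is unbounded and a naive
# enumeration only ever produces more output, never a complete set.

def member_with_y_bound(y):
    """Generate all X with member(X, Y) for a bound, finite Y: terminates."""
    yield from y

def member_both_unbound(universe):
    """Generate all pairs (X, Y) with member(X, Y) over an open-ended
    universe (an infinite stream of sets): no final answer exists."""
    for y in universe:
        for x in y:
            yield (x, y)

# Bound-Y mode: a finite, complete answer set.
y = frozenset({1, 2, 3})
answers = sorted(member_with_y_bound(y))
print(answers)  # [1, 2, 3]
```

Run with both sides unbound, member_both_unbound would have to be cut off externally, just like the Russell-set query.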
Thanks for the pointer. I was saying that my guess is that Russell’s paradox reduces to something like the Kleene-Rosser paradox, which is resolved when you spell out the order of execution; and that we shouldn’t worry about paradoxes that are resolved when you spell out the order of execution, because a “formal logic” without the details for performing computations spelled out is not sufficiently formal.
The “member” predicate is not a predicate like “shaves”; it is a primitive predicate, like “and”, with special semantics. If you want to generate the set of X for which member(X,Y) is true, you need Y to be bound. To generate the set of Y for which member(X,Y) is true, you need X to be bound. That’s what I meant.
That’s not a problem, because, like I said, both arguments of member(X,X) are bound in Russell’s paradox. They are both bound by the phrase “The set of all X such that. . .” that precedes them.
Asking for the set of all X,Y for which member(X,Y) is true is going to get stuck in an infinite loop without ever generating any results.
Again, I need to know how you are encoding sets before I can answer any question about what an algorithm would do on given input.
but if you define set theory and the algorithms used to answer queries, I think (but am not sure) you will find that you can prove what the algorithm will produce when run.
My talk of “primitive predicates” makes sense only if you’re talking about a computational implementation.
In a truly formal logic, the definitions of the predicates are determined purely by the axioms they appear in, including a set of axioms that are production rules, which therefore spell out execution (although order of execution is still usually ambiguous). ZF set theory is like that.
But I’ve never seen a presentation of Russell’s paradox which defined what “not” and “member-of” mean in an axiomatic way. I don’t think that had even been done for set theory at the time Russell proposed his paradox. The presentations I’ve seen always rely on your intuition about what “not” and “member of” mean.
The busy beaver function isn’t computable; but it is posed within a well-defined computational model.
Ok. So when do you consider something to be within an acceptable computational model or not?
It looks like you’re using a computational model to understand Russell’s paradox. Should you be thinking of Kleene-Rosser?
One wouldn’t want all queries within set theory to be decidable by some algorithm. If that were the case, then set theory would not be able to capture enough of arithmetic.
I’m aware of that. This particular paradox, though, is one where the question seems to be decidable both ways, not one where it’s undecidable.