komponisto and TheOtherDave appear to have been taking route 3 (challenging Richard’s purported access to evidence for zombie conceivability).
antigonus
I was very deliberately ignoring this distinction: “people” includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.
I don’t think Richard said anything to dispute this. He never said that his direct access to the conceivability of zombies renders his justification indefeasible.
This would appear to violate Aumann’s agreement theorem.
“Private knowledge” in this sense is ruled out by Aumann, as far as I can tell.
This is not a case in which you share common priors, so the theorem doesn’t apply. You don’t have, and in fact can never have, the information Richard (thinks he) has. Aumann’s theorem does not imply that everyone is capable of accessing the same evidence.
This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).
That’s certainly true, but I can’t see its relevance to what I said. In part because of some of the very reasons you name here, we can be mistaken about whether an observation O confirms a hypothesis H or not, hence whether an observation is evidence for a hypothesis or not. If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I’ve made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O’ and have been mistaking it for O. And since the observability of O’ doesn’t have any evidentiary bearing on H, he’ll say, my observation wasn’t actually the evidence that I took it to be. That’s the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H. [Edited this sentence to make it clearer.]
But I don’t see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can’t, then he should be doubting his own successful conception himself).
I don’t really see what this could mean.
As for “direct access”, well, that was Eliezer’s original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the “genetic fallacy” is no fallacy, at least not necessarily.)
Richard didn’t state that his evidence for the conceivability of zombies is absolutely incontrovertible. He just said he had direct access to it, i.e., he has extremely strong evidence for it that doesn’t follow from some intermediary inference.
Why not?
Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin’s paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.
In any case, what I want to know is how I should update my beliefs in light of Richard’s statements.
Well, you shouldn’t take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don’t have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can “robustly” conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, “third-person” evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?
In such a dispute, there is some observation O″ that (both parties can agree) you made, which is equal to (or implies) either O or O’, and the dispute is about which one of these it is the same as (or implies). But since O implies H and O’ doesn’t, the dispute reduces to the question of whether O″ implies H or not, and so you may as well discuss that directly.
Yes, that’s the “neutral” view of evidence Richard professed to deny.
The actual values of O and O’ at hand are “That one particular mental event which occurred in Richard’s mind at time t [when he was trying to conceive of zombies] was a conception of zombies,” and “That one particular mental event which occurred in Richard’s mind at time t was a conception of something other than zombies, or a non-conception.” The truth-value of the O″ you provide has little bearing on either of these.
EDIT: Here’s a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you’re experiencing red?
EDIT 2: Looking over this once again, I think I should be less glib in my first paragraph. Note that I’m denying that you share common priors, but then appealing to different evidence that you have to explain why this can be rational. If the difference in priors is a result of the difference in evidence, aren’t they just posteriors?
The answer I personally would give is that there are different kinds of evidence. Posteriors are the result of conditionalizing on propositional evidence, such as “Snow is white.” But not all evidence is propositional. In particular, many of our introspective beliefs are justified (when they are justified at all) by the direct access we have to our own experiences. Experiences are not propositions! You cannot conditionalize on an experience. You can conditionalize on a sentence like “I am having experience E,” of course, but the evidence for that sentence is going to come from E itself, not another proposition.
This theory is widely known as the “classical theory of probability” (see here: http://plato.stanford.edu/entries/probability-interpret/#ClaPro). The main problems are:
Fares poorly with infinite sets of events, as noted above.
Can’t handle irrational probabilities in an obvious way. Given a 1″ × 1″ square, what’s the probability of choosing a point within the inscribed circle of radius 0.5″? (It’s π/4, an irrational number, which no ratio of finitely many equiprobable cases can produce.)
Not clear how to handle “weighted” possibilities. If a coin is biased towards heads, there are still only two possibilities (it’ll land on heads or land on tails), but p(heads) > 50%.
Runs into problems with the principle of indifference. There are lots of different ways of partitioning the same set of events into finitely many disjoint alternatives. How do we pick the “right” partitioning?
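The inscribed-circle objection above can be made vivid with a quick Monte Carlo sketch (the setup and numbers here are my own illustration, not from the classical theory itself): the probability is π/4, an irrational number, so it can’t be written as a ratio of finitely many equiprobable cases.

```python
import random

# Pick uniform points in a 1x1 square and count how many land inside the
# inscribed circle of radius 0.5 centered at (0.5, 0.5). The exact
# probability is pi/4 ~ 0.785, which is irrational, so no finite count of
# equally possible cases can express it.
random.seed(0)
trials = 100_000
hits = sum(
    1
    for _ in range(trials)
    if (random.random() - 0.5) ** 2 + (random.random() - 0.5) ** 2 <= 0.25
)
estimate = hits / trials
print(estimate)  # close to pi/4 ~ 0.7854
```

The estimate converges on π/4, but the classical theory has no finite partition of equiprobable alternatives that yields that value exactly.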
Would you be willing to elaborate on that? I have a strong personal interest in your thoughts on the matter, having previously spent some time on the bizarre world of Christian apologetics, myself.
Your crucial, unstated premise is that concepts with fuzzy application conditions can’t or usually don’t pick out determinate qualities or relations in the world. Because if they actually can pick out such qualities, then those qualities may turn out to be analyzable in terms of others, and conceptual analysts can just take themselves to be analyzing the semantic reference of our concepts rather than the confused jumble of neural events in which those concepts are actually stored.
Furthermore, that premise seems highly non-obvious to me. It impinges upon a ton of different questions. And it falls under the domain of philosophy of language, not cognitive science. So I think your claim that good philosophy just is cognitive science is clearly false.
Regarding your claim that philosophy hasn’t produced any non-trivial conceptual reductions, that’s a pretty controversial view. In particular, I think there are highly successful reductions of the concept of truth—see the SEP article on the deflationary theory of truth. And it’s a lot harder to understand the concept of truth in terms of fuzzy pattern-matching than, say, the concept of socks.
This isn’t actually considered an open question in neurobiology, right?
It isn’t a question in neurobiology at all. If consciousness is epiphenomenal, then by definition you can’t perform any experiment to detect its existence. And insofar as neurobiology is the attempt to discover the material composition of the brain and the causal structure of brain events, and epiphenomenalism holds that consciousness is immaterial and causally silent, well...
I imagine because the thing you’ve successfully comprehended could be very, very bad. Not sure if that “obliges” you to feel anything (or if anything ever obliges anyone to feel anything), but if you’re actually wondering what the thought process is...
People who get dumped want to know their partners’ reasons for breaking up, not the biological etiology of those reasons. They are very likely to take lengthy discourses into the latter as insensitive, obfuscatory deflections (and probably correctly so).
I would call the ‘real reasons’ typically given to be obfuscatory deflections. People seldom know the actual reasons for why they want to break up. More often they are explicitly aware of one of the downstream effects of the actual reason.
I’m sure that’s the case. But my point was that if the real reason for the break-up was “I want to be with someone who possesses quality X that you lack,” then tacking on ”...because evolution made me that way” does not render the reason more real or add an additional, separate reason; it just renders the one reason better explained in a mostly irrelevant way.
If your point is that going on about evolutionary psychology adds to the obfuscation but not to the insensitivity, I disagree. There are often ways of more or less sensitively coming clean about (what one takes to be) one’s true reasons for breaking up. Maybe you wouldn’t go so specific as “you’re too fat,” but you could talk about lack of physical chemistry or whatever without uttering a falsehood or being too misunderstood. But there is no way of sensitively taking your devastated ex aside and handing him/her a Tooby and Cosmides paper to read for homework.
Digression into a bunch of theory and science impersonalizes things, as well as focusing on ‘me’ instead of ‘you’.
Not really. Any evolutionary explanation of why I am repulsed by your physical appearance is going to spend a lot of time dwelling on your physical appearance. And I think the impersonalization bit is the key—it is a ridiculously impersonal digression at a moment of extreme emotional vulnerability on the other person’s part. Most people will interpret impersonal explanations of this sort of emotionally impactful decision as an extremely cold-hearted way of excusing oneself. “I’m sorry I’ve just hurt your feelings. But allow me to explain how this is all just the work of the forces of sexual selection in our ancestral environment...”
Why do you expect happiness-causing beliefs to have a relationship to truthful beliefs?
The point is that if religion does make people happier, this makes it more probable than before on naturalism that lots of people would be theists, hence weakens the evidence for theism that comes from the surprising-ness of lots of people being theists. In other words, theism’s making people happier helps screen off the truth of theism from the phenomenon of widespread religious belief.
Of course, to be evidence against theism, the happiness thing has to lower the probability of theism on all of the evidence, not just lower it on one portion of the evidence. Some theists may want to argue that theists being happier is itself more likely on theism than on non-theism, and this isn’t terribly implausible. Without having a healthier sense of the conditional probabilities involved than I actually do, I don’t know how to evaluate the overall effect on theism’s posterior probability.
Actually, I should be more clear. Let T = theism, H = theism makes people happier, L = lots of people are theists. Then H&L is evidence for T iff the quotient P(H&L|T)/P(H&L|~T) > 1. We can rewrite this quotient as P(L|H&T)/P(L|H&~T) * P(H|T)/P(H|~T). The thread-starter’s argument at best shows that T is irrelevant to L once we know H, hence P(L|H&T)/P(L|H&~T) = 1. So in this best-case scenario, the quotient becomes P(H|T)/P(H|~T). If theists can show this is greater than 1, then H&L still ends up as evidence for theism. So that’s what you really have to be asking.
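The algebra above can be checked numerically. In this sketch all the probabilities are made up, chosen only so that T is screened off from L given H (i.e., P(L|H&T) = P(L|H&~T)), matching the best-case scenario for the thread-starter’s argument:

```python
# T = theism, H = "theism makes people happier", L = "lots of people are
# theists". Hypothetical numbers; the point is only the identity
#   P(H&L|T)/P(H&L|~T) = [P(L|H&T)/P(L|H&~T)] * [P(H|T)/P(H|~T)].
p_H_given_T = {True: 0.8, False: 0.5}    # P(H|T) and P(H|~T)
p_L_given_H = {True: 0.9, False: 0.9}    # P(L|H&T) = P(L|H&~T): T screened off

def p_HL_given_T(t):
    # P(H&L|T) = P(H|T) * P(L|H&T) by the chain rule (same for ~T)
    return p_H_given_T[t] * p_L_given_H[t]

lhs = p_HL_given_T(True) / p_HL_given_T(False)
rhs = (p_L_given_H[True] / p_L_given_H[False]) * (p_H_given_T[True] / p_H_given_T[False])
print(lhs, rhs)  # both 1.6: the quotient collapses to P(H|T)/P(H|~T)
```

With screening off, the whole likelihood ratio reduces to P(H|T)/P(H|~T), which is exactly why that remaining factor is “what you really have to be asking.”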
I like this post a lot! And I’ve agreed with most of the things you’ve said in the comments.
However, I think the problem you raise regarding Yudkowsky’s objection can be raised regarding your own. That the omniscient entity experiences qualia as a result of its perfect simulation of a human brain can be granted. Chalmers is not saying that the entity need be told a further fact about the existence of qualia not deducible from its physical computations, but rather that it needs further facts about the existence of other minds’ possessing qualia. It’s theoretically possible that its simulation of my brain produces qualia, but my simulation of my brain (which is, of course, just my brain) doesn’t. Of course, this feels terribly unlikely, and presumably the entity would justly assign a very high probability to my having conscious experiences identical to the ones that it had while perfectly simulating me. But this is nevertheless not a strict deduction from the physical facts. According to your post, merely having high confidence doesn’t cut it.
To restate, our belief in this outcome of the thought experiment is merely extremely confident. But given this most probable outcome, since the computation is exactly the same, the qualia experienced by the omniscient being are certainly exactly the same. This is quite a subtle distinction!
I’m not sure you took my point correctly. I am arguing that the omniscient entity, and not just us, can only be extremely confident that other people are having conscious experiences.
The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations. If they don’t supervene in this way, two identical computations may differ in the qualia they produce. Furthermore, certain knowledge of this supervenience is not built into the entity’s omniscience. So he lacks certain knowledge of my experiences as a result of his simulation, even while obtaining certain knowledge of his own. So even though his computations have led him to perfect knowledge of the configuration of all quarks and the like, he still lacks perfect knowledge regarding my qualia. This is the conclusion Chalmers is trying to arrive at.
I’m worried we’re talking past each other, since I would give largely the same reply as before.
Since it is certain that the computation is the same as yours, it is certain that you experience the same qualia.
The word “it” here is referring to the superintelligence correct? Because if so, this is the specific inference I’m disputing the superintelligence will legitimately make. As I wrote: “The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations.” (It would be helpful for me if you gave me a simple yes-or-no to this principle.) Even if we suppose ourselves to be certain of the supervenience (and therefore certain that the entity undergoes identical experiences to mine in the process of simulating me), what matters here is the superintelligence’s certainty around it. So in this scenario, there is no “regardless of whether the superintelligence knows qualia supervene upon brain states.”
O.K., you’re correct that full-fledged supervenience isn’t necessary. What the superintelligence instead needs is certain knowledge of the following weaker claim:
(1) Any two identical computational processes yield the same qualia if at some point the process is performed inside of the specific region R of the universe that the superintelligence is looking at.
But since the superintelligence can’t be certain of (1), either, it doesn’t really make a difference. If you disagree, how can the superintelligence deduce (1) from its complete description of the physical events in R? It seems to me that all it can deduce are A. the state of the matter in R at any particular time, and B. that its own performance of some of the processes in R yields qualia. But (1) is clearly not a logical consequence of A. and B.
You may want to look at Brandon Fitelson’s short paper Evidence of evidence is not (necessarily) evidence. You seem to be arguing that, since we have strong evidence that the book has strong evidence for Zoroastrianism before we read it, it follows that we already have (the most important part of) our evidence for Zoroastrianism. But it turns out that it’s extremely tricky to make this sort of reasoning work. To use the most primitive example from the paper, discovering that a playing card C is black is evidence that C is the ace of spades. Furthermore, that C is the ace of spades is excellent evidence that it’s an ace. But discovering that C is black does not give you any evidence whatsoever that C is an ace.
The problem here—at least one of them—is that discovering C is black is just as much evidence for C being the x of spades for any other card-value x. Similarly, before opening the book on Zoroastrianism, we have just as much evidence for the existence of strong evidence for Christianity/atheism/etc, so our credences shouldn’t suddenly start favoring any one of these. But once we learn the evidence for Zoroastrianism, we’ve acquired new information, in just the same way that learning that the card is an ace of spades provides us new information if we previously just knew it was black.
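Fitelson’s card example can be verified by brute enumeration over a standard 52-card deck (the code below is just a check of the arithmetic, not anything from the paper itself):

```python
from fractions import Fraction

# Learning "C is black" confirms "C is the ace of spades" (1/52 -> 1/26),
# yet provides no evidence at all that "C is an ace" (1/13 either way).
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
suits = ["spades", "clubs", "hearts", "diamonds"]  # first two are black
deck = [(r, s) for r in ranks for s in suits]
black = [c for c in deck if c[1] in ("spades", "clubs")]

p_aos = Fraction(1, len(deck))                                        # 1/52
p_aos_given_black = Fraction(
    sum(1 for c in black if c == ("A", "spades")), len(black))        # 1/26
p_ace = Fraction(sum(1 for c in deck if c[0] == "A"), len(deck))      # 1/13
p_ace_given_black = Fraction(
    sum(1 for c in black if c[0] == "A"), len(black))                 # 1/13

print(p_aos_given_black > p_aos)   # True: "black" confirms "ace of spades"
print(p_ace_given_black == p_ace)  # True: "black" is no evidence for "ace"
```

This is the sense in which evidence of evidence fails to transmit: “black” raises the probability of a proposition that would itself be excellent evidence for “ace,” without raising the probability of “ace” one bit.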
I do suspect that there are relevant disanalogies here, but don’t have a very detailed understanding of them.
I believe he’s trying to draw a distinction between two potential sources of evidence:
The factual claim that people believe zombies are conceivable, and
The actual private act of conceiving of zombies.
Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he’s successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don’t need to ask, “How do I know I’m really seeing blue right now, and not red?” Your justification for your belief that you’re seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is “direct” insofar as it comes directly from the sensation, and not via some intermediate process of reasoning which involves inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.
A couple of things to note. One is that this evidence is wholly private. You don’t have direct access to his conceptions, just as you don’t have direct access to his perceptions. The only evidence Richard can give you is testimony. Moreover, he agrees that testimony of this sort is extremely weak evidence. But it’s not the evidence he claims that his belief rests on. The evidence that Richard appeals to can be evidence-for-Richard only.
Another thing is that the direct evidence he appeals to is not “neutral.” If p-zombies really are inconceivable, then he’s in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place (in just the same way that seeing red isn’t evidence that you’re seeing blue). So there’s no easy way to set aside the question of whether Richard’s conception is evidence-for-him from the question of whether p-zombies are in general conceivable. The worthiness of Richard’s source of evidence is inextricable from the actual truth or falsehood of the claim in contention, viz., that p-zombies are conceivable. But he thinks this isn’t a problem.
If you want to move ahead in the discussion, then the following are your options:
You simply deny that Richard is in fact conceiving of p-zombies. This isn’t illegitimate, but it’s going to be a conversation-stopper, since he’ll insist that he does have them but that they’re private.
You accept that Richard can successfully conceive of p-zombies, but that this isn’t good evidence for their possibility (or that the very notion of “possibility” in this context is far too problematic to be useful).
You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge. If you go this route, you have to be careful not to set yourself up for easy reductio. Specifically, you’d better not be led to deny the rationality of believing that you’re seeing blue when, e.g., you highlight this text.
I hope this helps clear things up. It pains me when people interpret their own confusion as evidence of some deep flaw in academic philosophy.