There’s a funny thing about nihilism: It’s not decision-relevant. Imagine being a nihilist, deciding whether to spend your free time trying to bring about an awesome post-AGI utopia, vs sitting on the couch and watching TV. Well, if you’re a nihilist, then the awesome post-AGI utopia doesn’t matter. But watching TV doesn’t matter either. Watching TV entails less exertion of effort. But that doesn’t matter either. Watching TV is more fun (well, for some people). But having fun doesn’t matter either. There’s no reason to throw yourself at a difficult project. There’s no reason not to throw yourself at a difficult project. Isn’t it funny?
I agree except for the funny part.
I don’t have a grand ethical theory, and I’m not ready to sit in judgment of anyone else; I’m just deciding what to do on my own account. There’s a reason I ended the post with “Dentin’s prayer of the altruistic nihilist”; that’s how I feel, at least sometimes. I choose to care about information-processing systems that are (or “perceive themselves to be”?) conscious in a way that’s analogous to how humans are, with the details still uncertain. I want them to be (or “to perceive themselves to be”?) happy and have awesome futures. So here I am :-D
Thanks for describing this. I’m both impressed and a bit shocked that you’re being consistent.
This is a pretty weird claim, right? I mean, you remember writing down the statement. Would you agree with that claim? No way, right?
Let’s assume I do. (I think I would have agreed a few years ago, or at least assigned significant probability to this.) I still think (and thought then) that there is a slam-dunk chain from ‘I experience consciousness’ to ‘therefore, consciousness exists’.
Let A = “I experience consciousness” and B = “consciousness exists”. Clearly A⟹B, because experiencing anything is already sufficient for what I call consciousness. Furthermore, A is clearly true. Hence B is true. Nothing about your Claim contradicts any step of this argument.
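For what it’s worth, the logical skeleton here is just modus ponens, which can be written as a one-line Lean sketch (the identifiers are my own):

```lean
-- Modus ponens: given a proof of A and a proof of A → B, conclude B.
-- Here A stands for "I experience consciousness" and
-- B for "consciousness exists", as defined above.
example (A B : Prop) (hA : A) (hAB : A → B) : B := hAB hA
```

The inference step itself is uncontroversial; the whole debate is over the two premises, i.e., whether A is really true and whether A really suffices for B.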
I think the reason intuitions differ so much on this topic is that we are comparing very-low-probability theories against each other, and the question is which one is lower. (And operations on small numbers are more error-prone than operations on larger ones.) At least my impression (correct me if I’m wrong) is that the subjective proof of consciousness would be persuasive, except that it seems to imply Claim, and Claim is a no-go, so the subjective proof has to give in. I.e., you have both (subjective proof is valid → Claim) and ¬Claim, and therefore ¬(subjective proof is valid).
My main point is that it doesn’t make sense to assign anything a lower probability than ¬A and ¬(A⟹B), because A is immediately proven by the fact that you experience stuff, and A is the definition of B, so A⟹B is utterly trivial. You can make a coherent-sounding (if far-fetched) argument for why Claim is true, but I’m not familiar with any coherent argument that A is false (other than that it must be false because of what it implies, which is again the argument above).
My probabilities (not adjusted for the fact that one of them must be true) look something like this:
- A is false, or A⟹B is false: ∼0
- Consciousness is an emergent phenomenon (i.e., matter is unconscious, but consciousness appears as a result of information processing and has no causal effect on the world; this would imply Claim): ∼0.001
- Something weird like dual-aspect monism (consciousness and matter are two views of the same process; in particular, all matter is conscious): ∼0.05
Hence what I said earlier: I don’t believe Claim right now because I think there is actually a not-super-low-probability explanation, but even if there weren’t, it would still not change anything because 0.001 is a lot more than 0. I do remember finding EY’s anti-p-zombie post persuasive, although it’s been years since I’ve read it.
I can’t say I understand it very well either, and see also Luke’s report Appendix F and Joe’s blog post. From where I’m at right now, there’s a set of phenomena that people describe using words like “consciousness” and “qualia”, and nothing we say will make those phenomena magically disappear. However, it’s possible that those phenomena are not what they appear to be.
We all perceive that we have qualia. You can think of statements like “I perceive X” as living on a continuum, like a horizontal line. At the left extreme of the line, we perceive things because those things are really out there in the world and our senses are accurately and objectively conveying them to us. At the right extreme, we perceive things because of quirks of our perceptual systems.
I think that’s just dodging the problem since any amount of subjective experience is enough for A. The question isn’t how accurately your brain reports on the outside world, it’s why you have subjective experience of any kind.
Thanks! I’m sympathetic to everything you wrote, and I don’t have a great response. I’d have to think about it more. :-D