Are you suggesting that in the case of the hard problem, there may be some equivalent of the ‘flat earth’ assumption that the hard-problemists hold so tightly that they can’t even comprehend a ‘round earth’ explanation when it’s offered?
Another assumption that I think people with hard-problem intuitions hold tightly is that whether something is conscious must line up with what they intrinsically value, at a universal level. That's false: you can value something without it being conscious, and something can be conscious without being valuable to you.
I think the methodology of the post below is bad, but I have a high prior that something like this is happening in the consciousness debate, and that it's confusing everyone:
https://www.lesswrong.com/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term-for
Can you elaborate a bit? Personally, I have intuitions on the hard problem and I think conscious experience is the only type of thing that matters intrinsically. But I don’t think that’s part of the definition of ‘conscious experience’. That phrase would still refer to the same concept as it does now if I thought that, say, beauty was intrinsically valuable—or even if I thought conscious experience was the only thing that didn’t matter.
Basically, if you want consciousness to matter morally/intrinsically, you will prefer theories of consciousness that match your views about what counts as intrinsically valuable, irrespective of whether those theories are true. In particular, it should be far more surprising than it apparently is that the correct theory of consciousness just so happens to match what you find intrinsically valuable, or at least matches far better than chance would predict, because what you value or view as moral is inherently relative and doesn't really bear on the scientific problem of consciousness.
I think this is part of the reason people dislike reductive conceptions of consciousness, on which consciousness is built out of parts like neurons/atoms/quantum fields that people usually don't value in themselves: they believe consciousness should arise from parts or units they find morally valuable. It's also part of the reason people dislike theories implying that consciousness extends beyond the species they value intrinsically, which for most people means humans.
I think every side here has this problem: arguments for the moral worth of a species are usually conditioned on that species being conscious and able to suffer, and people don't want to admit that it's perfectly coherent to be okay with someone suffering even though they are conscious, and equally coherent to value something like a rock, or all rocks, even though rocks aren't conscious and can't suffer.
Another way to say it: even if a theory implies that something you don't value intrinsically is conscious, you don't have to change your values much, and you can still go about your day mostly fine.
I think a lot of people (though not you) unintentionally conflate moral value with the scientific question of what consciousness is, because the term is so value-loaded.