Moral Realism

This argument rests on foundations of moral realism, which I don’t think is actually a coherent meta-ethical view.
Under an anti-realist worldview, it makes total sense that we would assign axiological value in a way which is centered around ourselves. We often choose to extend value to things which are similar to ourselves, based on notions of fairness, or notions that our axiological system should be simple or consistent. But even if we knew everything about human vs cow vs shrimp vs plant cognition, there’s no way of doing that which is objectively “correct” or “incorrect”, only different ways of assembling competing instincts about values into a final picture.
Pain is bad because...
Pain is bad because of how it feels. When I have a bad headache, and it feels bad, I don’t think “ah, this detracts from the welfare of a member of a sapient species.” No, I think it’s bad because it hurts.
I disagree with this point. If I actually focus on the sensation of severe pain, I notice that it’s empty and has no inherent value. It’s only when my brain relates the pain to other phenomena that it has some kind of value.
Secondly, even the framing that “pain” “feels” “like” “something” identifies the firing of neurons with the sensation of feeling in a way that is philosophically careless.
For an example which ties these points together, when you see something beautiful, it seems like the feeling of aesthetic appreciation is a primitive sensation, but this sensation and the associated value label that you give it only exist because of a bunch of other things.
A different example: currently my arms ache because I went to the gym yesterday, but this aching doesn’t have any negative value to me, despite it “feeling” “bad”.
Passing the BB Turing test?
Overall I don’t think I can model your world-model very well. I think you believe in mind-stuff which obeys mental laws and is bound to physical objects by “psychophysical laws”, which means that any physical object which trips some kind of brain-ish-ness threshold essentially gets ensouled by the psychophysical laws binding a bunch of mind-stuff to it, which also cause the atoms of that brain-thing to move around differently. Then the atoms can move in a certain way which causes the mind-stuff to experience qualia, which are kind of primitive in some sense and have inherent moral value.
I don’t know what role you think the brain plays in all this. I assume it plays some role, since the brain does a lot of work.
I think you think that the inherent moral value is in the mental laws, which means that any brain with mind-stuff attached has a kind of privileged access to moral reasoning, allowing it to—eventually—come to an objectively correct view on what is morally good vs bad. Or in other words, morality exists as a kind of convergent value system in all mind-stuff, which influences the brains that have mind-stuff bound to them to behave in a certain way.
This was also my reaction, stated better than I could have done.
I think there’s a version of this argument that says that most people would not reflectively endorse the animal suffering they cause, if they truly understood themselves and their own values in a CEV-like sense. I don’t know if that version is true either.