I only have my first-person evidence to go on. This bothers me a lot, but I’m assuming that someday we will have solved all the problems in philosophy of mind and can then map out precisely what we mean by “sentience”, having it correspond to specific implemented algorithms or brain states.
What evidence do you have for thinking that your first-person intuitions about sentience “cut reality at its joints”? Maybe if you analyze what goes through your head when you think “sentience”, and then try to apply that to other animals (never mind AIs or aliens), you’ll just end up measuring how different those animals are from humans along some completely arbitrary and morally unimportant implementation feature.
If, after solving all the problems of philosophy, you found out something like this, would you accept it, or would you say that “sentience” was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?
If I understand it correctly, this is the position endorsed here. I don’t think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn’t seem to be endemic to the treatment of non-human animals, though; you’d have it with any kind of utility function that values well-being.