I partially covered this in my response to TAG above, but let me expand on it a bit more, since your comment makes a good point, and my definition of fairness above has some rhetorical dressing that is worth dropping for the sake of clarity.
I would define fairness, at a high level, as taking care not to gerrymander our values to achieve a specific outcome, and instead trying to generalize our own ethics into something that genuinely works for everyone and everything, as best it can. In this specific case, that would mean making sure that our moral reasoning is responsive first and foremost to evidence from reality, based on our best understanding of which metrics are ethically relevant.
For instance, the color of a creature’s shell, pelt, or skin has no significant ethical dimension, because it has little bearing on things like the suffering, pleasure, and valence of that creature’s experience (and what effect it does have usually stems from a subjective aesthetic preference, often an externally imposed one, rather than from the color itself). If our ethics treated exterior coloration as a parameter, they would not be built on a firm foundation.
By contrast, intelligence does seem to be a relevant ethical dimension, because it determines things like whether an organism can worry about future suffering (thereby suffering twice), and whether it is capable of participating in more complex activities with correspondingly more complex valences, positive or negative. Of course, a great deal of further work is required to understand how best to consider and parameterize intelligence in this context, but we are not unjustified in believing it is relevant.
I agree that, ultimately, choices will need to be made; I am of the opinion that those choices should be as inclusive as possible, balanced against our best understanding of reality, ethics, and what will bring about the best outcome for all involved. Does that answer your question?
It does answer my question. I was wondering if you were assuming some sort of moral realism in which fairness is neatly defined by reality. I’m glad to see that you’re not.
For a fascinating in-depth look at how hard it is to define a fair alignment target that still includes humanity, see A Moral Case for Evolved-Sapience-Chauvinism and the surrounding sequence.