I mean, I can’t make up for any atrocity I commit simply by out-breeding my victims!
I think this is a good reductio of many meta-level non-moral-realist FAI approaches like CEV. They retrospectively endorse genocide. (ETA: And of course they also very much disendorse anti-natalist preferences/tendencies, for whatever that’s worth.)
I think this is a good reductio of many meta-level non-moral-realist FAI approaches like CEV. They retrospectively endorse genocide.
I’ve had thoughts along similar lines myself. However, I must point out that it isn’t CEV that is retrospectively endorsing genocide so much as it is the hypothetical people who commit genocide prior to having their CEV calculated who are (evidently) endorsing genocide. Yes, extrapolating the volition of folks who are into genocide (that you don’t approve of) is a bad idea. It is rather critical just which set of agents you plug into a CEV algorithm!
it is the hypothetical people who commit genocide prior to having their CEV calculated who are (evidently) endorsing genocide.
I wasn’t really thinking of the same people doing both so much as of Germans gaining more biological fitness than Poles due to the Holocaust, where their descendants’ population differences might have massive effects on the output of CEV. You can’t really blame the current Germans or Poles for existing in greater or lesser numbers, and yet CEV still doesn’t attempt to adjust for this, which seems to violate normal moral intuitions about consistency. If we accept that natural selection isn’t a moral process and is beyond the reach of God (which doesn’t make any sense, but whatever), then it seems really odd to just accept its results as moral even after we’ve gained the ability to reflect and fix past errors.
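For what it’s worth, the population-sensitivity worry can be made concrete with a toy model. Everything here (the headcount-weighted averaging scheme and all the numbers) is invented for illustration and is not a claim about how CEV actually aggregates; the point is just that any procedure that weights preferences by population inherits the effects of whatever changed those populations:

```python
# Toy illustration (NOT actual CEV): aggregate a one-dimensional
# "preference" by population-weighted average. All numbers invented.

def aggregate(populations, preferences):
    """Population-weighted mean of per-group preference values."""
    total = sum(populations)
    return sum(p * v for p, v in zip(populations, preferences)) / total

# Two groups with opposed preferences on some policy axis.
prefs = [+1.0, -1.0]

# Counterfactual world with equal populations: the aggregate is neutral.
print(aggregate([50, 50], prefs))   # 0.0

# Actual world where an atrocity shifted relative numbers: the aggregate
# tilts toward the larger group, with no correction for how it got larger.
print(aggregate([80, 20], prefs))   # 0.6
```

The asymmetry between the two calls is the whole complaint: the procedure is blind to the causal history of its input weights.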
You can’t really blame the current Germans or Poles for existing in greater or lesser numbers, and yet CEV still doesn’t attempt to adjust for this, which seems to violate normal moral intuitions about consistency.
I don’t see any violation of moral intuitions there. It isn’t the business of CEV to second-guess what morality should be. It works out the morality (and other preferences) of the input class and seeks to satisfy them. So if you look at the output of CEV, you will see that it does take the impact of past injustices into account, according to whatever your moral intuitions are.
If we accept that natural selection isn’t a moral process and is beyond the reach of God (which doesn’t make any sense, but whatever), then it seems really odd to just accept its results as moral even after we’ve gained the ability to reflect and fix past errors.
Once you eliminate the effects of natural selection (and of assorted past genocides) you don’t have anything left. It isn’t odd to accept whatever morality you happen to have as morality when there isn’t anything else. I happen to have my morality because having it helped my ancestors kill their rivals, stay alive and get laid. Taking away those influences doesn’t leave me with a purer morality; it leaves me with absolutely nothing.
It is rather critical just which set of agents you plug into a CEV algorithm!
I take this (very real) possibility as strongly indicating that CEV-like approaches are insufficiently meta and that we should seriously expend a lot of effort on (getting closer to) solving moral philosophy if at all possible. (Or alternatively, as Wei Dai likes to point out, solving metaphilosophy.)
Put slightly differently: if I have some set of ethical standards S against which I’m prepared to compare the results R of a CEV-like algorithm, with the intention of discarding R where R conflicts with S, it follows that I consider wherever I got S from a more reliable source of ethical judgments than I consider CEV. If so, that strongly suggests that if I want reliable ethical judgments, what I ought to be doing is exploring the source of S.
Conversely, if I believe a CEV-like algorithm is a more reliable source of ethical judgments than anything else I have available, then I ought to be willing to discard S where it conflicts with R.
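The consistency point in the last two paragraphs amounts to a trivial decision rule; as a schematic (all names invented, and this is of course not an implementation of anything), whichever source you let win in cases of conflict is, by that very act, the source you are treating as more reliable:

```python
# Schematic of the trust argument (names invented). Whichever judgment
# wins when S and R conflict reveals which source you rank as more
# reliable.

def resolve(s_judgment, r_judgment, trust_s_more):
    """Return the judgment acted on when prior standards S may
    conflict with the algorithm's output R."""
    if s_judgment == r_judgment:
        return s_judgment  # no conflict, nothing to decide
    return s_judgment if trust_s_more else r_judgment

# Prepared to discard R wherever it conflicts with S:
print(resolve("forbid", "permit", trust_s_more=True))   # forbid

# Conversely, treating the algorithm as the more reliable source:
print(resolve("forbid", "permit", trust_s_more=False))  # permit
```

The interesting question, on this framing, is not which branch to take but what justifies setting the trust flag either way before the conflict arises.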
Sure.