A rough heuristic I use is that my moral preferences are those that I prefer regardless of whether I perceive the outcome (for example, deluding myself that other people aren’t suffering when they in fact are isn’t good), and my hedonic preferences are those where I only care about what I perceive (if I think what I’m eating tastes good, it doesn’t matter what it “really” tastes like).
This heuristic works for things like Deep Blue (it doesn’t care about chess games that it’s not aware of), but it doesn’t match my intuition for paperclippers. Any thoughts on why this heuristic breaks down there? Or is paperclipping simply a moral preference that I disapprove of, along the same lines as keeping women veiled or not eating shrimp?
Any thoughts on why this heuristic breaks down there?
I think that both morality and the desires of paperclippers are examples of what might be called “non-personal preferences,” that is, preferences that, as you said, are preferred regardless of whether or not one perceives their fulfillment. All moral preferences are non-personal preferences, but not all non-personal preferences are moral preferences.
The reason the heuristic works most of the time for you is, I think, that humans don’t have many non-personal preferences. Having experiences is the main thing we care about; morality is one of the few non-personal preferences we have. So if you have a preference you hold regardless of whether or not you perceive the outcome, it is probably a moral preference.
The reason the heuristic breaks down for paperclippers is that they are hypothetical alien entities with nothing but non-personal preferences. They aren’t human, so their non-personal preferences aren’t moral ones.
What would I propose as a replacement heuristic? It’s a hard question, but I’d say moral preferences tend to have the following properties:
They are non-personal.
They are concerned about the wellbeing of people.
They are concerned about what sorts of people we ought to create.
They are usually fair and impartial in some (but not necessarily all) ways.
If you want an example of a non-moral, non-personal preference that humans do have, I think a parent’s love for their children might be a candidate. Parents are willing to sacrifice large amounts of hedonic utility for their children even when they will never perceive the outcome of that sacrifice. And you can’t consider it a purely moral preference, because the amount they are willing to sacrifice goes far beyond what anyone would be morally obliged to sacrifice for a stranger. If they sacrificed a stranger’s hedonic utility as freely as they sacrifice their own, they would be justly condemned for nepotism.