What does it even mean for a creature not to have “morals” and yet to have goals (often competing ones) that it tries to pursue?
Why wouldn’t it be possible to arbitrarily relabel any of those goals as ‘morals’? How would you restrict morals such that they are not conceivably identical with any part of a utility function, without being overly anthropocentric in what you allow as a valid moral code?
Why wouldn’t it be possible to arbitrarily relabel any of those goals as ‘morals’?
You can, but I think you’ve lost a distinction that most of us make. We have preferences. Some of those preferences we call moral. Some we don’t. Losing that distinction loses information.
You may prefer vanilla over chocolate, but few would call that a moral preference. One part of the distinction is that ice cream flavor preference involves no third-person or higher-order preferences: we don’t disapprove of people for eating or preferring the disfavored flavor, and we don’t approve of people for preferring the favored one.
Deep Blue has a goal (winning chess games), but I wouldn’t call that “morals”. OTOH, I can’t think of any decent explicit criterion for which goals I would call morals other than “I know it when I see it” at the moment.
A rough heuristic I use is that my moral preferences are those that I prefer regardless of whether I perceive the outcome (for example, it isn’t good to delude myself into thinking that other people aren’t suffering when they in fact are), and my hedonic preferences are those where I only care about what I perceive (if I think what I’m eating tastes good, it doesn’t matter what it “really” tastes like).
This heuristic works for things like Deep Blue (it doesn’t care about chess games that it’s not aware of), but it doesn’t match my intuition for paperclippers. Any thoughts on why this heuristic breaks down there? Or is paperclipping simply a moral preference that I disapprove of, along the same lines as keeping women veiled or not eating shrimp?
Any thoughts on why this heuristic breaks down there?
I think that both morality and the desires of paperclippers are examples of what might be called “non-personal preferences,” that is, preferences that, as you said, are preferred regardless of whether or not one perceives their fulfillment. All moral preferences are non-personal preferences, but not all non-personal preferences are moral preferences.
The reason the heuristic works most of the time for you is, I think, that humans don’t have many non-personal preferences. Having experiences is the main thing we care about, and morality is one of the few non-personal preferences we do have. So if you have a preference you prefer regardless of whether or not you perceive the outcome, it is probably a moral preference.
The reason that heuristic breaks down for paperclippers is that they are hypothetical alien entities with nothing but non-personal preferences. They aren’t human, and their non-personal preferences aren’t moral ones.
What would I propose as a replacement heuristic? It’s a hard question, but I’d say moral preferences tend to have the following properties:
They are non-personal.
They are concerned about the wellbeing of people.
They are concerned about what sorts of people we ought to create.
They are usually fair and impartial in some (but not necessarily all) ways.
If you want an example of what might be a non-moral, non-personal preference that humans do have, I think a parent’s love for their children might be a candidate. Parents are willing to sacrifice large amounts of hedonic utility for their children even if they do not perceive the outcome of that sacrifice. And you can’t consider it a purely moral preference because the amount they are willing to sacrifice goes way beyond what a stranger would be morally obliged to sacrifice. If they sacrificed a stranger’s hedonic utility as freely as they sacrifice their own they would be justly condemned for nepotism.
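If it helps to see the two heuristics side by side, here is a toy Python sketch of how one might mechanize them. Everything in it (the Preference record, the classify function, the example entries) is my own hypothetical formalization, not anything proposed above, and it treats the “tend to have” properties as hard conditions, which is a simplification.

```python
# Toy sketch only: a hypothetical formalization of the heuristics above.
# "Preference" and "classify" are names made up for illustration.
from dataclasses import dataclass


@dataclass
class Preference:
    name: str
    holds_even_if_unperceived: bool    # "non-personal" in the sense above
    concerns_peoples_wellbeing: bool   # or what sorts of people we ought to create
    impartial_in_some_way: bool


def classify(p: Preference) -> str:
    # First heuristic: hedonic/personal preferences only matter as perceived.
    if not p.holds_even_if_unperceived:
        return "hedonic/personal"      # e.g. vanilla over chocolate
    # Replacement heuristic: non-personal preferences count as moral only if
    # they also have (roughly) the other listed properties.  Treating these
    # as hard requirements simplifies the "tend to have" wording above.
    if p.concerns_peoples_wellbeing and p.impartial_in_some_way:
        return "moral"
    return "non-personal but non-moral"  # e.g. paperclipping, parental partiality


examples = [
    Preference("vanilla over chocolate", False, False, False),
    Preference("maximize paperclips", True, False, True),
    Preference("reduce others' suffering", True, True, True),
    Preference("favor my own children over strangers", True, True, False),
]
for p in examples:
    print(f"{p.name}: {classify(p)}")
```

On these toy inputs, parental partiality comes out as non-personal but non-moral, which matches the nepotism point above, while paperclipping fails the wellbeing condition rather than the non-personal one.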