Eliezer, would the following be an accurate synopsis of what you call morality?
Each of us has an action-evaluating program. This should be thought of as a Turing machine encoded in the hardware of our brains. It is a determinate computational dynamic in our minds that evaluates the actions of agents in scenarios. By a scenario, I mean a mental model of a hypothetical or real situation. Now, a scenario that models agents can also model their action-evaluating programs. An evaluation of an action in a scenario is a moral evaluation if, and only if, the same action is given the same value in every scenario that differs from the first one only in that the agent performing the action has a different action-evaluating program.
In other words, moral evaluations are characterized by being invariant under a particular kind of modification: namely, one that consists only of assigning a different action-evaluating program to the agent performing the action.
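To make the criterion concrete, here is a minimal sketch in Python. Everything in it is my own hypothetical stand-in rather than anything you've specified: `evaluate` plays the role of one person's action-evaluating program, and a scenario is just a dictionary whose `agent_evaluator` entry models the acting agent's own evaluator.

```python
# Hypothetical sketch of the invariance criterion; all names are my own.
# An evaluation counts as moral iff it is unchanged in every variant of the
# scenario that differs only in the acting agent's action-evaluating program.

def is_moral_evaluation(evaluate, action, scenario, alternative_evaluators):
    """Return True if `evaluate` gives `action` the same value in every
    variant of `scenario` that swaps out only the agent's evaluator."""
    baseline = evaluate(action, scenario)
    for other in alternative_evaluators:
        variant = {**scenario, "agent_evaluator": other}  # change only this
        if evaluate(action, variant) != baseline:
            return False
    return True
```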
Does that capture the distinctive quality of moral evaluations that you’ve been trying to convey?
A few thoughts:
(1) It seems strange to me to consider moral evaluations, so defined, to be distinct from personal preferences. With this definition, I would say that moral evaluations are a special case of personal preferences. Specifically, they are the preferences that are invariant under a certain kind of modification to the scenario being considered.
I grant that it is valuable to distinguish this particular kind of personal preference. First, I can imagine that you’re right when you say that it’s valuable if one wants to build an AI. Second, it’s logically interesting because this criterion for moral evaluation is self-referential, in that it stipulates how the action-evaluating program does (or rather, doesn’t) react to hypothetical changes to itself. Third, by noting this distinctive kind of action-evaluation, you’ve probably helped to explain why people are so prone to thinking that certain evaluations are universally valid.
Nonetheless, the point remains that your definition amounts to considering moral evaluation to be nothing more than a particular kind of personal preference. I therefore don’t think that it does anything to ease the concerns of moral universalists. Some of your posts included very cogent explanations of why moral universalism is incoherent, but I think you would grant that the points that you raised there weren’t particularly original. Moral relativists have been making those points for a long time. I agree that they make moral universalism untenable, but moral universalists have heard them all before.
Your criterion for moral evaluation, on the other hand, is original (to the best of my meager knowledge). But, so far as the debate between moral relativists and universalists is concerned, it begs the question. It takes the reduction of morality to personal preference as given, and proceeds to define which preferences are the moral ones. I therefore don’t expect it to change any minds in that debate.
(2) If we view moral evaluations as just a special kind of personal preference, what reason is there to think that moral evaluations have their own computational machinery underlying them? I’m sure that this is something that you’ve thought a lot about, so I’m curious to hear your thoughts on this. My first reaction is to think that, sure, we can distinguish moral evaluations from the other outputs of our preference-establishing machinery, but that doesn’t mean that special processes were running to produce the moral evaluations.
For example, consider a program that produces the natural numbers by starting with 1, and then producing each successive number by adding 1 to the previously-produced number. After this machine has produced some output, we can look over the tape and observe that some of the numbers produced have the special property of being prime. We might want to distinguish these numbers from the rest of the output for all sorts of good reasons. There is indeed a special, interesting feature of those numbers. But we should not infer that any special computational machinery produced those special numbers. The prime numbers might be special, but, in this case, the dynamics that produced them are the same as those that produced the non-special composite numbers.
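The analogy is easy to make precise. In the Python sketch below (my construction, not anything you’ve proposed), a single undifferentiated loop produces every number, and primality is something we pick out afterward by inspecting the tape, not the signature of separate machinery:

```python
# One dynamic produces everything: start at 1 and keep adding 1.
tape = []
n = 1
while len(tape) < 20:
    tape.append(n)
    n += 1

def is_prime(k):
    """Trial division; enough for this illustration."""
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

# The primes are distinguished only after the fact, by a filter on the output.
primes = [k for k in tape if is_prime(k)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19]
```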
Similarly, moral evaluations, as you define them, are distinguishable from other action-evaluations. But what reason is there to think that any special machinery underlies moral evaluations as opposed to other personal preferences?
(3) Since humans manage to differ in so many of their personal preferences, there seems little reason to think that they are nearly universally unanimous with regard to their moral evaluations. That is, I don’t see how the distinguishing feature of moral evaluations (a particular kind of invariance) would make them less likely to differ from person to person, or from moment to moment within the same person. So, I don’t quite understand your strong reluctance to attribute different moral evaluations to different people.