You are correct in that conclusion. I think it is impossible for one to act on their own (true) moral preferences yet morally wrongly.
There are two remaining points, for me. The first is that it’s difficult to figure out one’s own exact moral preferences. The second is that it becomes extremely important to never forget to qualify “morally wrongly” with whose evaluation it is:
Frank can never act on Frank’s true moral preferences and yet act Frank’s-Evaluation-Of morally wrongly.
Bob can never act on Bob’s true moral preferences and yet act Bob’s-Evaluation-Of morally wrongly.
However, since the laws of the universe do not physically require that Frank’s “Evaluation of Morally Wrong” function == Bob’s “Evaluation of Morally Wrong” function, it follows that:
Frank CAN act on Frank’s true moral preferences and yet act Bob’s-Evaluation-Of morally wrongly.
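To make the asymmetry concrete, here’s a minimal Python sketch. The action names and verdicts are entirely invented stand-ins for whatever the real evaluation functions compute:

```python
# A toy model: two evaluation-of-morally-wrong functions that need not agree.
# The actions and verdicts here are made up purely for illustration.

def frank_evaluates_wrong(action: str) -> bool:
    """True if the action is wrong by Frank!morality (hypothetical)."""
    return action in {"steal", "lie"}

def bob_evaluates_wrong(action: str) -> bool:
    """True if the action is wrong by Bob!morality (hypothetical)."""
    return action in {"steal", "lie", "eat_meat"}

action = "eat_meat"
# Frank acting on Frank's true preferences can never be
# Frank's-Evaluation-Of wrong...
print(frank_evaluates_wrong(action))  # False
# ...but nothing in physics forces the two functions to coincide:
print(bob_evaluates_wrong(action))    # True: Bob's-Evaluation-Of wrong
```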
So, to attempt to resolve the whole brain-racking nightmare that ensues, it becomes important to see whether Bob and Frank have parts of their evaluations of morality in common. It also becomes important to notice that it’s highly likely that a fraction of Frank’s evaluation of morality depends on the results of Bob’s evaluation of morality, and vice versa.
Thus, we can get cases where Frank’s moral preferences depend, at least in part, on Bob’s moral preferences. If Frank’s true preferences include not wanting to act completely against Bob’s preferences, then when Frank acts on Frank’s preferences, Frank is usually also acting partially according to Bob’s.
It is counterintuitive, I’ll grant that. I find it much less counterintuitive than quantum physics, though, and as the latter exemplifies, it’s not uncommon for human brains to find reality unintuitive. I don’t mean this association connotatively; I just don’t have other examples at hand. My point is that human intuition is a poor tool for evaluating notions like these.
This is sensible enough as a theory of morality, but you still haven’t accounted for ethics, or the practice of engaging in interpersonal arguments about moral values. If Bob!morality is so clearly distinct from Frank!morality, why would Bob and Frank even want to engage in ethical reasoning and debate? Is it just a coincidence that we do, or is there some deeper explanation?
A possible explanation: we need to use ethical debate as a way of compromising and defusing potential conflicts. If Bob and Frank couldn’t debate their values, they would probably have to resort to violence and coercion, which most folks would see as morally bad.
Well, I agree with your second paragraph as a possible reason, which on its own I think would be enough to make most actual people do ethics.
And while Bob and Frank have clearly distinct moralities, since both of them were created by highly similar circumstances and processes (i.e. those that produce human brains), it seems very likely that they would agree on more than just one or two things.
As for other reasons to do ethics, I think the part of Frank!morality that takes Bob!morality as an input is usually rather important, at least in a context where Frank and Bob are both humans in the same tribe. This means Frank wants to know Bob!morality; otherwise Frank!morality has incomplete information to evaluate with, which makes Frank’s estimates of his own moral preferences more likely to fall short of what they would be if Frank knew Bob’s true moral preferences.
Frank wants to maximize the true Frank!morality, which has a component for Bob!morality, and probability theory says that incomplete information about Bob!morality leads to lower expected Frank!morality.
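As a toy illustration of that last claim, with all numbers invented and a hypothetical weight standing in for how much Frank!morality cares about Bob!morality:

```python
# A rough toy model: Frank!morality scores an action as Frank's own component
# plus a weighted term for how the action scores under Bob!morality. When
# Frank only has a noisy estimate of Bob's component, the action he picks as
# best can differ from the truly best one, lowering expected Frank!morality.

import random

random.seed(0)

actions = ["a", "b", "c"]
frank_own = {"a": 0.5, "b": 0.4, "c": 0.1}   # Frank's direct preferences
bob_score = {"a": -0.9, "b": 0.3, "c": 0.8}  # Bob!morality, unknown to Frank
weight = 0.5                                  # weight on the Bob!morality term

def true_frank_morality(act: str) -> float:
    return frank_own[act] + weight * bob_score[act]

def noisy_estimate(act: str, noise: float = 0.8) -> float:
    return frank_own[act] + weight * (bob_score[act] + random.gauss(0, noise))

best_informed = max(actions, key=true_frank_morality)
trials = 10_000
achieved = sum(
    true_frank_morality(max(actions, key=noisy_estimate)) for _ in range(trials)
)

print("with full information:", true_frank_morality(best_informed))
print("expected, under noise:", achieved / trials)  # lower on average
```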
If we add more players, it eventually gets to a point where you can’t keep track of every X!morality, so you build approximations and aggregations of the common patterns of morality and shared values among the members of the groups that Frank!morality evaluates over. Frank also wants to find the best possible game-theoretic “compromise”, since the more of their morality others get satisfied, the less likely they are to act against Frank!morality, whether through social commitment, ethical reasoning, game-theoretic reasoning, or any other form of cooperation.
Ethics basically looks to me like a natural Nash equilibrium, and meta-ethics like the best route towards Pareto optima. These are rough pattern-matching guesses, though; what numbers would I be crunching? I don’t have the actual algorithms of actual humans to work with, of course.
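For what that guess amounts to in miniature, here’s a sketch with entirely invented payoffs, where each player can either debate values ethically or resort to coercion:

```python
# A back-of-the-envelope illustration of the Nash-equilibrium reading.
# Rows are Frank's strategy, columns Bob's; each cell is (Frank's payoff,
# Bob's payoff), loosely "how well each X!morality is served". All made up.

payoffs = {
    ("debate", "debate"): (3, 3),   # ethical compromise serves both moralities
    ("debate", "coerce"): (0, 2),
    ("coerce", "debate"): (2, 0),
    ("coerce", "coerce"): (1, 1),   # mutual coercion: bad for both
}

def flip(strategy: str) -> str:
    return "coerce" if strategy == "debate" else "debate"

def is_nash(frank: str, bob: str) -> bool:
    """Neither player gains by unilaterally switching strategies."""
    f, b = payoffs[(frank, bob)]
    return (f >= payoffs[(flip(frank), bob)][0]
            and b >= payoffs[(frank, flip(bob))][1])

for profile in payoffs:
    print(profile, "<- Nash equilibrium" if is_nash(*profile) else "")
```

In this toy game both mutual profiles are equilibria, but mutual debate Pareto-dominates mutual coercion, which is roughly the sense in which I mean meta-ethics points towards the Pareto optimum.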