I came to the metaethics sequence an ethical subjectivist and walked away an ethical naturalist. I’ve mostly stopped using the words “objective” and “subjective”, because I’ve talked with subjectivists with whom I have few to no substantive disagreements. But I think you and I do have a disagreement! How exciting.
I accept that there’s something like an ordering over universe configurations which is “ideal” in a sense I will expand on later, and that human desirability judgements are evidence about the structure of that ordering, and that arguments between humans (especially about the desirability of outcomes or the praiseworthiness of actions) are often an investigation into the structure of that ordering, much as an epistemic argument between agents (especially about true states of physical systems or the truth value of mathematical propositions) investigates the structure of a common reality which influences the agents’ beliefs.
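A toy model might make “judgments as evidence about an ordering” concrete. The sketch below is entirely my own illustration (the outcome names, the uniform prior, and the 0.8 reliability figure are arbitrary modeling assumptions, not anything from the sequence): an agent holds beliefs over candidate orderings of three outcomes and Bayes-updates on noisy pairwise desirability judgments, exactly as it would update on noisy sensor readings about a physical system.

```python
from itertools import permutations

# Hypothetical outcomes; the names are placeholders.
outcomes = ["A", "B", "C"]

# Hypothesis space: every strict ordering over the outcomes, uniform prior.
hypotheses = {order: 1.0 for order in permutations(outcomes)}

def prefers(order, a, b):
    """True if this ordering ranks a above b (earlier = better)."""
    return order.index(a) < order.index(b)

def update(hypotheses, a, b, reliability=0.8):
    """Bayes-update on one noisy judgment "a is better than b".

    A judgment agrees with the true ordering with probability
    `reliability`; 0.8 is an arbitrary assumption for illustration.
    """
    for order in hypotheses:
        likelihood = reliability if prefers(order, a, b) else 1 - reliability
        hypotheses[order] *= likelihood
    total = sum(hypotheses.values())
    for order in hypotheses:
        hypotheses[order] /= total

# Several people report judgments; each is evidence about the ordering.
for a, b in [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]:
    update(hypotheses, a, b)

for order, p in sorted(hypotheses.items(), key=lambda kv: -kv[1]):
    print(order, round(p, 3))
```

Two agents running this update on the same stream of judgments converge toward the same ordering, which is the sense in which a moral argument can be a joint investigation rather than a clash of tastes.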
A certain ordering over universe configurations also influences human preferences. It is not a causal influence, but a logical one. The connection between human minds and morality, the ideal ordering over universe configurations, is in the design of our brains. Our brains instantiate algorithms, especially emotional responses, that are logically correlated with the computation that compresses the ideal ordering over universe configurations.
Actually, our brains are logically correlated with the computations that compress multiple different orderings over universe configurations, which is part of the reason we have moral disagreements. We’re not sure which valuation—which configuration-ordering determines how our consequential behaviors change in response to different evidence—is our logical antecedent and which are merely correlates. This is also why constructed agents similar to humans, like the ones in Three Worlds Collide, could seem to have moral disagreements with humans. They, as roughly consequentialist agents, would also be logically influenced by an ordering over universe configurations, and because of similar evolutionary pressures might also have developed emotion-type algorithms. The computations compressing the two orderings (morality versus the “coherentized alien endorsement relation”) would be logically correlated: each would be partially compressible conditional on the simpler computations common to both. Through these commonalities the two species could have genuine moral disagreements. But there would be other aspects of the computations that compress their orderings, logical factors that would influence one species but not the other. Disputes over these would naively appear to be moral disagreements, but would actually be miscommunication: exchanging evidence about different referents while thinking they were the same.
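Here is one way to render that shared-versus-private structure as code. This is purely my own toy decomposition, with made-up component names and formulas: both species’ valuations call a common subcomputation, so they are logically (not causally) correlated through it, while each also has a component the other lacks.

```python
def shared_core(world):
    """A simpler computation common to both valuations; knowing its
    output partially compresses both orderings (toy stand-in: count
    of thriving minds in the world-description)."""
    return world.get("thriving_minds", 0)

def human_only(world):
    # Component with no analogue in the alien valuation (illustrative).
    return 2 * world.get("fairness", 0)

def alien_only(world):
    # Component with no analogue in the human valuation (illustrative).
    return 3 * world.get("untranslatable_quality", 0)

def human_value(world):
    return shared_core(world) + human_only(world)

def alien_value(world):
    return shared_core(world) + alien_only(world)

world = {"thriving_minds": 5, "fairness": 1, "untranslatable_quality": 1}
print(human_value(world), alien_value(world))  # correlated via shared_core
```

On this toy picture, an argument about `shared_core` is a genuine disagreement over a common referent; an exchange where one side is really tracking `human_only` and the other `alien_only` merely looks like a moral disagreement, which is the miscommunication described above.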
But there are other sources of valuation-disagreement than being separate optimization processes. Some sources of moral disagreement between humans:

- We have only partial information about morality, just as we can be partially ignorant about the state of reality. For example, we might be unsure what long-term effects on society would accompany the adoption of some practice like industrial manufacturing. Or even if someone in the pre-industrial era had perfect foresight, they might be unsure how their expressed preferences toward that society would change with more exposure to it.
- There are raw computational difficulties (unrelated to prediction of consequences) in figuring out which ordering best fits our morality-evidence, since the space of orderings over universe configurations is enormous (see the arithmetic sketch after this list).
- There are still more complicated issues of model selection, because human preferences aren’t fully self-endorsing.
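To put a number on “the space of orderings is large”: the count of strict total orderings over n distinguishable configurations is n!, which outruns any feasible enumeration almost immediately. A quick arithmetic check (the configuration counts are arbitrary examples):

```python
import math

# Number of strict total orderings over n distinguishable configurations.
for n in [3, 10, 52, 100]:
    print(n, math.factorial(n))

# Even 52 configurations admit about 8.07e67 orderings, vastly more than
# the number of seconds since the Big Bang (~4e17), so fitting an ordering
# to our morality-evidence can't proceed by brute-force search.
```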
Anyway, I’ve been using the word “ideal” a lot as though multiple people share a single ideal, and it’s past time I explained why. Humans share a ton of neural machinery and have spatially concentrated origins, both of which mean our roughly-consequentialist reasoning shares close logical and causal influences. We have so much in common that saying “Pah, nothing is right. It’s all just subjective preferences and we’re very different people and what’s right for you is different from what’s right for me” seems to me like irresponsible ignorance. We’ve got like friggin’ hundreds of identical functional regions in our brains. We can exploit that for fun and profit. We can use interpersonal communication and argumentation and living together and probably other things to figure out morality. I see no reason to be dismissive of others’ values that we don’t sympathize with simply because there’s no shiny morality-object that “objectively exists” and has a wire leading into all our brains or whatever. Blur those tiniest of differences and it’s a common ideal. And that commonality is important enough that “moral realism” is a badge worth carrying on my identity.