Could evolution have selected for moral realism?

I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.

Let’s say that all your thoughts either seem factual or fictional. Memories seem factual, stories seem fictional. Dreams seem factual, daydreams seem fictional (though they might seem factual if you’re a compulsive fantasizer). Although the things that seem factual match up reasonably well with the things that actually are factual, this isn’t the case axiomatically. If deviating from this pattern is adaptive, evolution will select for it. This could result in situations like: the rule that pieces move diagonally in checkers seems fictional, while the rule that you can’t kill people seems factual, even though they’re both just conventions. (Yes, the rule that you can’t kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it. But I don’t think it’s different in kind from the rule that you must move diagonally in checkers.)

I’m not an expert, but this seems like a real possibility. Humans are fairly conformist social animals, and it’s plausible that evolution would have selected for taking the rules seriously, even if that meant using the fact-processing system for things that were really just conventions.

Another spin on this: We could see philosophy as the discipline of measuring, collating, and making internally consistent our intuitions on various philosophical issues. Katja Grace has suggested that the measurement of philosophical intuitions may be corrupted by philosophy enthusiasts’ desire to signal. Could evolutionary pressure be an additional source of corruption? Taking this idea even further, what do our intuitions amount to at all aside from a composite of evolved and encultured notions? If we’re talking about a question of fact, you can overcome evolution/​enculturation by improving your model of the world, performing experiments, and so on. (I was encultured to believe in God by my parents. God didn’t drop proverbial bowling balls from the sky when I prayed for them, so I eventually noticed the contradiction in my model and deconverted. It wasn’t trivial; there was a high degree of enculturation to overcome.) But if the question has no basis in fact, like the question of whether morals are “real”, then genes and enculturation will wholly determine your answer to it. Right?

Yes, you can think about your moral intuitions, weigh them against each other, and make them internally consistent. But this is kind of like trying to add resolution back into an extremely pixelated photo: just because it’s no longer obviously “wrong” doesn’t guarantee that it’s “right”. And there’s the possibility of path-dependence: the parts of the photo you try to improve first could have a very significant effect on the final product. Even if you think you’re willing to discard your initial philosophical conclusions, there’s still the possibility of accidentally destroying your initial intuitional data or enculturing yourself with your early results.

To avoid this possibility of path-dependence, you could carefully document your initial intuitions, pursue lots of different paths to making them consistent in parallel, and maybe even choose a “best match”. But it’s not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.

Currently, I disagree with what seems to be the prevailing view on Less Wrong that achieving a Really Good Consistent Match for our morality is Really Darn Important. I’m not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It’s randomness all the way through either way, right? The main reason “bad” consistent matches are considered so “bad”, I suspect, is that they engender cognitive dissonance (e.g. maybe my current ethics says I should hack Osama Bin Laden to death in his sleep with a knife if I get the chance, but this is an extremely bad match for my evolved/​encultured intuitions, so I would experience a ton of cognitive dissonance actually doing it). But cognitive dissonance seems to me like just another aversive experience to factor into my utility calculations.

Now that you’ve read this, maybe your intuition has changed and you’re a moral anti-realist. But in what sense has your intuition “improved” or become more accurate?

I really have zero expertise on any of this, so if you have relevant links please share them. But then again, who’s to say expertise matters here? In what sense could philosophers have “better” philosophical intuition? The only way I can think of for theirs to be “better” is if they’ve seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).