I wanted to answer as OP, but I don’t have a particularly informed view on this. And my view is biased by having spent a lot of time around moral philosophers who are hugely, hugely disproportionately likely to fall into the “moral reasoning is real” camp — and by having evolved from conservative Evangelical ethics to classical utilitarianism!
One fairly robust observation in psychology is that people do feel pressure to engage in moral consistency reasoning. One example of this comes from the literature on the Meat Eating Paradox. Let me reproduce an overview from pp. 573-574 here:
In a series of five studies, Brock Bastian and colleagues have demonstrated a link between seeing animals as food, on one hand, and seeing animals as having diminished mental lives and moral value, on the other. We describe three of them here.
In a first study, participants were asked to rate the degree to which each of a diverse group of thirty-two animals possessed ten mental capacities, and then were asked how likely they would be to eat the animal and how wrong they believed eating that animal to be. Perceived edibility was negatively associated with mind possession (r = –.42, p < .001), which was in turn associated with the perceived wrongness of eating the animal (r = .80, p < .001).
In a second study, participants were asked to eat dried beef or dried nuts and then to judge a cow’s cognitive abilities and deservingness of moral treatment on two seven-point scales. Participants in the beef condition (M = 5.57) viewed the cow as significantly less deserving of moral concern than those in the nuts (control) condition (M = 6.08).
In a third study, participants were introduced to Papua New Guinea’s tree kangaroo and variously told that tree kangaroos have a steady population, that they are killed by storms, that they are killed for food, or that they are foraged for food. Bastian and colleagues found that categorizing the tree kangaroo as food, and no other feature of these cases, led participants to attribute less capacity for suffering to it and to show it less moral concern.
Additionally, a sequence of five studies from Jonas Kunst and Sigrid Hohle demonstrates that a range of framings reduce empathy for the animal in question: presenting meat as processed, presenting a whole roasted pig beheaded, watching a meat advertisement without a live animal rather than one with a live animal, describing meat production as “harvesting” rather than “killing” or “slaughtering,” and describing meat as “beef/pork” rather than “cow/pig.” In several cases these framings also significantly increased willingness to eat meat rather than an alternative vegetarian dish.
Psychologists involved in these and several other studies believe that these phenomena occur because people recognize an incongruity between eating animals and seeing them as beings with mental life and moral status, so they are motivated to resolve this cognitive dissonance by lowering their estimation of animal sentience and moral status.
I think studies like these at least imply that people are driven to resolve local tensions in their moral intuitions (e.g. tensions between views about moral status, consciousness, right action, and beliefs about one’s own virtues). How often would resolving these tensions lead to radical departures in moral views? I’m not sure! It seems to depend mostly on how much psychological pressure the tension creates and how interconnected people’s webs of moral intuitions are. In people with highly interconnected moral webs and a lot of psychological pressure towards consistency, you’d expect much bigger changes.
I feel sad about this too. But this is common in impure scientific disciplines; e.g. medical studies often refer to value-laden concepts like proper functioning. The ideal would be to gradually naturalize all of this so we can talk to each other about observables without making any assumptions about the interpretation of open-textured terminology. What I want to offer here is primarily an existence proof that we can fully naturalize this discussion, though I haven’t yet managed to do so.
I think this is a very good question about arguments. And I do think we will have to make value judgments about which kinds of moral deliberation processes we think are “good”; otherwise we are merely making predictions about behaviour rather than proposing an approach to alignment. An end result I would like is one where the moral realist and the antirealist can neutrally discuss empirical hypotheses about which kinds of arguments would lead to which kinds of updating, and discuss this separately from the question of which kinds of updating we like. This would allow for a more nuanced conversation, where instead of saying “I’m a realist, therefore keep the future open” or “I’m an antirealist, therefore lock it down,” we can say “Let’s set aside capital letters and talk about what really motivates people in moral cognition. I think, empirically, this is how people reason morally and this is what people care about; personally, I want to make X intervention in the way people reason morally and would invite you to agree with me.”