I feel sad about this too. But this is common in impure scientific disciplines; e.g., medical studies often refer to value-laden concepts like proper functioning. The ideal would be to gradually naturalize all of this so we can talk to each other about observables without making any assumptions about the interpretation of open-textured terminology. What I want to show here is primarily an existence proof that this discussion can be fully naturalized, though I haven’t yet managed to do so.
I think this is a very good question about arguments. And I do think we will have to make value judgments about what kinds of moral deliberation processes we think are “good”; otherwise we are merely making predictions about behaviour rather than proposing an approach to alignment. An end result I would like is one where the moral realist and the antirealist can neutrally discuss empirical hypotheses about what kinds of arguments would lead to what kinds of updating, and discuss this separately from the question of which kinds of updating we like. This would allow for a more nuanced conversation where, instead of saying “I’m a realist, therefore keep the future open” or “I’m an antirealist, therefore lock it down”, we can say “Let’s set aside the capital letters and talk about what really motivates people in moral cognition. I think, empirically, this is how people reason morally and what people care about; personally, I want to make intervention X in the way people reason morally and would invite you to agree with me.”