On your account, who in a slaveholding society is the ideal rational agent?
The question is misleading, because humans have a very complicated set of goals which include a measure of egalitarianism. But the complexity of our goals is not a necessary component of our intelligence about fulfilling them, as far as we can tell. We could be just as clever and sophisticated about reaching much simpler goals.
let’s assume they are both meta-ethical anti-realists.
Don’t you have to be a moral realist to compare utilities across different agents?
her slaves have no mechanism to satisfy their stronger preference for freedom.
This is not the mechanism which I’ve been saying is necessary. The necessary mechanism is one which will connect a preference to the planning algorithms of a particular agent. For humans, that mechanism is natural selection, including kin selection; that’s what gave us the various ways in which we care about the preferences of others. For a designed-from-scratch agent like a paperclip maximizer, there is—by stipulation—no such mechanism.
Khafra, one doesn’t need to be a moral realist to give impartial weight to interests / preference strengths. Ideal rational agent Jill need no more be a moral realist in taking into consideration the stronger but introspectively inaccessible preferences of her slaves than she need be a moral realist in taking into account the stronger but introspectively inaccessible preference of her namesake and distant successor Pensioner Jill not to be destitute in old age when weighing whether to raid her savings account. Ideal rationalist Jill does not mistake an epistemological limitation on her part for an ontological truth. Of course, in practice flesh-and-blood Jill may sometimes be akratic. But this, I think, is a separate issue.