Do you mean consistent in the sense of choosing by a fixed criterion, like “no torturing people”, or in the sense of choosing by a fixed criterion that is not exposed to certain losses (in terms of the agent’s preferences, given an adversary with identical knowledge), like “behavior positively correlated with that of an agent who knows and shares your preferences, is able to conditionalize on evidence, and decides to maximize its updateless expectation”?
If the latter, as I understood your comment upon first reading, that seems to be contradicted by the claims of Eli’s circular altruism post, though he provides no citations. Also, the post says nothing explicitly about whether people who call themselves utilitarians are better in practice at shutting up and multiplying, though I don’t see how having no verbal beliefs such as “you can’t put a price on life” would make one more likely to act as though human lives are incomparably valuable.