I’m slowly working through a bunch of philosophical criticisms of consequentialism and utilitarianism, kicked off by this book: https://www.routledge.com/Risk-Philosophical-Perspectives/Lewens/p/book/9780415422840 (which I don’t think is good enough to actually recommend)
One common thread is complaints about utilitarianism and consequentialism giving incorrect answers in specific cases. One is the topic of this question: When evaluating potential harms, how can we decide between a potential harm that’s the result of someone’s agency (their own beliefs, decisions, and actions) vs a potential harm from an outside source (e.g. imposed by the state)?
I’m open to gedanken experiments to illustrate this, but for now I’ll use something dumb and simple. You can save one of two people; saving them means giving them 100% protection from this specific potential harm, all else being equal.
Person A has entered into a risky position by their own actions, after deciding to do so based on their beliefs. They are currently at a 10% chance of death (with the remaining 90%, nothing happens).
Person B has been forced into a risky position by their state, which they were born into and have not been allowed to leave. They are currently at an X% chance of death (with the remaining (100 − X)%, nothing happens).
Assume that Persons A and B have approximately the same utility ahead of them (QALYs, etc.). The point of the question is to quantify a tradeoff ratio between agency and utility (in this case, mortality risk).
For what values of X would you choose to save Person B?
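To make the tradeoff concrete, here's a minimal sketch of one possible answer shape: a single agency-discount weight w on self-imposed risk. The weight w and the decision rule are my own illustrative assumptions, not anything from the book or an established theory; a pure utilitarian would set w = 1 and just compare raw risks.

```python
def save_person_b(x_percent: float, a_risk_percent: float = 10.0, w: float = 0.5) -> bool:
    """Decide whom to save under a hypothetical agency-discount model.

    w = 1.0 means agency is irrelevant (compare raw risks, the pure
    utilitarian answer); w < 1.0 means self-chosen risk counts for less
    when deciding whom to save. Both w and the linear discount are
    assumptions for illustration only.
    """
    # Save B iff B's imposed risk exceeds A's agency-discounted risk.
    return x_percent > w * a_risk_percent

# With w = 0.5, A's 10% self-chosen risk counts as 5%,
# so this rule saves B for any X > 5.
```

Under this toy rule the question reduces to eliciting w: the threshold X at which you become indifferent between A and B pins down w = X / 10.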
I’m interested in things like natural experiments that would show how current decision systems or philosophies answer this question. I am also interested in people’s personal takes.