Brief Thoughts on Justifications for Paternalism


Lisa wrote an article on coercion, paternalism and irreversible decisions. I have some thoughts.

The basic groundwork goes something like this:

  • coercing people is bad

  • coercing people to prevent them from infringing on the rights of others may be justified in some cases

  • coercing people to stop them doing something they freely want to do, and which has no impact on others, is at face value unjustified.

When can paternalistic coercion be justified? A few obvious justifications I can think of are:

  • When a person is mistaken about the consequential outcomes of an act. (e.g: John thinks that taking act X will give him $1,000. Actually it will lose him $1,000)

  • When a person’s actions would lower their own utility

  • When a person’s choice is not a reflection of their “true” preferences. (e.g: an addict who wants drugs but doesn’t want to want drugs. A person in a temporary depression who wants to kill themselves)

  • When a person’s choice has overly large effects on their future self, and those effects are locked in. (e.g: selling yourself into slavery)

The “You’re Wrong” justification for paternalism

This justification is basically that someone is mistaken about the practical consequences of a choice they’re going to make. Hence coercing them to do what would best suit their preferences is okay.

On one hand, I’m not too convinced by this justification. If someone is mistaken about the consequences of a given action, you can always tell them why you think they’re mistaken. If they still disagree, it’s not clear what justification you have for coercion. Imagine you live in a society where the following is true:

  • There are shops which sell “ill-advised consumer goods”

  • the goods in these shops are unregulated and hence often dangerous or harmful to the user

  • the government specifically advises you not to buy things from these shops, and entering them requires you to sign a waiver stating you understand that doing so is a bad idea for most people

  • If you don’t trust yourself to make these kinds of decisions in the moment, you can also choose at the beginning of every year to be banned from ill-advised goods for the entire year.

In this kind of society, people are very clearly

  • given a choice of whether they want to defer to the state/experts or not (the annual opt-in)

  • made aware of why society thinks a certain act is bad

I guess the general argument here is that if you think someone is mistaken, you can tell them why you think they’re mistaken. If they still disagree after hearing your reasoning, I’m really not clear why it’s okay to assume that you’re right, they’re wrong, and the use of coercion is okay.

The utilitarian justification for paternalism

Not much to say here. I think some (many?) people are dumb and make bad decisions. Forcing them to make better decisions can raise their utility, whether measured by their own subjective preferences or by some kind of objective list variant (e.g: hedonic utilitarianism). Whether in practice this forcing is actually effective depends on the society/government/political equilibria, but in principle many such paternalistic interventions exist.

If you’re a hardcore utilitarian, this is enough to convince you. For me, I value a plurality of goods, with freedom from coercion and utility being two of them. I guess in some cases the tradeoff between utility and freedom is sufficiently lopsided that I would think paternalism is okay. That being said:

  • I think my intuitions generally put a lot more weight on preference consequentialism as opposed to any kind of hedonic utilitarianism. Hence if someone chooses not to wear a seatbelt, I would tend to be okay with that decision.

  • I worry that as a social technology, paternalism usually increases in scope and severity over time as the policy ratchet goes primarily in one direction.

I guess the real crux here is between hedonic vs preference utilitarianism. If you’re a hedonic utilitarian, paternalism is straightforwardly morally fine. If you’re a preference utilitarian, as I think most of us are, then it isn’t.

The false vs real preferences justification

Heroin addicts want heroin. They don’t want to want heroin. Hence it’s okay to deny them heroin and force them into rehab because that matches their true underlying desires.

I find this argument fairly persuasive. I guess the only difficulty I have is in how we distinguish between real and false preferences; maybe let’s call them higher and lower order preferences from now on. A bad way to do this is to look at what most people do and want, assume this is normal, and then judge deviations from it as increasingly likely to be false preferences. A better way is to see what people say they want when the immediate need for a thing is satisfied. There’s still some difficulty with different versions of a person having different desires, and with how far we value the preferences of short-lived personas of a person versus their more normal persona, but hey, that’s a tricky problem generally.

The future self justification

Some decisions you can make will unduly constrain your future self. E.g: living like a king for 10 years in return for being a slave for the next 50 and being abused constantly. This is wrong because you remove the agency of your future self, harming them. Consider a similar case where you live like a king for 10 years and then put on a mind-wipe headband, after which a different person is overwritten onto your brain and enslaved. That would also be bad, for the same reason.

I’m not sure here. I think this argument raises a lot of genuinely thorny/​difficult problems that are hard to resolve in isolation. Some thoughts:

  • You’re not actually harming the future person, because absent your actions they would not exist in the first place. (Basically a non-identity objection)

  • All actions you make constrain your future self.

  • To what extent do your present choices constrain a future person vs determine which of many different future people will exist?

  • The whole future person thing seems contingent on a view of identity that is not based on continuity of conscious experience.

So yeah, this is pretty messy and I think pretty much entirely depends on what your view is on identity + whether non-instantiated agents’ moral preferences matter. My view is that identity is basically mindspace similarity (aka how similar your mind is to the other agent’s) and that non-instantiated agents’ preferences matter an equal amount. I’d hence say your future self, provided it’s sufficiently similar to you, counts as pretty much the same agent, and you can make decisions for the both of you. (Although you should probably have a discount rate of 0, or close to it, for your future self’s utility.)

All in all

I think paternalistic coercion is sometimes justified. Probably when someone has explicitly said they’d rather not have a preference/do an act, and the version of them that says that is reasonably similar to the version of them that does the act. In theory I’m not okay with “you’re wrong” type paternalism, but in practice I think it’s often okay, because the political reality is such that the choice is between “paternalism to stop dumb people killing themselves or teens trying super heroin” and no paternalism, not magical libertarian dangerous-goods shops.
