Rawls’s Veil of Ignorance Doesn’t Make Any Sense

John Rawls suggests the thought experiment of an “original position” (OP) in which people choose the political system of a society from behind a “veil of ignorance” that strips them of knowledge of certain facts about themselves (their social position, natural talents, conception of the good, and so on). Rawls’s veil of ignorance doesn’t justify the kind of society he supports.

It seems to fail at each step, taken individually:

  1. At best, the support of people in the OP provides necessary but probably insufficient conditions for justice, unless Rawls can refute every other proposed condition of justice (rights, desert, and so on).

  2. And really, the conditions of the OP seem actively contrary to good decision-making. For example, in the OP you don’t know your own particular conception of the good, yet you’re treated as essentially self-interested; it’s hard to see why deliberation under those constraints should carry special authority.

  3. There’s no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.

  4. There’s no reason to think, specifically, that people would have the literally infinite risk aversion required to support the maximin principle (see the sketch after this list).

  5. Even granting all of the above, the best social arrangement could easily be optimized for the long term (with future people in mind) in a way that makes it very different (e.g. harsher for the poor alive today) from the kind of egalitarian society I understand Rawls to support.
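
To make point 4 concrete, here is a minimal sketch with made-up payoff and probability numbers (purely illustrative, not anything from Rawls). Maximin ignores probabilities entirely and ranks societies by their worst-off position, so it prefers a society that is slightly better at the bottom no matter how unlikely the bottom is or how much better everyone else fares, which is the sense in which it encodes effectively infinite risk aversion.

```python
# Illustrative toy numbers only: each "society" is a list of
# (probability of occupying a position, welfare of that position) pairs.
society_a = [(0.99, 100), (0.01, 10)]  # great for almost everyone, worst-off at 10
society_b = [(0.99, 12), (0.01, 11)]   # mediocre for everyone, worst-off at 11

def expected_welfare(society):
    # Ordinary expected-utility ranking: weight each position by its probability.
    return sum(p * w for p, w in society)

def maximin_welfare(society):
    # Maximin ranking: look only at the worst position, ignoring probabilities.
    return min(w for _, w in society)

for name, society in [("A", society_a), ("B", society_b)]:
    print(name, "expected:", expected_welfare(society),
          "worst-case:", maximin_welfare(society))

# Expected welfare: A = 99.1 vs. B = 11.99, yet maximin prefers B (11 > 10).
# Only a chooser who cares about the worst case to the exclusion of everything
# else would pick B here, which is the force of objection 4 above.
```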

More concretely:

  • (A) I imagine that if Aristotle were placed under a thin veil of ignorance, he would just say, “Well, if I turn out to have been born a slave, then I will deserve it.” It’s unfair, and not very convincing, to claim that people would agree with a long list of your specific ideas if not for their personal circumstances.

  • (B) If you won the lottery and I demanded that you sell me your ticket for $100 on the grounds that you would, hypothetically, have agreed to this yesterday (before you knew it was a winner), you wouldn’t have to comply; the hypothetical agreement doesn’t actually bind you in reality this way.

Another framing is that his argument involves a number of stipulations that seem designed specifically to head off common counterarguments (utility monsters, utilitarianism, etc.) but are otherwise arbitrary.