Original parent says, “The world is neither fair nor unfair”, meaning, “The world is neither deliberately fair nor deliberately unfair”, and my comment was meant to be interpreted as replying, “Of course the world is unfair—if it’s not fair, it must be unfair—and it doesn’t matter that it’s accidental rather than deliberate.” Also to counteract the deep wisdom aura that “The world is neither fair nor unfair” gets from counterintuitively violating the (F \/ ~F) axiom schema.
It matters hugely that it’s not deliberately unfair. People get themselves into really awful psychological holes—in particular the lasting and highly destructive stain of bitterness—by noting that the world is not fair, and going on to adopt a mindset that it is deliberately unfair.
It matters hugely that it’s not deliberately unfair.
It matters a lot (to those who are vulnerable to the particular kind of irrational bitterness in question) that the universe is not deliberately unfair.
I took Eliezer’s “it doesn’t matter” to be the more specific claim “it does not matter to the question of whether the universe is unfair whether the unfairness present is deliberate or not-deliberate”.
Err, the “question of whether the universe is unfair” sounds a lot to me like the “question of whether the tree makes a sound”. What query are we trying to hug here? I think what I call “unfairness”—something due to some agent—is something we can at least sometimes usefully respond to by being pissed off, because the agent doesn’t want us to be pissed off. But the Universe absolutely cannot care whether we’re pissed off, and so putting it under the same category as e.g. discrimination engenders the wrong response.
What makes being pissed off at an agent who treats me unfairly useful is not that the agent doesn’t want me to be pissed off. In fact, I can sometimes be usefully pissed off at an unfair agent that is entirely indifferent to, or even unaware of, my existence. In much the same way, I can sometimes be usefully pissed off at a non-agent that behaves in ways that I would classify as “unfair” if an agent behaved that way.
Admittedly, asking when it’s useful to classify something as “unfair” is different from asking what things are in fact unfair.
On the other hand, in practice the first of those seems most relevant to actual human behavior. The second seems to pretty quickly lead to either the answer “everything” (all processes result in output distributions that are not evenly distributed across some metric) or “nothing” (all processes are equally constrained and specified by physical law) and neither of those answers seems terribly relevant to what anyone means by the question.
I’m confused because it was Eliezer who taught me this.
EDIT: I’m now resisting the temptation to tell Eliezer to “read the sequences”.