And meanwhile a large chunk of the problem is “people have very different models and ontologies that output very different beliefs and plans”, so a lot of things that look like rationalization are just very different models.
“Just” very different models raises the question of why they (and you) prefer such different models.
It’s motivated cognition all the way down. Choice of model is subject to the same biases as the object-level untruths. In fact, motivated use of a less-than-useful model is probably the MOST common case I encounter of the kinds of self- and other-deception we’re discussing.
I think some differences of models are due to motivated cognition, but I think many or most come down more to the different problems you’re solving.
For example, I had many arguments with habryka about whether there should be norms around keeping the office clean that involved continuous effort on the part of individuals. His opinion was that you should just solve the problem with specialization and systemization. I think motivated cognition may have played a role in each of our models, but there were legitimate reasons to prefer one over the other, and those reasons were entangled with each other in messy ways that required several days of conversation to untangle. (See “Hufflepuff Leadership and Fighting Entropy” for some details about the models, and hopefully an upcoming blogpost about resolving disagreements when you don’t share ontologies.)