I have some sympathy with the Nate side. It feels a bit like the Scott position is doing separation of concerns wrong. If your beliefs and your actions disagree, I think it better to revise one or the other, rather than coming up with principles about how it’s fine to say one thing and do another.
What do you mean by “better” here?
For humans (or any other kinds of agents) that live in the physical world as opposed to idealized mathematical universes, the process of explicitly revising beliefs (or the equivalent action-generators in the latter case) imposes costs in terms of the time and energy necessary to make the corrections. Since we are limited beings that often go awry because of biases, misconceptions, etc., we would need to revise everything constantly and consequently spend a ton of time just ensuring that our (conscious, S2-endorsed) beliefs match our actions.
But if you try to function on the basis of these meta-principles that say “in one case, think about it this way; in another case, think about it this other way (which is actually deeply incompatible with the first one), and so on,” you only need to pay the cost once: at the moment you find and commit to the meta-principles. Afterwards, you no longer need to worry about ensuring that everything is coherent and that the different mindsets you use are in alignment with one another; you just plop whatever situation you find yourself confronted with into the meta-principle machine and it spits out which mindset you should select.
So I can agree that revising either your beliefs or your actions so that the two agree generates an important benefit, but the more important question is whether that benefit outweighs the associated cost I just mentioned, given the fundamental and structural imperfections of the human mind. I suspect Scott thinks it does not: he would probably say it would be axiologically good if you could do so (the world-state in which your beliefs and actions are coherent is “better” than the one in which they are not, all else equal), but because all else is not equal in the reality we live our lives in, it would not be the best option for virtually all humans to choose.
(Upon reflection, it could be that what I am saying here is totally beside the point of your initial dialogue with Anthony, and I apologize if that’s the case)