I think we need to distinguish between what a rational agent should do, and what a non-rational human should do to become more rational. Nesov’s reply to you also concerns the former, I think, but I’m more interested in the latter here.
Unlike a rational agent, we don’t have well-defined preferences, and the preferences that we think we have can be changed by arguments. What to do about this situation? Should we stop thinking up or listening to arguments, and just fill in the fuzzy parts of our preferences with randomness or indifference, in order to emulate a rational agent in the most direct manner possible? That doesn’t make much sense to me.
I’m not sure what we should do exactly, but whatever it is, it seems like arguments must make up a large part of it.
That arguments modify preference means that you (denotationally) arrive at different preferences depending on which arguments you encounter. From the perspective of a specific given preference (or the “true” neutral preference, unbiased by particular arguments), this means you fail to obtain an optimal rational decision algorithm, and thus fail to achieve a high-preference strategy. But at the same time, “absence of action” is also an action, so not exploring arguments may well be the worse choice: you won’t move toward a clearer understanding of your own preference, even if the preference you end up understanding is somewhat biased compared to the unknown original one.
Thus, there is a tradeoff:
Irrational perception of arguments leads to modification of preference, which is bad for the original preference, but
Considering moral arguments leads to a clearer understanding of some preference close to the original one, which allows one to make more rational decisions, which is good for the original preference.
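The tradeoff above can be put in a toy numerical sketch (all quantities and functional forms here are hypothetical, chosen only to illustrate the shape of the argument, not drawn from the discussion): suppose each argument considered both increases "clarity" (the fraction of your preference you understand well enough to act on) with diminishing returns, and compounds a small "drift" away from the unknown original preference.

```python
# Toy model of the tradeoff: arguments raise clarity but drift preference.
# All parameters are hypothetical illustrations, not claims from the thread.

def value_after_arguments(n, clarity_gain=0.3, drift=0.05):
    """Value achieved with respect to the original preference after
    considering n arguments."""
    clarity = 1 - (1 - clarity_gain) ** n   # diminishing returns on understanding
    alignment = (1 - drift) ** n            # compounding bias away from the original
    return clarity * alignment

for n in [0, 1, 5, 10, 30]:
    print(n, round(value_after_arguments(n), 3))
```

Under these made-up numbers, considering no arguments yields nothing (zero clarity), a moderate number of arguments does best, and an unbounded number eventually drifts the preference so far that value declines again, which matches the tradeoff as stated.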
I think we shouldn’t try to emulate rational agents at all, in the sense that we shouldn’t pretend to have rationality-style preferences and supergoals; as a matter of fact we don’t have them.
Up to here we seem to agree, we just use different terminology. I just don’t want to conflate rational preferences with human preferences, because the two systems behave very differently.
Just as an example, in signalling theories of behaviour, you may consciously believe that your preferences are very different from what your behaviour is actually optimizing for when no one is looking. A rational agent wouldn’t normally have separate conscious/unconscious minds unless only the conscious part was subject to outside inspection. In this example, it makes sense to update signalling-preferences sometimes, because they’re not your actual acting-preferences.
But if you consciously intend to act out your (conscious) preferences, and also intend to keep changing them in not-always-foreseeable ways, then that isn’t rationality, and when there could be confusion due to context (such as on LW most of the time) I’d prefer not to use the term “preferences” about humans, or to make clear what is meant.
Please see my reply to Nesov above, too.
FWIW, my preferences have not been changed by arguments in the last 20 years. So I don’t think your “we” includes me.