I mean ‘rational’ in the ordinary, indefinable sense, whereby calling a decision ‘irrational’ expresses a distinctive kind of criticism—similar to that expressed by the words ‘crazy’, ‘foolish’, ‘unwise’, etc. (By contrast, you can just say “maximizes expected utility” if you really mean nothing more than maximizes expected utility—but note that that’s a merely descriptive concept, not a normative one.)
If you don’t possess this concept—if you never have thoughts about what’s rational, over and above just what maximizes expected utility—then I can’t help you.
I don’t think we can make progress with such imprecise thinking. Eliezer has a nice post about that.
When we’re trying to reduce intuitions, there’s no avoiding starting from informal ideas. A separate question is whether it’s proper to stop there, but Richard doesn’t exactly suggest that.
A question more salient to me is: what good are suggestions to redefine this intuitive idea of “rational”, if it’s supposed to be the source material? This question also explains why treating, say, “consciousness” in a more precise sense is methodologically a step in the wrong direction: when words are the data you work with, you should be careful to assign new words to the new ideas used for analyzing them.
In this old comment, he does seem to suggest stopping there.
I’m not sure I understand your second paragraph. Are you suggesting that if we come up with a new theory to explain some aspect of consciousness, we should use a word other than “consciousness” in that theory, to avoid potentially losing some of our intuitions about consciousness?
Yes.