I think the definition of rational emotions as those based on correct thinking about reality is a bad definition—it makes both the act of modelling the world correctly and communicating your models to others slightly harder instead of slightly easier.
Imagine there is a faucet in front of me. Let’s say it is running hot water because I turned the right knob, incorrectly believing that knob controlled the cold water. It would be very strange to say “The faucet’s running of hot water is irrational”; no one would have a clue what you meant.
You can come up with a definition of irrational faucet behavior; that’s not the point. The point is that such a definition doesn’t do much to help you understand or communicate faucet behavior. If you deeply internalize this lens on faucets, then whenever a faucet behaves undesirably, you are much more likely to automatically ask yourself “Do the beliefs that led to this particular faucet behavior stem from an incorrect way of viewing the world?” instead of the much more direct and appropriate question “What can I do to elicit the behavior I want from this faucet?”
When you have an undesirable emotion, you might choose to move away from the contexts that cause it, try to will yourself not to feel it, or do any number of other things. Changing the beliefs that led to the emotion is a valid move, but it is not the only one available to you. I am afraid this sort of definition makes those who internalize it more likely to begin emotional problem-solving by deliberating at length on questions like “Are the beliefs that led to this emotion rational?” before moving to (usually) more practical questions like “Can I stop seeing the thing that makes me feel this emotion?”