If person A is rude towards person B, I don’t think, “person A is being a bad person”; I think something like, “person A is frustrated with person B and believes person B is misbehaving, and believes that rudeness is justified in this situation”.
You assume that when someone appears to be acting in anger, they’re actually acting in the way they’ve decided was best after weighing the facts?
Well, no. In the particular case I had in mind, person A was being rude, and so I figured person A was frustrated with person B and believed person B was misbehaving. I asked person A if he thought rudeness was justified in this situation, and he said yes.
Did he ask himself that question before reacting to person B’s behavior?
I doubt that he did, so good point.
What’s the difference between someone who commonly believes that rudeness is appropriate, and a rude person?
If you model X as a “rude person”, then you expect him to be rude with a high[er than average] probability in all cases, period.
However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common ‘rude’ situation is nuanced and that rudeness is not appropriate there; or (b) if he can be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.
In essence, it’s simpler and faster to evaluate expected reactions for people you model as just complex systems; you can usually do that right away. But if you model goal-oriented behavior, “walk a mile in his shoes”, and try to understand the intent behind every [non]action and its causes, then it tends to be tricky, but it allows you more depth in both the accuracy of your expectations and your ability to affect the behavior.
However, if you do it poorly, or simply lack the data necessary to properly understand that person’s reasons and motivations, then you’ll tend to get gross misunderstandings.
One has a particular belief, while the other follows a particular pattern of behavior? Not sure I see what you’re getting at.
That’s not what they said. They said that they believe that rudeness is justified in the situation. That belief could change (or could not) upon further reflection. Hence the concept of regret.
Not thinking about a question isn’t a belief, or else rocks have beliefs.
There’s a difference between the slow, methodical, relatively inefficient (in terms of effort required for a decision) mode of thought and the instant thoughts we all have (which we use for almost everything we do, and which are pretty good at many things but not all things).
Although we’ve gone from “beliefs” to “thought(s)”, it looks like overall we’re disputing definitions.