After I read the question “do you model people as agents versus complex systems?”, I started to wonder which of the two options is more “sophisticated”. Is an agent more sophisticated than a complex system, or vice versa? I don’t really have an opinion here.
Something I like to tell myself is that people are animals first and foremost. Whenever anyone does anything I find strange, unusual, or irrational, my instinct is to speculate about the cause of the behavior. If person A is rude towards person B, I don’t think, “person A is being a bad person”; I think something like, “person A is frustrated with person B and believes person B is misbehaving, and believes that rudeness is justified in this situation”.
So I guess that when I ask myself what’s the difference between an agent and a complex system, my first thought is to say that an agent is not composed of parts, whereas a complex system is. Under this definition, it’s a fact that humans are complex systems, not agents.
My older brother is the type of person who is “conventionally agenty”: he has goals, and he attempts to achieve them by applying problem-solving skills, usually with great success. A certain other person I know (call him P), on the other hand, is the opposite: he has goals, but he makes no apparent effort to achieve them, and so he doesn’t. (He’s definitely getting better, though—just not very quickly.) The difference between my brother and P seems to come down to one single difference in attitude. My brother’s attitude toward goals is to think about how they could be achieved, and to try to figure out how to achieve the achievable ones. P’s attitude toward goals is to go ahead and achieve them if he already knows how to, and just ignore them otherwise.
“Agent-like” definitely isn’t the way I’d describe my brother; I’d call him proactive.
For what it’s worth, my attitude toward goals is less my-brother-like than seems ideal. Given a goal other than overcoming procrastination, I think about it and try to determine whether it’s a good use of my time or not. If it is, I add it to my to-do list; otherwise, I forget about it. The goal of overcoming procrastination is something I think about carefully many times every day. This goal seems to be extremely difficult for me to achieve, which makes me wonder why everyone else seems to have it so easy.
An agent is a specific kind of complex system.
I thought we were pretending that the two are mutually exclusive. Agents have magical free will, complex systems don’t.
Okay. I just don’t like words defined as “X, except for Y” (specifically: complex systems, except for those who have magical free will). If we tried to avoid this “excepting”, the question would be rephrased as:
Is a complex system with magical free will more sophisticated than a complex system without magical free will, or vice versa?
But I am not sure how exactly that helps, so… uhm, end of nitpicking.
I model my computer as a complex system; when it has undesired behavior, I give it a known set of conditions and it behaves consistently and often predictably.
I don’t expect it to engage in goal-oriented behavior.
There are people who I model in a similar manner: I know what they do in certain conditions, and I don’t ask what it is they are trying to accomplish. There are cases where I behave in a similar manner, performing sphexish behavior even while consciously aware of it. Noticing that I am doing that evokes cognitive dissonance, so I guess I don’t actually model myself that way, even when it would be accurate to do so.
Huh. I frequently notice myself behaving in a seemingly robotic fashion, doing stuff “automatically” with no real conscious input (e.g. when doing simple, routine tasks like folding laundry), but it doesn’t give me any feeling of cognitive dissonance.
What about when the behavior you are doing has counterproductive results?
What are you asking, exactly?
To try to answer your question: If I find myself behaving “automatically” in a counterproductive manner, that’s an uncomfortable situation to be in, and to me, it emphasizes the fact that I’m not a “pure goal-oriented agent”. I do feel a sort of cognitive dissonance in these cases, I think; I feel like the fact that I’m not behaving productively is “my fault” and that it would be easy for me to stop doing what I’m doing, while simultaneously feeling like it would be very difficult to stop.
Because I described a situation in which I felt a certain way, and you expressed that you felt a different way in a situation which had certain similarities. I felt that I could identify a significant difference between those situations and wanted to confirm that we probably have similar subjective experiences when confronted with similar enough circumstances.
Had I discovered a difference, it would be worth further discussion. I’m unsure if this similarity is worth further discussion. Feeling like it would be trivial to do something else, believing that I want to do something else, but not doing something else is a common enough failure mode for me to be worrisome.
nod
Tangential question: why did you use “failure mode” there instead of “problem”?
I haven’t codified the exact distinction that I make between those two concepts; in materials science, a ‘problem’ would be a pressure vessel at low temperature containing high pressure, and the failure mode of that problem would be brittle fracture.
In this case it might also have made sense to call it a class of problems; each instance is different enough that a general solution would be different in nature from a series of specific solutions which, combined, covered every individual case.
You assume that when someone appears to be acting in anger, they’re actually acting in the way they’ve decided was best after weighing the facts?
Well, no. In the particular case I had in mind, person A was being rude, and so I figured person A was frustrated with person B and believed person B was misbehaving. I asked person A if he thought rudeness was justified in this situation, and he said yes.
Did he ask himself that question before reacting to person B’s behavior?
I doubt that he did, so good point.
What’s the difference between someone who commonly believes that rudeness is appropriate, and a rude person?
If you model X as a “rude person”, then you expect him to be rude with a high[er than average] probability across cases, period.
However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common ‘rude’ situation is nuanced and that rudeness is not appropriate there; or (b) if he could be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.
In essence, it’s simpler and faster to evaluate expected reactions for people you model as just complex systems; you can usually do that right away. But if you model goal-oriented behavior, “walk a mile in his shoes”, and try to understand the intent of every [non]action and the causes behind it, then it tends to be trickier but gives you more depth, in both the accuracy of your expectations and your ability to affect the behavior.
However, if you do it poorly, or simply lack the data necessary to properly understand that person’s reasons and motivations, then you’ll tend to get gross misunderstandings.
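A minimal sketch of the contrast I’m describing, in Python; the probabilities, the situation fields, and the two predicate functions are purely illustrative assumptions, not anything specified in this thread:

```python
def predict_rude_as_pattern(base_rate=0.7):
    """Model X as a 'rude person': one higher-than-average base rate,
    regardless of the details of the situation."""
    return base_rate

def predict_rude_as_agent(situation, believes_justified, serves_goals):
    """Model X as an agent: the prediction depends on whether X perceives
    this particular situation as one where rudeness is appropriate, and on
    whether X thinks rudeness serves his goals here."""
    if not believes_justified(situation):
        return 0.1  # X sees this instance as nuanced; rudeness unlikely
    if not serves_goals(situation):
        return 0.2  # X has been convinced rudeness is counterproductive here
    return 0.8      # otherwise, expect rudeness

# The pattern model is cheap and immediate; the agent model needs extra
# inputs (X's beliefs and goals) but leaves room for intervention.
situation = {"kind": "perceived_misbehavior", "nuanced": False}
print(predict_rude_as_pattern())
print(predict_rude_as_agent(
    situation,
    believes_justified=lambda s: not s["nuanced"],
    serves_goals=lambda s: True,
))
```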
One has a particular belief, while the other follows a particular pattern of behavior? Not sure I see what you’re getting at.
That’s not what they said. They said that they believe that rudeness is justified in the situation. That belief could change (or could not) upon further reflection. Hence the concept of regret.
Not thinking about a question at all doesn’t constitute a belief; otherwise rocks would have beliefs.
There’s a difference between the slow, methodical, relatively inefficient (in terms of effort required for a decision) mode of thought, and the instant thoughts we all have (which we use for almost everything we do, and which are pretty good for many things but not all).
Although we’ve gone from “beliefs” to “thought(s)”, it looks like overall we’re disputing definitions.