I think I have a thing very similar to John’s here, and for me at least, it’s mostly orthogonal to “how much you care about this person’s well-being”. Or, like, as relevant for that as whether that person has a likeable character trait.
The main impact is on the ability to coordinate with/trust/relax around that person. If they’re well-modeled as an agent, you can model them as a game-theoretic agent: as someone who is going to pay attention to the relevant parts of any given situation and continually make choices within it that are consistent with the pursuit of some goal. They may make mistakes, but those would be well-modeled as the foibles of a bounded agent.
On the other hand, people who can’t be modeled as agents (in a given context) can’t be expected to behave in this way. They may make decisions based on irrelevant parts of the situation, act in inconsistent ways, and can’t be trusted not to go careening off in some random direction in response to random stimuli. Sort of like, ahem, an LLM.
Note that I think it isn’t a binary “There Are Two Types of People” thing: the same person can act as an agent in some contexts and fail at this in others. That said, there is a spectrum of “in how many contexts does this person act as an agent?”, with meaningful clustering around “not very many”, “increasingly many”, etc.
By itself, this is mostly unrelated to how much I care about a given person for the purposes of e.g. wanting their life to be better. (Like, I don’t think non-agent-approximating people have less qualia or something.)
This is relevant for deciding who I’d want to be friends/allies/colleagues with, for the straightforward reason of “people who are better modeled as coherent agents are more reliable allies”, and also because it’s a character trait I like.