I’ve read all the comments that have been made to this point before making one myself—I was going to make one very much like @Simon Pepin Lehalleur’s. With the resulting conversation-thread as context, I’m still confused by something. When you say having fun with your college group-members involved “suspension of disbelief”, what are you suspending disbelief in, exactly? What is your belief-state when disbelief is suspended, vs. your belief-state when disbelief is not suspended?
Another thing: You mention that imagining yourself as other people, with all their relevant psychological and emotional factors included, puts you in the position of “treating them like cats”. Yes, that was a bit misleading in the original post, as you later clarify that you do empathize with cats, in pretty much the way I’d say it may be helpful to empathize with humans. I’d like to disambiguate a few different ways other people can be “like cats”.
1. Cats think very differently than you.
2. Cats are (according to many) of less moral worth than you.
3. Cats are nonhuman.
You also say in your post “my so-called fellow humans”, which is part of what makes “treating people like cats” seem more objectionable than just “treating people as beings who think very differently than I do, generally speaking”. Specifically, “Like cats” coupled with “my so-called fellow humans” gestures indirectly towards treating humans like nonhuman animals, which pattern matches to “what people say before genociding the outgroup”. I’m not suggesting you were thinking or intending anything along those lines, just pointing out why this combination may be distracting from the point you were trying to make, for some.
Does putting yourself in the shoes of a cat involve “suspension of disbelief”? I don’t think it should: you just have an accurate understanding of some aspects of cat-psychology, and an awareness that there are many things about cat-psychology and what it’s like to be them that you don’t understand. And similarly with other humans—you may be very different from some of them in some ways, but there shouldn’t be a disbelief you have to suspend in order to interact with them in productive and potentially enjoyable ways? There’s something I’m confused about here.
A key thing, though, is that while 1 may be true of many humans, and arguably from your perspective 2 may as well, what makes someone human is not a certain level of intellectual capacity or drive. So the “so-called” in “my so-called fellow humans” is inaccurate. Even very lazy humans are of the same species as you, and more a fellow to you than a cat is. Maybe what you meant is “my so-called ‘fellow’ humans”, where you feel little fellowship with people very unlike yourself in terms of drive to self-improve, but recognize shared humanity with them. If so, that seems better than the alternative interpretation, and selecting friends who are similar to you in your deeply-held values (such as values whose absence in someone would cause you a disgust reaction) is a thing many people do.
On reflection, I think the thing-in-which-I-suspend-disbelief is moral agency? That’s not how I natively frame things in my head, but I think it’s basically equivalent. Like, I stop thinking of the human as a type-of-thing which it makes sense to assign responsibility to (and therefore stop thinking of them as a type-of-thing one would rely upon as a fellow agent in a group, or give any real voting weight in a group). I can still relate to them as a fun creature, or as a tool, or as a feature of the environment.
Would it be reasonable to map to this taking the intentional stance toward people (as opposed to the design stance or the physical stance)?
Approximately yes. I don’t think that mapping is lossless, but I don’t have a good example off the top of my head of what it loses.
Ok, well, one thing is, there are double and triple negatives in your post, which are tripping me up. If you suspend belief in their moral agency in order to have fun with them, that makes sense. If breaking your suspension of disbelief in their moral agency was an issue that prevented you from having fun with them, I’d be (and I was) confused, and start to wonder if I’d mixed up a negation somewhere.
X: I believe most humans have moral agency. If they don’t meet a fairly high standard of behaviour, I’ll be disappointed.
!X: I disbelieve most humans have moral agency. I can have fun with them even so.
!!X=X: I suspend disbelief that most humans have moral agency. A no-fun zone.
!!!X=!X: My suspension of disbelief was broken by the conversation with the professor. Fun again?
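To keep the negation parity straight, the four statements above can be modeled as plain booleans. This is a toy restatement of the list, not anything from the original post:

```python
# Toy model of the negation parity in the four statements above.
# X stands for "I believe most humans have moral agency."
X = True

not_X = not X                    # !X: disbelief in moral agency
not_not_X = not not_X            # !!X: suspending that disbelief
not_not_not_X = not not_not_X    # !!!X: the suspension breaks

assert not_not_X == X            # double negation cancels: !!X = X
assert not_not_not_X == not_X    # triple negation reduces: !!!X = !X
```

The confusion in the thread is exactly that each added “suspension” or “breaking” flips the sign once, so it is easy to lose track of the parity.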
Anyway, “I can have fun with others when I don’t treat them as moral agents” is clear and makes sense from what I imagine your perspective might be, let’s move on.
Agency isn’t a binary property which humans have and cats and environmental features don’t. A more nuanced perspective allows for gradations of moral agency. For example, we don’t expect young children to be full moral agents, or to be reliable as fellow agents in a group doing something important, but we do expect them, once they have reached a certain age, not to bite their siblings and to use good manners and otherwise follow most foundational social expectations on good days when they’ve had a nap recently. Our expectations for cats are lower, but we still expect to be able to cooperate with them more than we can with mindless tools or features of the environment such as rocks and trees. So: If you genuinely put your college teammates and anyone who doesn’t meet your high standards for yourself in the same bucket as cats, I think you would be treating them as if they have less agency than they in fact do. They may not be on your level, but they aren’t tools or environmental features or cats or children.
It’s entirely appropriate to conclude you can’t rely on someone to work with you on something important if the evidence shows this is true. But you could probably rely on most people to do some helpful things, and often the “this person isn’t reliable” tag is only applicable to a given person in some situations/contexts—it is very rare for a person to be completely useless for everything. The better you understand a given person, the better you will be able to evaluate their reliability in different situations and for different tasks. Often people are reliable about the things they care about, and not about the things they don’t.
As for voting weight, that should likely be a complicated mix of factors depending on the situation. I think it is a good idea to give people voting weight in a decision if the decision impacts them, for example. Even to the extent of allowing them to vote in favour of what seems clearly like something that will harm them, as long as it doesn’t harm me or have other large externalities. If they need to touch a hot stove to learn it’s hot because they’re not willing to listen when I say “the stove is hot”, as long as it’s not going to kill them, OK then, carry on and learn your lesson. (Side note: I’m not always right; sometimes someone’s preferences make sense in a way I don’t appreciate at first, which is another reason to give weight to their vote even when I disagree.) I also up-weight someone’s vote (to the extent I have the power to make that decision) if they have some related expertise (I could be conscientious and they could be mostly very flaky, and yet they know more about something I know very little about—I can’t know everything about everything, much as I might try).
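One way to picture that “complicated mix of factors” is a simple weighted vote. The factor names and numbers below are hypothetical choices for illustration, not a scheme anyone in the thread proposed:

```python
# Hypothetical sketch: combine impact and expertise into a voting weight.

def voting_weight(impacted_by_decision: bool, expertise: float,
                  base: float = 1.0) -> float:
    """Return a voting weight from a base, an impact bonus, and an
    expertise multiplier. `expertise` runs from 0.0 (none) to 1.0
    (strong relevant expertise); the exact numbers are arbitrary."""
    weight = base
    if impacted_by_decision:
        weight += 1.0              # up-weight people the decision affects
    weight *= 1.0 + expertise      # up-weight relevant expertise
    return weight

# A flaky-but-expert person still carries real weight...
print(voting_weight(impacted_by_decision=False, expertise=0.5))  # 1.5
# ...and someone directly affected by the decision gets a large say too.
print(voting_weight(impacted_by_decision=True, expertise=0.0))   # 2.0
```

The additive impact bonus versus multiplicative expertise factor is itself an arbitrary design choice; the point is only that several independent considerations can be combined into one weight.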
I think this misses the distinction I’d consider relevant for moral agency.
I can put a marble on a ramp and it will roll down. But I have to set up the ramp and place the marble; it makes no sense for me to e.g. sign a contract with a marble and expect it to make itself roll down a ramp. The marble has no agency.
Likewise, I can stick a nonagentic human in a social environment where the default thing everyone does is take certain courses and graduate in four years, and the human will probably do that. I can condition a child with rewards and punishments to behave a certain way, and the child will probably do so. Like the marble, both of these are cases where the environment is set up in such a way that the desired outcome is the default outcome, without the candidate “agent” having to do any particular search or optimization to make the outcome happen.
What takes agency—moral agency—is making non-default things happen. (At least, that’s my current best articulation.) Mathematically, I’d frame this in terms of counterfactuals: credit assignment mostly makes sense in the context of comparison to counterfactual outcomes. Moral agency (insofar as it makes sense at all in a physically-reductive universe) is all about thinking of a thing as being capable of counterfactual impact.
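The counterfactual framing of credit assignment can be sketched in a couple of lines. The `credit` function and the outcome numbers are purely illustrative assumptions, not anything from the comment:

```python
# Toy counterfactual credit assignment: an entity gets credit only for
# the difference between the actual outcome and the default outcome the
# environment would have produced anyway.

def credit(outcome_with_entity: float, default_outcome: float) -> float:
    """Credit = actual outcome minus the counterfactual default."""
    return outcome_with_entity - default_outcome

# The marble on the ramp: rolling down *is* the default, so no credit.
print(credit(outcome_with_entity=1.0, default_outcome=1.0))  # 0.0

# Making a non-default thing happen earns full counterfactual credit.
print(credit(outcome_with_entity=1.0, default_outcome=0.0))  # 1.0
```

On this sketch, the marble and the nonagentic student both score zero: the outcome they “produce” is the one the environment was already going to produce.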
Ok, I see your point and acknowledge that that is a good and valuable distinction. And, the reality is that most people are just responding to their environment most of the time, and you would class them as non-agents during those times, morally speaking.
But, unlike if people were literally marbles, you can sign a contract with most people and expect them to follow through on most of their commitments, even where in practice there’s nothing preventing them from breaching contract in a way that harms you and helps them in the short term. So they don’t have no agency. And in small daily choices which are unconstrained or less-constrained by the environment, where the default option is less clear, people do make choices that have counterfactual impact. Maybe not on a civilization-spanning scale (it would be a very chaotic world if reality were such that everyone correctly thought they could change the world in major ways and did so), but on the scale of their families, friend-groups and communities? Sure, quite often. And those choices shape those groups.
So my opinion is that humans in general:
a) Aren’t very smart.
b) Mostly copy those around them, not trying to make major changes to how things are.
c) When they do try to make changes, the efforts tend to be copied from someone else rather than figured out on their own.
d) But are faced with small-scale moral choices on a daily basis, where their actions are not practically constrained, and whether they cooperate or defect will influence the environment for others and their future selves. It is in those contexts where they display moral agency, to the extent that it is present for them.
Very few people are doing things like thinking through the game theory or equilibria effects of their actions, or looking at the big picture of the civilization we live in and going “how is this good/bad, and what changes can we make to get it to a better place?” in a way that’s better than guessing or copying their friends, with the end result of a civilization that thrashes around mostly blindly. If you’re disgusted with anyone who is not actively trying to remake the world in at least some respect, you’re going to be disgusted with almost everyone. But back to moral agency not being binary: the small-scale stuff matters, and standard adult humans are more morally agentic, even under your understanding of “moral agency”, than cats are.

I would also say, it’s good for people who are unable to accurately predict the long-term consequences of their actions to just copy what seems to have worked in the past and respond to incentives—to just play the role of a marble unless they’re really sure that their deviation from expected behaviour is good on net. And there are very few who are good enough predictors that they can look at their situations, choose to go uphill instead of down, and pick good hills to die on. Most of them will have grown up in families not composed of such people, and will need to have it pointed out to them that they have, and should use, more agency.
As an example: It is not at all difficult to talk to your elected representative. They frankly like it (in my experience) when an engaged citizen reaches out to them. This is a thing anyone can do. When I suggest to someone that this is a thing that might help solve a problem they have (for example, let’s say their interaction with a government agency has gone poorly and there’s clearly a broken process), it is often clear that this is not something they have even considered as being inside their possibility-space. This doesn’t make these people the equivalent of human marbles by their nature. A simple “hey, you can just do things to make the world different, such as this thing for example” is often enough for them to generalize from. Sometimes the idea takes a few examples/repetitions to take root, though.
Now that I’m clearer on what you mean by moral agency, I’m not sure why you would ever expect that to be widespread among the population, and have to suspend the belief that the person you’re interacting with is a moral agent. It’s just straightforwardly true that almost nobody is trying to achieve a really non-default outcome. Any society composed mostly of people trying to change it “for the better” according to their understanding of better, which involves achieving non-default outcomes, rather than just going along with the system they were born into, would have collapsed and gotten invaded by a society that could coordinate better. At our current intelligence levels, anyway. A society composed of very smart people (relative to the current baseline) could probably come to explicit, explained, consciously chosen agreement from each individual on a lot of things and use that as a basis for coordination while leaving people free to explore the possibility-space of available social changes and propose new social agreements based on what they find, but the society we’ve actually got, cannot. So we’ve got to use conformity as a coordination mechanism instead.
Taking this back to empathy for a second: It is usually correct (has better effects) for most people not to swim against the social current. Yes, our society is an evolved system with many problems that would not exist if it were (correctly) intelligently designed instead, but that doesn’t mean most people can just start trying to make changes without breaking the system and making things much worse. Those who do the default thing shouldn’t be the object of disgust, even if they’re one of the rare people who wouldn’t break things by mucking about with them. If understanding that someone just went with the flow provokes disgust in you, I think it’s reasonable for you to ask whether, in that person’s case, they really ought to have done otherwise, and also whether it’s reasonable for them to have known that, given that the society we live in doesn’t teach or encourage in its members the kind of moral agency you respect (for obvious reasons of social stability).