I think this misses the distinction I’d consider relevant for moral agency.
I can put a marble on a ramp and it will roll down. But I have to set up the ramp and place the marble; it makes no sense for me to e.g. sign a contract with a marble and expect it to make itself roll down a ramp. The marble has no agency.
Likewise, I can stick a nonagentic human in a social environment where the default thing everyone does is take certain courses and graduate in four years, and the human will probably do that. I can condition a child with rewards and punishments to behave a certain way, and the child will probably do so. As with the marble, both of these are cases where the environment is set up in such a way that the desired outcome is the default outcome, without the candidate “agent” having to do any particular search or optimization to make the outcome happen.
What takes agency—moral agency—is making non-default things happen. (At least, that’s my current best articulation.) Mathematically, I’d frame this in terms of counterfactuals: credit assignment mostly makes sense in the context of comparison to counterfactual outcomes. Moral agency (insofar as it makes sense at all in a physically-reductive universe) is all about thinking of a thing as being capable of counterfactual impact.
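One minimal way to cash that out (just a sketch, with $V$ standing in for whatever value function you’re scoring outcomes with):

$$\text{credit}(A) \;=\; V(\text{outcome given } A\text{'s choice}) \;-\; V(\text{outcome under the default})$$

On this framing the marble, and the student carried along by the default four-year track, both get roughly zero credit, because the two terms nearly cancel; credit only accrues when the actual outcome diverges from the counterfactual default.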
Ok, I see your point and acknowledge that that is a good and valuable distinction. And, the reality is that most people are just responding to their environment most of the time, and you would class them as non-agents during those times, morally speaking.
But, unlike if people were literally marbles, you can sign a contract with most people and expect them to follow through on most of their commitments, even though in practice there’s nothing preventing them from breaching the contract in a way that harms you and helps them in the short term. So it’s not that they have no agency. And in small daily choices which are unconstrained or less constrained by the environment, where the default option is less clear, people do make choices that have counterfactual impact. Maybe not on a civilization-spanning scale (it would be a very chaotic world if reality were such that everyone correctly thought they could change the world in major ways and did so), but on the scale of their families, friend-groups, and communities? Sure, quite often. And those choices shape those groups.
So my opinion is that humans in general:
a) Aren’t very smart.
b) Mostly copy those around them, not trying to make major changes to how things are.
c) When they do try to make changes, the efforts tend to be copied from someone else rather than figured out on their own.
d) But are faced with small-scale moral choices on a daily basis, where their actions are not practically constrained, and whether they cooperate or defect will influence the environment for others and their future selves. It is in those contexts that they display moral agency, to the extent that it is present for them.
Very few people are doing things like thinking through the game theory or equilibrium effects of their actions, or looking at the big picture of the civilization we live in and asking “how is this good/bad, and what changes can we make to get it to a better place?” in a way that’s better than guessing or copying their friends; the end result is a civilization that thrashes around mostly blindly. If you’re disgusted with anyone who is not actively trying to remake the world in at least some respect, you’re going to be disgusted with almost everyone. But back to moral agency not being binary: the small-scale stuff matters, and even on your understanding of “moral agency”, standard adult humans are more morally agentic than cats are. I would also say it’s good for people who are unable to accurately predict the long-term consequences of their actions to just copy what seems to have worked in the past and respond to incentives, playing the role of a marble unless they’re really sure that their deviation from expected behaviour is good on net. And there are very few who are good enough predictors that they can look at their situations, choose to go uphill instead of down, and pick good hills to die on. Most of them will have grown up in families not composed of such people, and will need to have it pointed out to them that they have more agency than they realize, and should use it.
As an example: It is not at all difficult to talk to your elected representative. In my experience, they frankly like it when an engaged citizen reaches out to them. This is a thing anyone can do. When I suggest to someone that this is a thing that might help solve a problem they have (for example, let’s say their interaction with a government agency has gone poorly and there’s clearly a broken process), it is often clear that this is not something they have even considered as being inside their possibility-space. This doesn’t make these people the equivalent of human marbles by their nature. A simple “hey, you can just do things to make the world different, such as this thing for example” is often enough for them to generalize from. Sometimes the idea takes a few examples/repetitions to take root, though.
Now that I’m clearer on what you mean by moral agency, I’m not sure why you would ever expect it to be widespread among the population in the first place, such that you’d have to suspend the belief that the person you’re interacting with is a moral agent. It’s just straightforwardly true that almost nobody is trying to achieve a really non-default outcome. Any society composed mostly of people trying to change it “for the better” according to their own understanding of better, which involves achieving non-default outcomes rather than just going along with the system they were born into, would have collapsed and been invaded by a society that could coordinate better. At our current intelligence levels, anyway. A society composed of very smart people (relative to the current baseline) could probably come to explicit, explained, consciously chosen agreement from each individual on a lot of things and use that as a basis for coordination, while leaving people free to explore the possibility-space of available social changes and propose new social agreements based on what they find; but the society we’ve actually got cannot. So we’ve got to use conformity as a coordination mechanism instead.
Taking this back to empathy for a second: It is usually correct (has better effects) for most people not to swim against the social current. Yes, our society is an evolved system with many problems that would not exist if it were (correctly) intelligently designed instead, but that doesn’t mean most people can just start trying to make changes without breaking the system and making things much worse. Those who do the default thing shouldn’t be the subject of disgust, even if they happen to be among the rare people who wouldn’t break things by mucking about with them. If understanding that someone just went with the flow provokes disgust in you, I think it’s reasonable for you to ask whether, in that person’s case, they really ought to have done otherwise, and also whether it’s reasonable to expect them to have known that, given that the society we live in doesn’t teach or encourage in its members the kind of moral agency you respect (for obvious reasons of social stability).