Thank you for the follow-up post. The distinction between empathy and the attribution of “moral agency” is a very helpful update and greatly clarifies the crux of your original argument.
That said, your conflict doesn’t seem to stem from simply attributing moral agency, but from an implicit assumption about the *utility function* that all moral agents *should* be optimizing.
If we model humans as self-optimizing agents, we can distinguish between:
1. Terminal Goals: The final, intrinsic objective. I would argue that for most humans, this approximates some form of “well-being” or “satisfaction” (happiness, to put it simply). It is the utility the system is trying to maximize.
2. Instrumental Goals: The subgoals an agent pursues because it believes they will help it achieve its terminal goal. This is where everything else comes in: strength (`Tsuyoku naritai`), competence, knowledge, wealth, social connection, validation, security, etc.
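To make the distinction concrete, here is a minimal toy sketch (all names and numbers are my own illustration, not anything from your post): two agents share the same terminal goal, “well-being,” but weight the instrumental subgoals that feed into it differently.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    # How strongly each instrumental goal contributes to this agent's
    # terminal utility ("well-being"). Weights are illustrative.
    weights: dict

    def terminal_utility(self, progress: dict) -> float:
        # Terminal utility as a weighted sum of instrumental progress.
        return sum(self.weights.get(g, 0.0) * v for g, v in progress.items())

# Same terminal goal, different instrumental weightings:
climber = Agent("climber", {"competence": 0.8, "connection": 0.2})
socialite = Agent("socialite", {"competence": 0.2, "connection": 0.8})

# The same life situation scores differently under each utility function;
# neither agent is "failing" by the other's standard.
progress = {"competence": 0.5, "connection": 0.9}
print(climber.terminal_utility(progress))    # 0.58
print(socialite.terminal_utility(progress))  # 0.82
```

The point of the toy model: judging the socialite by the climber’s weights is a category error, not a detection of irrationality.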
Your original post and this follow-up suggest that you have elevated a very specific instrumental goal—competence and personal growth—to the status of a universal terminal goal. The “disgust” or “disappointment” you feel when activating the “moral agency module” seems to be a reaction to agents who are not optimizing for *your* chosen instrumental goal.
The conclusion isn’t that you should “lower your standards” for yourself. It may instead be that treating someone as a “moral agent” doesn’t just mean demanding responsibility from them, but also recognizing that their responsibility is to their own, unique utility function, not to yours.