I do not suspend my view of cats as moral agents when considering their actions, and generally try to treat them as peers in that regard. I have found that treating cats as tools is an unreliable move. I do my best to communicate my intentions to cats, to express structured disapproval when they do something I don’t like and try to explain why, etc. I once had an experience where a cat was biting me too hard, so I stood in front of him, bit myself too hard, and cringed; then I bit myself very, very gently and sighed happily. Then I offered to let him bite me a few times, turning away from him to show my disapproval when he bit too hard and slow-blinking when he bit nicely. I gave him treats at the end. Reinforcement learning, but I tried to make it easily detectable to a cat brain why the reinforcement learning was happening, in the expectation that he could then use that information to decide what to do. I also never repeated this, and for a long time he didn’t try to love-bite me, even though he seems to really like love-biting.
I think there’s a way to look at these things which is something vaguely akin to importing both “empathy” and a form of the see-them-as-an-agent thing, something like “helping them to be a good person by both of our lights,” which can take the thing that’s currently rendering as disgust for you and direct it in a way that can be transmitted to them such that they’ll be able to act on it. Something like: encode it in a way that is better lubricated by virtue of your not treating them as a tool, and in fact having sympathy for the fact that facing the full brunt of what you want them to be might hurt at first, or some such thing, as well as including in the encoding of the request-to-change that you’re not claiming to be the “true agent” here, that you’re a peer, not an authority. (Well, unless you want to claim to be an authority, in which case, like, alright, whatever.)
In other words, or maybe just in very similar words again because I don’t feel like I’ve said this clearly yet, I think there’s a region between “treat as tool” and “treat as a peer who should already have chosen to do better”, something like “present clearly that you’d like to treat them as a tool because you’re disappointed in their effectiveness and request that they try to take on some of the effort of agency-towards-morally-good-outcomes that you’re having to do through them”.
If you were to do this to me, for example, just being disgusted, e.g., “I’m disappointed in your progress. You’ve been avoiding the hard work of doing the exercises that would teach you this math, and you’re not at my level yet,” that would likely make me averse to interacting with you. You’d be telling me things I already know, things part of my brain yells at me about regularly, and the problem is not that I don’t think this is bad, but rather that I oscillate between going too hard at learning and having a feeling of “wow, doing the actual work is hard, can we budget the time spent on it?” But if you were to say, “I’m disappointed in your progress, and to get the best results from you as a contributor, I’d like to improve it as much as possible. What would it take to do that?” then you’re asking me to narrate the bottlenecks I know about that are interfering with my ability to go hard.
Maybe I’m a bad test case, though: I already have a lot of motivation to go hard, and getting it to happen is where the problem is. If you’re interested in saying this to people whose apparent reflective endorsement of not caring to save the world, despite knowing the arguments for it, is what your disapproval is pointed at, then I still think it would be a reasoning error to treat them as hopeless, but you’d need to do more things focused on achieving willingness-to-try.