I may have the same bias, and may in fact believe it’s not a bias. People are highly mutable and contextual in how they perceive others, especially strangers, especially when those others are framed as an outgroup.
The fact that a LOT of people could be killers and torturers in the right (or very wrong) circumstances doesn’t seem surprising to me, and it doesn’t contradict my belief that many or perhaps most people genuinely care about others, given better framing and circumstances.
There is certainly a selection effect (likewise for modern work dealing with criminals) in that people with the ability to frame “otherness”, and some drive for individual power, tend to be drawn to it. There were certainly lots of Germans who did not participate in those crimes, and there are lots of current humans who prefer to ignore the question of what violence is used against various subgroups*.
But there’s also a large dollop of “humans aren’t automatically ANYTHING”. They’re far more complex and reactive than a simple view can encompass.
* OH! That’s a bias that’s insanely common. I said “violence against subgroups” rather than “violence by individuals against individuals, motivated by membership in and identification with different subgroups”.
Yeah, I echo this.
I’ve gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right about how in some situations people exhibit just an extraordinary amount of altruism and sympathy: they’ll, I dunno, jump into a lake at risk to their own life to save a drowning stranger, or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11). But on the other hand, there are other situations where people do the opposite.
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they’ll act extremely selfishly.
The way that I like to think about this is in terms of “moral weight”. How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with “moral weights” to assign to different types of people. But I think that people don’t really assign a moral weight and then act consistently. In some situations they’ll act as if their answer to my previous question is 100,000, and in other situations they’ll act like it’s 0.00001.
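To pin down what I mean by a moral weight, here’s a rough sketch in made-up notation (nothing standard, just one way to write the trade-off):

```latex
% Sketch only, my own notation: w = moral weight you assign to John Doe,
% c = utilons you give up, b = utilons John Doe gains from the trade.
% You take the trade exactly when the weighted benefit covers your cost:
\[
  \text{accept the trade} \iff w \cdot b \ge c
\]
% Agreeing to "give up 1 so John Doe gains 100,000" reveals w >= 1/100,000 = 0.00001;
% refusing it reveals w < 0.00001. Acting differently across situations means no single w fits.
```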
My model of utility (and the standard one, as far as I can tell) doesn’t work that way. No rational agent ever gives up a utilon; utilons are the very thing it is maximizing. I think of it as: how many utilons do you get from contemplating John Doe’s increased satisfaction (not his utilons, since you have no access to those, though you could call them “inferred utilons”), compared to the direct utilons you would otherwise get?
Those moral weights are “just” terms in your utility function.
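To spell out what “just terms in your utility function” means, here’s a hedged sketch with invented symbols (not a standard formalism):

```latex
% Sketch only, my notation: a = the action you choose, U_self(a) = your direct payoff,
% \hat{S}_J(a) = your *inferred* satisfaction for John Doe (you can't observe his utilons),
% w_J = the weight his wellbeing gets inside *your* utility function.
\[
  U_{\text{you}}(a) = U_{\text{self}}(a) + w_J \cdot \hat{S}_J(a)
\]
% Maximizing U_you never "gives up" utilons: an apparently self-sacrificial choice just means
% the w_J * \hat{S}_J term outweighs the drop in U_self for that action.
```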
And, since humans aren’t actually rational, and don’t have consistent utility functions, actions that imply moral weights are highly variable and contextual.
Ah yeah, that makes sense. I guess utility isn’t really the right term to use here.