I’ve gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right about how in some situations people exhibit just an extraordinary amount of altruism and sympathy: they’ll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger, or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11). But on the other hand, there are other situations where people do the opposite and show a striking lack of concern for others.
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they’ll act extremely selfishly.
The way that I like to think about this is in terms of “moral weight”. How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with “moral weights” to assign to different types of people. But I think that people don’t really assign a moral weight and then act consistently. In some situations they’ll act as if their answer to my previous question is 100,000, and in other situations they’ll act like it’s 0.00001.
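To spell out the arithmetic behind that (my notation, nothing formal from the thread): if your answer to “how many utilons would John Doe need to gain for you to give up one of your own?” is $N$, then the moral weight you’re implicitly assigning him is $w = 1/N$, and you accept a trade exactly when the weighted gain covers your loss:

$$w \cdot \Delta u_{\text{John}} \;\ge\; \Delta u_{\text{you}}.$$

So acting as if the answer is 100,000 means $w = 10^{-5}$ (near-total selfishness), while acting as if it’s 0.00001 means $w = 10^{5}$ (extreme altruism); the same person can reveal wildly different values of $w$ in different contexts.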
My model of utility (and the standard one, as far as I can tell) doesn’t work that way. No rational agent ever gives up a utilon; that is the thing they are maximizing. I think of it as: how many utilons do you get from thinking about John Doe’s increased satisfaction (not his actual utilons, since you have no access to those, though you could call them “inferred utilons”), compared to the direct utilons you would otherwise get?
Those moral weights are “just” terms in your utility function.
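To make “terms in your utility function” concrete (again my notation, a minimal sketch rather than anything standard):

$$U_{\text{you}} = u_{\text{direct}} + \sum_i w_i \, \hat{u}_i,$$

where $\hat{u}_i$ is your inferred satisfaction for person $i$ (not their actual utilons, which you can’t access) and $w_i$ is the moral weight you place on them. On this formulation, helping John Doe never means “giving up” utilons; the $w_{\text{John}} \, \hat{u}_{\text{John}}$ term is simply part of what you’re maximizing.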
And, since humans aren’t actually rational, and don’t have consistent utility functions, actions that imply moral weights are highly variable and contextual.
Yeah, I echo this.
Ah yeah, that makes sense. I guess utility isn’t really the right term to use here.