I usually associate things like “being evil” more with something like “part of my payoff matrix has a negative coefficient on your payoff matrix”. I.e. actively wanting to hurt people and taking inherent interest in making them worse off. Selfishness feels pretty different from being evil emotionally, at least to me.
Judgement of evil follows the same pressures as evil itself. Selfishness feels different from sadism to you, at least in part because it’s easier to find cooperative paths with selfishness. And this question really does come down to “when should I cooperate vs defect”.
If your well-being has exactly zero value in my preference function, that literally means that I would kill you in a dark alley if I believed there was zero chance of being punished, because there is a chance you might have some money that I could take. I would call that “evil”, too.
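The coefficient framing here can be made concrete with a toy model (the numbers and the `action_utility` helper are mine, purely illustrative): an agent values an outcome as its own payoff plus a coefficient `c` times the other person's payoff. With zero chance of punishment, `c = 0` makes the mugging come out positive, while even a tiny positive `c` flips the sign.

```python
# Toy sketch of the "coefficient on your payoff" framing.
# All numbers are made up for illustration only.

def action_utility(own_payoff, other_payoff, c):
    """Utility an agent with other-regard coefficient c assigns to an action."""
    return own_payoff + c * other_payoff

# The dark-alley scenario, assuming zero chance of punishment:
# the attacker gains a little pocket change; the victim loses everything.
pocket_change = 5
victim_harm = -1_000_000  # stand-in for the victim's loss of life

for c, label in [(-1, "sadist"), (0, "purely selfish"), (0.01, "cares slightly")]:
    u = action_utility(pocket_change, victim_harm, c)
    print(f"{label} (c={c}): utility of attacking = {u}")
```

The sadist (`c = -1`) actively profits from the harm, the purely selfish agent (`c = 0`) sees only the pocket change and attacks anyway, and the agent who cares even slightly (`c = 0.01`) comes out strongly negative.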
You can’t hypothesize zeros and get anywhere. MANY MANY psychopaths exist, and very few of them find it more effective to murder people for spare change than to further their ends in other ways. They may not care about you, but your atoms are useful to them in their current configuration.
They may not care about you, but your atoms are useful to them in their current configuration.
There are ways of hurting people other than stabbing them, I just used a simple example.
I think there is confusion about what exactly “selfish” means, and I blame Ayn Rand for it. The heroes in her novels are given the label “selfish” because they do not care about opportunities to actively do something good for other people unless there is also some profit in it for them (which is what a person with zero value for others in their preference function would do), but at the same time they avoid actively harming other people in ways that could bring them some profit (which is not what a perfectly selfish person would do).
As a result, we get quite unrealistic characters who on one hand are described as rational profit maximizers who don’t care about others (except instrumentally), but on the other hand follow an independently reinvented deontological framework that seems designed by someone who actually cares about other people but is in deep denial about it (i.e. Ayn Rand).
A truly selfish person (someone who truly does not care about others) would hurt others in situations where doing so is profitable (including second-order effects). A truly selfish person would not arbitrarily invent a deontological code against hurting other people, because such a code is merely a rationalization invented by someone who already has an emotional reason not to hurt other people but wants to pretend it is instead a logical conclusion derived from first principles.
Interacting with a psychopath will likely get you hurt. It will likely not get you killed, because some other way of hurting you has a better risk:benefit profile. Perhaps the most profitable way is to scam you out of some money and use you to get introduced to your friends. Only once in a while will a situation arise when raping someone is sufficiently safe, or killing someone is extremely profitable, e.g. because that person stands in the way of a grand business deal.
I’m not sure what our disagreement actually is—I agree with your summary of Ayn Rand, I agree that there are lots of ways to hurt people without stabbing. I’m not sure you’re claiming this, but I think that failure to help is selfish too, though I’m not sure it’s comparable with active harm.
It may be that I’m reacting badly to the use of “truly selfish”—I fear a motte-and-bailey argument is coming, where we define it loosely, and then categorize actions inconsistently as “truly selfish” only in extremes, but then try to define policy to cover far more things.
I think we’re agreed that the world contains a range of motivated behaviors, from sadistic psychopaths (who have NEGATIVE nonzero terms for others’ happiness) to saints (whose utility functions weight others’ happiness very heavily over their own). I don’t know if we agree that “second-order effects” very often dominate the observed behaviors over most of this range. I hope we agree that almost everyone changes their behavior to some extent based on visible incentives.
I still disagree with your post that a coefficient of 0 for you in someone’s mind implies murder for pocket change. And I disagree with the implication that murder for pocket change is impossible even if the coefficient is above 0: circumstances matter more than the innate utility function.
To the OP’s point, it’s hard to know how to accomplish “make people less selfish”, but “make the environment more conducive to positive-sum choices so selfish people take cooperative actions” is quite feasible.
I still disagree with your post that a coefficient of 0 for you in someone’s mind implies murder for pocket change.
I believe this is exactly what it means, unless there is a chance of punishment, of being hurt by the victim’s self-defense, or of a better alternative interaction with the given person. Do you assume that there is always a more profitable interaction? (What if the target says “hey, I just realized that you are a psychopath, and I do not want to interact with you anymore”, and they mean it?)
Could you please list the pros and cons of deciding whether to murder a stranger who refuses to interact with you, if there is zero risk of being punished, from the perspective of a psychopath? As I see it, the “might get some pocket change” in the pro column is the only nonzero item in this model.
unless there is a chance of punishment or being hurt by victim’s self-defense or a chance of better alternative interaction with given person.
There always is that chance. That’s mostly our disagreement. Using real-world illustrations (murder) for motivational models (utility) really needs to acknowledge the uncertainty and variability, which the vast majority of the time “adds up to normal”. There really aren’t that many murders among strangers. And there are a fair number of people who don’t value others very highly.
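The point about there always being some chance of punishment can be sketched with a toy expected-utility calculation (the numbers and the `expected_utility` helper are mine, purely illustrative): for a purely selfish agent who puts zero weight on the victim, even a very small probability of a very bad outcome flips the decision.

```python
# Toy sketch: expected payoff to the attacker, ignoring the victim entirely.
# All numbers are made up for illustration only.

def expected_utility(gain, punishment_cost, p_punished):
    """Expected payoff of the attack for a purely selfish agent (c = 0)."""
    return (1 - p_punished) * gain + p_punished * punishment_cost

pocket_change = 5
prison_or_retaliation = -100_000  # cost to the attacker if it goes wrong

print(expected_utility(pocket_change, prison_or_retaliation, 0.0))    # certainty of no punishment
print(expected_utility(pocket_change, prison_or_retaliation, 0.001))  # a 0.1% chance of punishment
```

At `p_punished = 0` the attack nets the pocket change, but already at a 0.1% chance of punishment the expected value goes negative, which is why the zero-coefficient hypothetical only bites when the zero-risk assumption also holds.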
Yes, I would make this distinction too. Yet, I submit that few people actually believe, or even say they believe, that the main problems in the world are caused by people being gratuitously or sadistically evil. There are some problems that people would explain this way: violent crime comes to mind. But I don’t think the evil hypothesis is the most common explanation given by non-rationalists for why we have, say, homelessness and poverty.
That is to say that, insofar as the common rationalist refrain of “problems are caused by incentives dammit, not evil people” refers to an actual argument people generally give, it’s probably referring to the argument that people are selfish and greedy. And in that sense, the rationalists and non-rationalists are right: it’s both the system and the actors within it.