“Do you believe that impersonal and accidental forces of history generate as much misery, which you can fight against, as the deliberate efforts of people who disagree with you? Wouldn’t that be surprising if it were true?”
Yes, I believe that, and no, it is not surprising. Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree. So working on those issues will tend to do less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree.
Harm and benefit are two-place words; harm is always to someone, and according to someone’s values or goals.
If two people have different values—which can be as simple as each wanting the same resource for themselves, or as complex as different religious beliefs—then harm to the one can be benefit to the other. It might not be a zero-sum game because their utility functions aren’t exact inverses, but it’s still a tradeoff between the two, and each prefers their own values over the other’s.
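To make the "tradeoff but not zero-sum" point concrete, here is a toy sketch with made-up utility numbers, assuming two agents who each favor a different change to the status quo:

```python
# Toy illustration (hypothetical numbers): two agents with conflicting
# values over the same pair of possible changes. The game is a tradeoff
# without being exactly zero-sum, because the utilities are not exact
# inverses of one another.

# Each outcome maps to (utility_for_A, utility_for_B).
outcomes = {
    "status quo":     (0.0, 0.0),
    "change A wants": (1.0, -0.6),   # benefits A, harms B
    "change B wants": (-0.6, 1.0),   # benefits B, harms A
}

for name, (u_a, u_b) in outcomes.items():
    # The sums differ across outcomes, so this is not a constant-sum
    # (zero-sum) game, yet each agent strictly prefers the change the
    # other agent opposes.
    print(f"{name}: A={u_a:+.1f}, B={u_b:+.1f}, sum={u_a + u_b:+.1f}")
```

The specific payoffs are arbitrary; the only structural claim is that each contested change helps one party and hurts the other, while the totals are not constant.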
On this view, such issues where people disagree are tautologically those where each change one of them wants benefits themselves and harms the other. Any changes that benefit everyone are quickly implemented until there aren’t any left.
If you share the values of one of these people, then working on the problem will result in benefit (by your values), and you won’t care about the harm (by some other person’s values).
If on most or all such divisive issues, you don’t side with any established camp, that is a very surprising fact that makes you an outlier. Can you build an EA movement out of altruists who don’t care about most divisive issues?
Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree. So working on those issues will tend to do less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
A disagreement could resolve into one side being mostly right and another mostly wrong, so actual harm+benefit isn’t necessary, only expected harm+benefit. All else equal, harm+benefit is worse than pure benefit, but usually there are other relevant distinctions, so that the effect of a harm+benefit cause could overwhelm available pure benefit causes.
The disagreements I was talking about—which I claim are many, perhaps most, disagreements—are not about unknown or disputed facts, but about conflicting values and goals. Such disagreements can’t be resolved into sides being objectively right or wrong (unless you’re a moral realist). If you side with one of the sides, that’s the same as saying their desires are ‘right’ to you, and implementing their desires usually (in most moral theories in practice) outweighs the cost of the moral outrage suffered by those who disagree. (E.g., I would want to free one slave even if it made a million slave-owners really angry, very slightly increasing the incidence of heart attacks and costing more QALYs in aggregate than the one slave gained.)
This is true in principle, but since I take disagreements pretty seriously I think it is normally false in practice. In other words there is actual harm and actual benefit in almost every real case.
Of course the last part of your comment is still true, namely that a mixed cause could still be better than a pure benefit cause. However, this will not be true on average, especially if I am always acting on my own opinion, since I will not always be right.
… a mixed cause could still be better than a pure benefit cause. However, this will not be true on average …
That’s the question: what is the base rate among the options you are likely to notice? If visible causes came in equivalent pairs, one with harm in it and one without, all other traits similar, that would be true. Likewise if pure benefit causes tended to be stronger. But it could be the case that the best pure benefit causes have less positive impact than the best mixed benefit causes.
… since I take disagreements pretty seriously I think it is normally false in practice. In other words there is actual harm and actual benefit in almost every real case.
How does your taking disagreements seriously (what do you mean by that?) bear on the question of whether most real (or just contentious?) causes involve actual harm as well as benefit? (Or do you mean it to characterize your use of the term “disagreement”, i.e., which causes you count as involving disagreement? For example, global warming could be said to involve no disagreement that’s to be taken seriously.)
Yes, it could be the case that the best pure benefit causes have less positive impact than the best mixed benefit causes. But I have no special reason to believe this is the case. If the benefit of the doubt is going to go to one side without argument, I would give it to the pure benefit causes, since they don’t have the additional negative factor.
By taking disagreements seriously, I mean that I think that if someone disagrees with me, there is a good chance that there is something right about what he is saying, especially on issues of policy (i.e., I don’t expect people to advocate policies that are 100% bad, with extremely rare exceptions).
That global warming is happening, and that human beings are a substantial part of the cause, is certainly true. This isn’t an issue of policy in itself, and I don’t take disagreement about it very seriously in comparison to most disagreements. However, there still may be some truth in the position of people who disagree, e.g. there is a good chance that the effects will end up being not as bad as generally predicted. A broad outside view also suggests this, as for example in previous predicted disasters such as the Kuwait oil fires, or the Y2K computer issue, and so on.
In any case the kind of disagreement I was talking about was about policy, and as I said I don’t generally expect people other than Hitler to advocate purely evil policies. Restricting carbon emissions, for example, may be a benefit overall, but it has harmful effects as well, and that is precisely the reason why some people would oppose it.