Yes, it certainly cuts both ways. Of course, your country’s welfare system is also available to you and your family if you ever need it, and you benefit more directly from social peace and democracy in your country, which is helped by these transfers. It is hard to see how you could have a functioning democracy without poor people voting for some transfers, so unless you think democracy has no useful function for you, that’s a cost in your best interest to pay.
Andaro
But an expanding circle of moral concern increases value differences. If I have to pay for a welfare system, or else pay for a welfare system and also biodiversity maintenance and also animal protection and also development aid and also a Mars mission without a business model and also far-future climate change prevention, I’d rather just pay for the welfare system. Other ideological conflicts would also go away, such as the conflict between preventing animal suffering and maintaining pristine nature, ethical natalism vs. ethical anti-natalism, and so on.
To be precise, this seems like a cost to Alice of Bob having a wide circle, if Alice and Bob are close. If they aren’t, and especially if we bring in a veil of ignorance, then Alice is likely to benefit somewhat from Bob having a wide circle.
Yes, but Alice doesn’t benefit from Bob’s having a circle so wide it contains nonhuman animals, far future entities or ecosystems/biodiversity for their own sake.
and my reaction is that none of that stops children from dying of malaria, which is really actually a thing I care about and don’t want to stop caring about
The OP asks us to reexamine our moral circle. Having done that, I find that nonhuman animals and far future beings are actually a thing I don’t care about and don’t want to start caring about.
If the required kind of multiverse exists, this leads to all kinds of contradictions.
For example, in some universes, Personal Identity X may have given consent to digital resurrection, while in others, the same identity may have explicitly forbidden it. In some universes, their relatives and relationships may have positive preferences regarding X’s resurrection; in others, they may have negative preferences.
Given your assumed model of personal identity and the multiverse, you will always find that shared identities have contradicting preferences. They may also have made contradicting decisions in their respective pasts, which makes multiverse-spanning acausal reciprocity highly questionable. For every conceivable identity, there are instances that have made decisions in favor of your values, but also instances that did the exact opposite.
These problems go away if you define personal identity differently, e.g. by requiring biographical or causal continuity rather than just internal state identity. But then your approach no longer works.
I personally am not motivated to be created in other Everett branches, nor do I extend my reciprocity to acausal variants.
>We all want to save the world, right?
No. This is your first mistake, I think. You take the ideology’s authority for granted. You shouldn’t. Dropping altruism outside of self-based reciprocity was the single best decision I have ever made. The world is not worth saving. It’s not worth destroying either.
If you’re suffering from being low-status in the EA movement, you should not be a part of the EA movement. EA as an ideology has deep flaws, and as a social dynamic, it’s outright horrible. Politically, it’s parasitic.
The last part is the only part I still care about. I went through a curve from caring about making the world a better place and therefore supporting EA to wanting to make the world a better place but being skeptical about EA’s consequences to not wanting to make the world a better place.
If EAs weren’t politically parasitic, we would be free to simply ignore them, and this would be the correct answer. Unfortunately, we can’t ignore them, because they push policies and influence politics in a way that makes us worse off. This is why I’m willing to actively oppose their goals.
I distinguish two aspects of status. One is to feel good about being accepted by others. That’s nice, but I don’t think it’s central. There are many ways to feel good and many options to substitute for acceptance of any particular person or group.
The second aspect is “getting things done”. Unfortunately, we live in a world filled with people who can harm us. Coercing or convincing them not to do so is unfortunately an important practical necessity. This is why we can’t simply ignore the EA movement, or organized religion, or neonazis or any other ideology that wants to extract value from our lives or limit our personal choices.
I really do recommend that you stop supporting the EA movement. Nothing good will come of it.
I have no idea what toonalfrink’s goals for the conversation are. But when someone writes something like,
>So you find yourself in this volunteering opportunity with some EA’s and they tell you some stuff you can do, and you do it, and you’re left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master’s degree to get grades that set you apart, maybe that’ll get you invited with the cool kids?
then the only sensible option from my perspective is to take a step back and consider why you’re seeking status from this community in the first place, and what motivations go into this behavior. At this point, I think it’s well worth reflecting on:
1) Why altruism in the first place?
2) Given 1, why EA?
3) Given 2, why seeking status?
Community norms tend to be self-reinforcing. It’s worth pointing out that there are people with a genuinely different perspective, and that this perspective has a reason.
I agree with other commenters that the slavery framing is unhelpful. However, I mostly do agree with Jordan Peterson otherwise.
Human rights set expectations for how we treat each other. From my perspective, respect for them is conditional on reciprocity. I will not respect the rights of an individual who doesn’t respect mine. Their function is to set standards of behavior that make everybody better off.
A benefit of human rights, rather than mammal rights or smaller-identity rights, is that they benefit everyone who can understand the concept, so they’re memetically adequate to cover the basics in a globalized world without incurring the huge cost of including the very large number of nonhuman animals. Basically, everybody who can participate in the discussion should be able to agree on the concept—and benefit from that agreement—without having to commit to universal species-independent collectivism.
For this reason, I don’t see the suffering of animals as a problem except for empathy management and perhaps creating a culture of anti-cruelty, if we need it for other purposes.
One problem with human rights is that they are not necessarily well-defined in all contexts, and sometimes people can do strategically better by respecting the rights only of a subset of people. A possible solution would be to insist on minimal standards for the very basic expectations, e.g. don’t randomly torture or murder people you dislike, while setting higher standards only for subgroups, e.g. citizenship transferring the right to live and work in a certain territory.
Indeed, as mentioned, without altruism, voting behaviour is fairly inexplicable.
I vote to reward or penalize politicians based on their previous choices, rather than to create better outcomes. That is, I look back, not forward.
There are some exceptions, e.g. when a candidate sends unusually credible signals before assuming office, such as glorifying torture. Other than that, I mostly ignore promises, and instead implement reciprocity for past decisions.
Edited after more reflection:
Whereas the expected benefit of voting to you alone is the Brexit harm to you / 3 million = $3 trillion / 2 (effect on UK only) / 65 million (UK population) / 3 million = 0.7 cents – illustrating why voting needs at least a tiny bit of altruism to be rational.
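The quoted arithmetic can be checked directly. A minimal sketch, using only the figures assumed above ($3 trillion total Brexit harm, half falling on the UK, 65 million residents, 1-in-3-million odds of casting the pivotal vote), none of which are independent estimates:

```python
# Back-of-the-envelope check of the quoted expected-benefit-of-voting figure.
# All inputs are the assumptions quoted above, not independent estimates.
brexit_harm_total = 3e12       # $3 trillion total harm
uk_share = 0.5                 # effect on UK only
uk_population = 65e6           # UK population
pivotal_vote_odds = 1 / 3e6    # chance one vote decides the outcome

harm_per_person = brexit_harm_total * uk_share / uk_population
expected_benefit = harm_per_person * pivotal_vote_odds

print(round(harm_per_person))              # harm per UK resident, in dollars
print(round(expected_benefit * 100, 2))    # expected benefit of voting, in cents
```

This gives roughly $23,000 of harm per resident and about 0.77 cents of expected personal benefit per vote, consistent with the ~0.7-cent figure in the quote.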
This is interesting. I do expect that for things like marginal tax rates, my emotions are scope-insensitive and my reciprocity mostly symbolic/psychological.
However, if I share interests with many other voters who voted for those interests, all of their votes benefited my interests and I can reciprocate not just for/against politicians, but also for/against all these other voters. If I like low tax rates, I can benefit every voter who’s voted for low tax rates by voting for low tax rates.
More importantly, some issues have much higher impact on my utility than marginal tax rates. If I could choose between $1 billion personal purchasing power, and the liberty to buy a deadly dose of pentobarbital if/when I choose to die peacefully, I’d take the pentobarbital. Which means that politicians who’ve reduced the probability that this liberty is legal for me have forced an opportunity cost of over $1 billion on me. Perhaps voting is still not the best way to implement reciprocity in such a case, but outside of direct attacks on ex-politicians, e.g. what the Christians did to Els Borst, it’s one of the remaining ways to get back at them and therefore still well worth doing.
The demand for sexual violence in fiction is easy to explain. It allows us to fantasize about behavior that would be prohibitively disadvantageous in practice, and it allows us to reflect on hypothetical situations that are relevant to our interests, such as how to deal with violent people.
My default model for abusive relationships *where the right to exit is not blocked* is indeed revealed preference. Not necessarily revealed preference for the abuse, but for the total package of goods and bads in the relationship.
The sex and romance market is a market after all, and different individuals have different market power. This is why some people pay for sex, and I’m sure some people accept abuse they would not tolerate from a partner with less market power.
Of course, this isn’t true if someone breaks a promise unexpectedly, like ignoring an agreed-upon safe word. That’s massive enemy action. But if it happens repeatedly, and the relationship is maintained for longer periods of time, even though the right to exit is not blocked and both partners could break it off, my default interpretation is still revealed preference for the total package.
I think it’s worth making the distinction between reward hacking, pleasure wireheading, and addiction more clearly. There’s some overlap, but these are different concepts with different implications for our utility.
The whole ideological subtext reeks of puritan moralism. You imply that we exist to make humanity’s future bigger, rather than to do whatever the hell we actually prefer.
As long as pleasure wireheading is consensual, you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth and reproduction agenda. However, we are not slaves owned by you who owe you labor and financial support for that agenda. If you can’t find enough people willing to forgo consensual pleasure-wireheading to build the future you want to build, consider that it may be an indicator that people don’t actually see your agenda as worth supporting.
Personally, I’d gladly take a drug that eliminates all my suffering and doubles all my pleasure, even if it drastically reduced my life expectancy. Mere existence isn’t everything.
>if we are able to wirehead in an effective manner it might be morally obligatory to force them into wireheading to maximize utility.
Not interested in this kind of “moral obligation”. If you want to be a hedonistic utilitarian, use your own capacity and consent-based cooperation for it.
I didn’t read the whole post, but most of that is just the right to exit being blocked by various mechanisms, including socioeconomic pressure and violence. And the socioeconomic ones aren’t even necessarily incompatible with revealed preference; if the alternative is homelessness, this may suck, but the partner still has no obligation to continue the relationship and the socioeconomic advantages are obviously a part of the package.
What? Why? No sane person would classify “he will murder me if I leave” as “the right to exit isn’t blocked”. I don’t expect much steelmanning from the downvote-bots here, but if you’re strawmanning on a rationalist board, good-faith communication becomes disincentivized. It’s not like I have skin in the game; all my relationships are nonviolent and I neither give a shit about feminism nor anti-feminism.
Still, if “she’s such a nice person but sometimes she explodes” isn’t compatible with revealed preference for the overall relationship, I don’t know what is. My argument was never an argument that such relationships are great or that you should absolutely never use your right to exit. It’s just a default interpretation of many relationships that are being maintained even though they contain abuse. Obviously if you’re ankle-chained to a wall without a phone, that doesn’t qualify as revealed preference. And while I don’t object to ways government can buffer against the suffering of homelessness or socioeconomic hardship, it’s still a logical necessity that the socioeconomic advantages of a relationship are a part of that relationship’s attractiveness, just like good pay is a reason people stay in shitty jobs: it doesn’t violate the concept of revealed preference, it doesn’t make those jobs nonconsensual, and it wouldn’t necessarily make people better off if those jobs didn’t exist.
And by the way, it’s right to exit, not right to exist. There’s a big difference.
I observe that you are communicating in bad faith and with hostility, so I will use my right to exit for any further communication with you.
Vaniver, your post is eloquent and relevant, yet of course no one gives a shit about that after being downvoted for engaging in a controversial topic in the first place. At that point, all I see is undifferentiated hostility and I’m not going to engage in the cognitive effort to change that view.
It’s not even really your fault. I engaged in a conversation of a controversial, moralistic nature without having any strategic selfish reason to do so. That’s a bad habit if there ever was one. Alas, humans are not always strategic, and sometimes I need the reminder of what really matters and what doesn’t.
From that perspective, domestic abuse is irrelevant. The average abuse victim has never done anything for me to deserve my positive reciprocity. I’m not an abuse victim and if I were, I’d simply take personal revenge. Unless of course the abuser is so valuable to my life that I see them as a net-benefit despite the occasional abuse. Hard but not impossible, which was of course my whole point.
Less Wrong and its community have done little for me. You’re not as terrible as EA, and I’ve gained the occasional useful insight here, but you’re still toxic on net, so I’d classify you as minor enemies. Marginally worth harming, but nowhere near the top of my list.
So to sum up, fuck it and good riddance. I actually kind of thank you for the downvotes in this case, this type of negative interaction helps me refocus my perspective and priorities. In fact, I’m now slightly less caring about consent and abuse than I was before this conversation, and that’s probably quite rational for my personal values.
Not all proposed solutions to x-risk fit this pattern: If government spends taxes to build survival shelters that will shelter only a chosen few who will then go on to perpetuate humanity in case of a cataclysm, most taxpayers receive no personal benefit.
Similarly, if government-funded programs solve AI value loading problems and the ultimate values don’t reflect my personal self-regarding preferences, I don’t benefit from the forced funding and may in fact be harmed by it. This is also true for any scientific research whose effect can be harmful to me personally even if it reduces x-risk overall.
I’m confused about OpenAI’s agenda.
Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.
If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.
>Our life could be eternal and thus have meaning forever.
Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia moralizing through endless spin efforts, which is now a part of common culture, including popular culture.
Let’s just look at our culture, as well as contemporary and historical global cultures. Do we have:
a consensus of consensualism (life and suffering should be voluntary)? Nope, we don’t.
a consensus of anti-torture (torturing people being illegal and immoral universally)? Nope, we don’t.
a consensus of proportionality (finite actions shouldn’t lead to infinite punishments)? Nope, we don’t.
You’d need at least one of these to just *reduce* the probability of eternal torture, and then it still wouldn’t guarantee an acceptable outcome. And we have none of these.
They would if they could, and the only reason you’re not already being tortured for all eternity is that they haven’t found a way to implement it.
The probability of getting it done is small, but that is not an argument in favor of your suggestion: if it can’t be done, you don’t get eternal meaning either; if it can be done, you have effectively increased the risk of eternal torture for all of us by working in this direction.
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They’re almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn’t naturally occur. And that life obviously will contain large amounts of suffering. People don’t like hearing that, especially in the x-risk reduction demographic, but it’s pretty clear the goals are at odds.
Since I’m a non-altruist, there’s not really any reason to care about most of that future suffering (assuming I’ll be dead by then), but there’s not really any reason to care about saving humanity from extinction, either.
There are some reasons why the angle is not a full 180 degrees: There might be aliens who would also cause suffering and humanity might compete with them for resources, humanity might wipe itself out in ways that also cause suffering such as AGI, or there might be practical correlations between political philosophies that cause both high suffering and high extinction probability, e.g. torturers are less likely to care about humanity’s survival. But none of these make the goals point in the same direction.
The moral circle is not ever expanding, and I consider that a good thing.
A very wide moral circle is actually very costly to a person. Not only can it cause a lot of stress to think of the suffering of beings in the far future or nonhuman animals in farming or in the wild, but it also requires a lot of self-sacrifice to actually live up to this expanded circle.
In addition, it can put you at odds with other well-meaning people who care about the same beings, but in a different way. For example, when I still cared about future generations, I mostly cared about them in terms of preventing their nonconsensual suffering and victimization. However, the common far-future altruism narrative is that we ought to make sure they exist, not that they be prevented from suffering or being victimized without their consent. This is cause for conflict, as exemplified by the −25 karma points or so I gathered on the Effective Altruism Forum for it at the time.
Since then, my moral circle has contracted massively, and I consider this to be a huge improvement. It now contains only me and the people who have made choices that benefit me (or at least benefit me more than they harm me). There is also a circle of negative concern now, containing all the people who have harmed me more than they benefit me. I count their harm as a positive now.
My basic mental heuristic is, how much did a being net-benefit or net-harm me through deliberate choices and intent, how much did I already reciprocate in harming or benefitting them, and how cheap or expensive is it for me to harm or benefit them further on the margin? These questions get integrated into an intuitive heuristic that shifts my indifference curves for everyday choices.
The psychological motivation for this contracted circle is based on the simple truth that the utility of others is not my utility, and the self-awareness that I have an intrinsic desire for reciprocity.
There is yet another cost to a wide circle of moral concern, and that is the discrepancy with people who have a smaller circle. If you’re my compatriot or family member or fellow present human being, and you have a small circle of concern, I can expect you to allocate more of your agency to my benefit. If you have a wide circle of concern that includes all kinds of entities who can’t reciprocate, I benefit less from having you as an ally.
When people have a wide circle of concern and advocate for its widening as a norm, this makes me nervous because it implies huge additional costs forced on me, through coercive means like taxation or regulations, or simply by spreading benevolence onto a large number of non-reciprocators instead of me and the people who’ve benefitted me. That actually makes me worse off, and people who make me worse off are more likely to receive negative reciprocity rather than positive reciprocity.
I love human rights because they’re a wonderful coordination instrument that makes us all better off, but I now see animal rights as a huge memetic mistake. Similarly, there is little reason to care about far-future generations whose existence is never going to overlap with any of us in terms of reciprocity, and yet we’re surrounded by memes that require we pay massive costs for their wellbeing.
Moralists who advocate this often use moralistic language to justify it. This gives them high social status and it serves as an excuse to impose costs on people who don’t intrinsically care, like me. If I reciprocate this harm against them, I am instantly a villain who deserves to be shunned for being a villain. This dynamic has made me understand the weird paradoxical finding that some people punish what ostensibly seems to be prosocial behavior. Moralism can really harm us, and the moralists should be forced to compensate us for this harm.