It doesn’t appear this is discussed much, so I thought I’d start a conversation:
Who on LessWrong is uncomfortable with, or doesn’t like, so much discussion of effective altruism here? If so, why?
Other Questions:
Do you feel there’s too much of it now, or would even a little bit of it seem aversive?
Do you think such discussion is inappropriate given the implicit or explicit goals of LessWrong?
Has too much discussion of effective altruism caused you to think less of LessWrong, or use it less?
For what reason(s) do you disagree with effective altruism? Is it because of your values and what you care about, or because you don’t like normative pressure to take such strong personal actions? Or something else?
I want to discuss it because the proportion of the LessWrong community that is averse to, indifferent to, or simply uninterested in effective altruism doesn’t express its opinions much. Also, while I identify with effective altruism, I don’t only value this site as a means to altruistic ends, and I don’t want other parts of the rationalist community to feel neglected.
Personally, I’m indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously. I don’t really disagree with it, I’m just not interested. As I’ve mentioned before, I care about myself, my family, my friends, and maybe some prominent people who don’t know me, but whose work makes my life better. I feel for the proverbial African children, but not enough for anything more than a token contribution. If LW had subreddits, /r/EA would be a good one, though one of those I would rarely, if ever, visit. As it is, I skip the EA discussions, but I don’t find them annoyingly pervasive.
That is exactly my own view. I can see the force of the arguments for EA, but remain unmoved by them. I don’t mind it being discussed here, but take little interest in the discussions. I have no arguments against it (although the unfortunate end of George Price is a cautionary tale, a warning of a dragon on the way), and I certainly don’t want to persuade anyone to do less good in the world.
It’s rather like the Christian call to sainthood. Many are called, but few are chosen.
ETA: I am interested, as a spectator, in seeing how the movement develops.
Upvote for agreement.
I find that the extent of my power should be my concern: my local community, those whom I can reach and touch. To draw a number out of the air, anyone further than 100km from me does not deserve my attention; indeed, probably not anyone further than 50km either (except that I may one day cross paths with them).
I would rather spend $X on the local homeless people of my city than on unknown suffering in some distant, far-off place. (In fact I would rather not spend $X at all, and instead donate my time to the community nearby, which is exactly what I do.)
While this is my opinion I certainly don’t mind the EA stuff I see; I just don’t partake in it very much.
Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with them? I’m assuming that you take it somewhat figuratively, e.g. if you have family in another country you’re still invested in what happens to them.
Do you care whether the unknown people are suffering more? If donating $X does more than donating Y hours of your time, does that concern you?
Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with them?

It’s more of a heuristic. Any ethic that used a specific measurement of distance in its raw calculation would be odd: there would have to be a line somewhere such that on one side of it I care about a person and on the other I don’t, and someone could stand exactly on that line. That would be mostly silly.

if you have family in another country you’re still invested in what happens to them.

Most of my family lives within a few suburbs of me. I have a few cousins who have been living in England for a few years; I barely even know what they are doing with their lives any more. (I wouldn’t excommunicate someone for being far away, but I wouldn’t try as hard for them as for someone living in the same city as me.) My grandmother keeps in touch with the far-away cousins, but I don’t think it’s a requirement for me to do so, and I am sure they don’t feel they have to keep up with my life either.

Do you care whether the unknown people are suffering more?

Mostly because of the unknowns—no. Unknown people are suffering by an unknown amount; without seeking out those unknowns I have no reason to care.

If donating $X does more than donating Y hours of your time, does that concern you?

There is also the case of warm fuzzy utilons: donating my time lets me know that my intended impact hit the nail on the head, where I might otherwise find it difficult to know whether $X made the intended impact. Donating money is kind of like outsourcing the impact to someone else, letting them use that $ for what they feel is right, and I don’t necessarily feel I can trust others with my effectiveness desires.
Does this make sense? I can try to explain it again if you point out what isn’t making sense...
Upvoted not for agreement*, but for expressing well what seems like a common enough sentiment, in a way that’s efficient and useful.
*Agreement, given my current state of mind, would be odd because I, well, identify with effective altruism in ways you don’t. I don’t disagree with you, though, because I don’t disbelieve your stated preferences and values. I don’t think less of someone who isn’t “on board” with effective altruism, either. Agree/Disagree seems like an error; we just perceive and act on what we value differently.
For my part, it strikes me as the greatest and most important contribution this place has made to my life.
(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)
It appears to me that there are two LessWrongs.
The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and considering how different processes performed in the prisoner’s dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.
The first LessWrong results in serious insights that should be integrated into one’s life. In Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem, the authors take a moment to discuss the issue of “Defecting Against CooperateBot”—if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight that I had to be told, and LessWrong is valuable to me because it has helped me internalize game theory. The first year that I took the LessWrong survey, I answered that of course you should cooperate in the one-shot, non-shared-source-code prisoner’s dilemma. On the latest survey, I instead put the correct answer.
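To make the payoff logic concrete, here is a minimal sketch using the conventional textbook prisoner’s dilemma numbers (T=5, R=3, P=1, S=0); these particular values are illustrative, not anything taken from the paper:

```python
# One-shot prisoner's dilemma payoffs for "me", under the usual ordering
# T > R > P > S (temptation > reward > punishment > sucker's payoff).
PAYOFF = {
    ("C", "C"): 3,  # we both cooperate: reward
    ("C", "D"): 0,  # I cooperate, they defect: sucker's payoff
    ("D", "C"): 5,  # I defect, they cooperate: temptation
    ("D", "D"): 1,  # we both defect: punishment
}

def best_response(opponent_move):
    """Pick the move that maximizes my payoff against a known opponent move."""
    return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, opponent_move)])

# CooperateBot plays C unconditionally; once you know that, defection dominates.
print(best_response("C"))  # -> D
```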
The second LessWrong is the LessWrong of utilitarianism, especially of a Singerian sort, which I find to clash with the first LessWrong. My understanding is that Peter Singer argues that because you would ruin your shoes to jump into a creek to save a drowning child, you should incur an equivalent cost to save the life of a child in the third world.
Now never mind that saving the child might have positive expected value to the jumper. We can restate Singer’s moral obligation as a prisoner’s dilemma, and then we can apply something like TDT to it and make the FairBot version of Singer: I want to incur a fiscal cost to save a child on the other side of the world iff parents on the other side of the world would incur a fiscal cost to save my child. I believe Singer would deny this statement (and would be more aghast at the PrudentBot version), and would insist that there’s a moral obligation regardless of any theoretical reciprocation by the other party.
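As a very rough sketch of that distinction: the toy bots below check behaviour by running the opponent, whereas the real FairBot in the paper reasons about proofs via Löb’s theorem, so this is my own simplification rather than the paper’s construction:

```python
def cooperate_bot(opponent):
    # Cooperates with everyone, unconditionally (Singer's demand, on this reading).
    return "C"

def defect_bot(opponent):
    # Defects against everyone, unconditionally.
    return "D"

def fair_bot(opponent):
    # Toy FairBot: cooperate iff the opponent would cooperate back.
    # "Would cooperate back" is approximated here by probing how the opponent
    # treats an unconditional cooperator; the paper's FairBot instead searches
    # for a proof that the opponent cooperates with it.
    return "C" if opponent(cooperate_bot) == "C" else "D"

for name, bot in [("CooperateBot", cooperate_bot),
                  ("DefectBot", defect_bot),
                  ("FairBot", fair_bot)]:
    print(f"FairBot vs {name}: {fair_bot(bot)}")
# FairBot reciprocates: C against CooperateBot and FairBot, D against DefectBot.
```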
I notice that I am being asked to be CooperateBot. I don’t think CFAR has “Don’t be CooperateBot,” as a rationality technique, but they should.
Practically, I find that ‘altruism’ and ‘CooperateBot’ are synonyms. The question of reciprocity hangs in the background. It must, because Azathoth generates both those who are CooperateBot and those who exploit CooperateBots.
I will also point out that this whole discussion is happening on the website that exists to popularize humanity’s greatest collective action problem. Every one of us has a selfish interest in solving the friendly AI problem. And while I am not much of a utilitarian, I would assume that the correct utilitarian charity answer in terms of number of people saved/generated would be MIRI, and that the most straightforward explanation is Hansonian cynicism.
‘Altruism’ for me doesn’t mean “I assign infinite value to my own happiness (and freedom, beauty, etc.) and 0 to others’, but everyone would be better off (myself included) if I sacrificed my own happiness for others’. So I’ll sacrifice my own happiness for others’.” Rather, I assign some value to my own happiness, but a lot more value to others’ happiness. I care unconditionally about others’ happiness.
Since it’s only a Prisoner’s Dilemma if I value ‘I defect, you cooperate’ over ‘we both cooperate’, for me high-stakes ‘defecting’ would mean directly indulging in my desire to help others, while ‘cooperating’ via UDT would mean sacrificing humanity’s welfare in some small way in order to keep a non-utilitarian agent from doing even more to reduce humanity’s welfare. The structure of the PD has nothing to do with whether the agents are selfish vs. altruistic (as long as you take that into account when initially calculating payoffs).
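To make that last point concrete, here is a minimal sketch with made-up material payoffs and an illustrative altruism weight (nothing here is a claim about anyone’s actual utility function), showing that once concern for the other player is folded into the payoffs, dominance can flip and the game stops being a PD at all:

```python
# Material payoffs (mine, theirs) in a standard one-shot PD.
MATERIAL = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_utility(outcome, altruism_weight):
    # My utility = my material payoff + weight * the other player's payoff.
    mine, theirs = MATERIAL[outcome]
    return mine + altruism_weight * theirs

def dominant_move(altruism_weight):
    # Return my dominant move if one exists, else None.
    best = {
        opp: max(("C", "D"),
                 key=lambda me: my_utility((me, opp), altruism_weight))
        for opp in ("C", "D")
    }
    return best["C"] if best["C"] == best["D"] else None

print(dominant_move(0.0))  # purely selfish payoffs: D dominates (a true PD)
print(dominant_move(1.0))  # fully altruistic payoffs: C dominates (no longer a PD)
```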
Thought experiments like Singer’s are how I found out that I do in fact terminally value people who are distant from me in space (and time). My behavior isn’t perfectly utilitarian, but I’d take a pill to become more so, so my revealed preferences aren’t what I’d prefer them to be.
I don’t know if my praise means anything to you, but you have it. If the MIRI brings about a positive singularity then its members and supporters are likely to receive lots more praise, from a lot more people, for a very long time.
My understanding is that Peter Singer argues that because you would ruin your shoes to jump into a creek to save a drowning child, you should incur an equivalent cost to save the life of a child in the third world.

BTW that is no longer possible (if it ever even was) unless you’re wearing pretty expensive shoes indeed.
On the other hand, a pair of dress shoes and a suit might still be more expensive, and might get ruined when you save somebody from a pool of mud.
“Defect against CooperateBot” makes sense if the Prisoner’s Dilemma game is the only thing that’s going on. However, in the real world, CooperateBot might have friends who will take revenge, or CB might be doing useful work.
From memory: a person who refused to defect in a PD because they didn’t want to be that sort of person.
Defecting against CB is equivalent to “Never give a sucker an even break”, and it might lead to a world where people spend a lot more resources than otherwise necessary on defending themselves.
Seeing as, in terms of absolute as well as disposable income, I’m probably closer to being a recipient of donations than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It’s something I know I couldn’t participate in for at least 5 to 7 more years, even if I were so inclined (I expect to live in the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn’t afford anything else. Maybe one day, when I stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel a bit awkward and out of place. If you ain’t got no money, take yo’ broke ass home!
Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good motives and consequences. On the one hand, the fact that there are people dedicated to donating large fractions of their income is a laudable thing in itself. On the other hand...
I don’t believe for one second that effective altruism would have been nearly as big of a phenomenon on LessWrong, if the owners of LessWrong hadn’t been living off people’s donations. MIRI is a charity that wants money. Giving to charity is probably the biggest moral credential on LW. Coincidence? I think not.
Ensuring the flow of money in a particular direction may not be the very best effort one can put into making the world a better place. Sure, it’s something, and at least in the short term a very vital something, but more than anything else it seems to be a way to patch up, or prop up, a part of the system that was shaky to begin with. The long-term end goal should be to make people less reliant on charity money. Sometimes there is a shortage of knowledge, or of power, or of good incentives, rather than of money. “Throwing money at a cause” is just one way to help—although I suppose effective altruist organizations already incorporate the knowledge of this problem in their concept of “room for more funding”.
We already have governments that take away a large portion of our incomes anyway, that have systems in place for allocating funds and efforts, and that purport to promote the same kinds of causes as charities, yet often function inefficiently and even harmfully. However, they’re a lot more reliable in terms of actually ensuring the collection of “enough” funds. To pay taxes and to give to charity (yes, I’m aware that charitable giving unlocks tax deductions) is to contribute to two systems that are doing the same job, the second being there mostly because the first isn’t doing its job as it should. In this way, and possibly assuming that EA would be a larger movement in the future than it is now, charity might work to mask government inefficiencies and damage or to clean up after them.
In the context of earning to give, participating in a particularly noxious industry as a way of earning your livelihood, and using part of that money to contribute to altruist causes, looks to me like it imposes a tax on the well-being you thereby bring into the world. I’m not sure that tax is always smaller than 100%. And it’s more difficult to quantify the negative externalities of your job than the positive effects of your donations, because the former are more causally distant.
To take the discussion back to the meta level, I’m but one user with not so much karma, and probably a non-central example of a LessWronger, so I don’t demand that anyone accommodate me and my preference not to discuss EA. However, knowing that other users basically come from an effective altruism mindset makes discussion with them somewhat difficult, since we don’t have the same assumptions about the relationship between money and welfare. The most annoying of all is the very rare and occasional display of charitable snobbery, or a commitment not to aid first-world people who are not effective altruists, or who don’t donate enough. (I’ve seen that, but Google seems to fail me at this moment.) It seems easier and more pleasant to discuss ethical matters with people who don’t come from an EA worldview, and personally I’d like to see more of a plurality of approaches on the matter on LW.
tl;dr It’s a rich people thing and therefore alien to me; as for objective merits, I’ve got mixed positive and negative feelings about it. But in the end, to each their own.
I think that the image of EA on LW has been excessively donation-focused, but I’d like to point out that things like earning to give are only one part of EA.
EA is about having the biggest positive impact that you can have on the world, given your circumstances and personality. If your circumstances mean that you can’t donate, or you disagree with donations being the best way to do good, that still leaves options like e.g. working directly for some organization (be it a non-profit or for-profit) that has a positive impact on the world. Some time back I wrote the following:

Effective altruism says that, if you focus on the right career, you can have an even bigger impact! And the careers don’t even need to be exotic, demanding ones that only a select few can do (even if some of them are). Some of the top potential careers that 80,000 Hours has identified so far include things as diverse as being an academic, civil servant, journalist, marketer, politician, or software engineer, among others. Not only that, they also emphasize finding your fit. To have a big impact on the world, you don’t need to shoehorn yourself into a role that doesn’t suit you and that you hate—in fact you’re explicitly encouraged to find a high-impact career that fits you personally.

Analytic? Maybe consider research, in one form or another. Want to mostly support the cause from the side, not thinking about things too much? Let the existing charity evaluation organizations guide who you donate to and don’t worry about the rest. Or help out other effective altruists. People person? Plenty of ways you could have an impact. There’s always something you can do—and still be effective. It’s not about needing to be superhuman, it’s about doing the best that you can, given your personality, talents and interests.
I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don’t really think short term EA makes sense. I’m surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.
And if the response is that future civilisation is ‘far’ in the overcoming bias sense, well, so are starving children in Africa.
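For what it’s worth, the arithmetic behind that intuition is simple; every number below is a made-up placeholder to show the shape of the expected-value comparison, not an estimate anyone has endorsed:

```python
# Near-term benchmark: a very effective present-day donation might buy on the
# order of ~30 quality-adjusted life years (placeholder figure).
near_term_qalys = 30

# If post-singularity civilisation were ~1e30 times larger than today's ~7e9
# people (the "thirty orders of magnitude" figure above), then even a tiny
# shift in the probability of reaching it swamps the near-term figure.
future_people = 1e30 * 7e9
qalys_per_future_person = 50     # placeholder
probability_shift = 1e-12        # placeholder: a one-in-a-trillion improvement

expected_long_term_qalys = probability_shift * future_people * qalys_per_future_person
print(expected_long_term_qalys / near_term_qalys)  # roughly a 1e28-fold difference
```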
It doesn’t come across as sociopathically cold and calculating to me, though it may come across like that to others. Some people who have never encountered effective altruism or Less Wrong might think you sociopathic, but most people aren’t reflective enough to notice whether they actually care about the overwhelming magnitude of future civilizations, or about starving children far away. So what most others signal and believe about their own values doesn’t lead to consequences different from yours. The capacity to care about so many far-away people seems difficult to maintain all the time, mostly because carrying that much empathy at the forefront of your mind constantly would be overwhelming. Saying so about particular real people might seem sociopathic no matter who says it.
Anyway, at first it confused me why existential risk reduction is correlated with effective altruism. Effective altruism is a common banner which promotes values shared by existential risk reduction and Less Wrong, such as reflective thinking, evidence-based evaluation, and far-mode preferences for helping others across time and space. I think the x-risk reduction community chooses to go with effective altruism because it gains a stronger position from which to attract more capital: financial capital, human capital, relevant expertise, etc.
While x-risk may only get a small slice of the pie that is effective altruism, as effective altruism grows, so does the absolute size of the added support x-risk reduction receives. Also, the common impression is that effective altruists are talented and reflective folk to begin with, so if one can convert their concern for poverty reduction and global health into concern for existential risk reduction, that helps. Further, cause areas which would otherwise be at odds with each other accept each other within effective altruism because they all gain from cooperation. For example, such efforts are coordinated by the Centre for Effective Altruism, which leads to everyone under the ‘EA’ banner receiving more attention.
Meanwhile, the existential risk reduction community doesn’t look worse by associating with effective altruism, even if it will always be a smaller part of it than poverty reduction. It’s not like associating with effective altruism costs the cause of x-risk reduction so much that it would be a smaller or weaker movement. Aside from the coverage of the Future of Humanity Institute’s publications like Superintelligence by Nick Bostrom (and its consequences, like Elon Musk’s support), effective altruism might be boosting the profile of x-risk more than anything else.
The attitude you express towards short-term effective altruism, given the magnitude and importance of post-Singularity civilization, is one I’ve seen expressed by people, some from Less Wrong, within or adjacent to the effective altruist community. I think these disagreements and sentiments don’t come out much in central or mainstream coverage of effective altruism because they would look bad and be confusing to the public.
Proponents of both have the same attitude of “this is a thing that people occasionally give lip service to, that we’re going to follow to a more logical conclusion and actually act on”.
This just strikes me as another Pascal’s mugging.
Disagree, because the probability of this happening is significant. I would rate it as >80%, conditional on us not destroying ourselves.
I’d say well over 80%. The probability of the whole of humanity deciding to stop technological development, and actually successfully coordinating this, is minimal. Even if the human mind cannot be run on a classical computer, we would still tile the universe with quantum computronium.
You people sound awfully sure about far-off future. How well, do you think, an educated Egyptian from, say, 2000 BC would have fared at predicting the future path of the society?
Was there any noticeable technological progress back in 2000 BC?
Looking at science fiction from the 19th century, aerial warfare, armoured land warfare, space exploration were all predicted. The details were all wrong, and I doubt we can predict the details of the future with any great accuracy. But the general theme of humanity expanding across the universe seems a safe extrapolation, even if I don’t know whether the starships will be beam riders or ramscoops or wormhole navigators or Alcubierre drive or some other technology that has not yet been conceived.
Shitloads. Empires rose and fell as they obsoleted each other’s military technologies, architecture evolved tremendously, crop plants diversified and became more nutritious, extractive farming techniques gave way to those that preserved the fertility of the soil rather than strip-mining it, new naval technology was partially responsible for the late Bronze Age collapse… (yes, I’m aware these examples skew towards 1000 BC)
What makes you think that in 4000 years people will think there was noticeable technological progress in the 21st century?
Actually, no, if the limit of the speed of light holds, either there won’t be much expansion or the result of the expansion won’t be very human.
Fairly well for the next 3000 years since not a lot changed.
And yet I feel you don’t want to follow that example of success :-P
Well, for starters, his descendants would no longer be ruled by someone (purporting to be) a living incarnation of the sun god. Something he would no doubt consider extremely shocking.
So we have gone from worshiping the sun god to worshiping the son of god.
Nice pun. Now do you have a serious response?
The life of a typical Egyptian didn’t much change from 2000 BC to 1000 AD. And for most of this time the leaders claimed to have a strong connection or endorsement from the divine. An educated Egyptian living in 2000 BC would be aware of the diversity of religion in the world and would probably expect that over the next 3000 years religious practices would change in form in his country.
Are you joking?
No, the life of the average human didn’t much change from 2000 BC to 1000 AD.
If not for the Fermi paradox, I would agree.
Good point! I would have thought the great filter probably lies in our past, most likely with the origin of life or perhaps multicellular life, but the Fermi paradox is still information against space colonisation.
It’s also unfortunately a distinctly uninformative piece of evidence about anything but space colonization and exponential expansion. All it tells us is that nothing self-replicates across the galaxy to a scale we could see in sheer infrared emissions or truly ridiculous levels of active attempts to be visible. There are so many orders of magnitude and divergent possibilities of things that could exist that we simply wouldn’t know about right now given the observations we have made.
My brain filters it out automatically. Altruism is not even on my mind AT ALL until I’ve sorted out my own problems and feel that my life and my family’s are reasonably secure, happy, safe, and on the way up. I don’t feel I have any surplus for altruism.
I guess in practice I do altruistic things all the time. People ask me for help, I don’t say no. I just don’t seek out opportunities to.
My biggest problem with EA is the excessive focus on a specific metric with no consideration of higher order plans or effects. The epitome of naive utilitarianism.
On one hand, I’m not sure that’s all of effective altruism. Those concerned about existential risk reduction, such as MIRI, consider themselves part of effective altruism, and haven’t always been about quantifying the value of ensuring a flourishing future civilization of trillions of human-like descendants in terms of quality-adjusted life years (henceforth, QALYs). On the other hand, at the 2014 Effective Altruism Summit (I attended, and it’s just a big EA conference), Eliezer Yudkowsky presented the potential value of MIRI’s work, assuming that work would counterfactually prevent the extinction of humanity and Earth-originating intelligence, in terms of QALYs. It was some extravagantly big number expressed in scientific notation, calculated as the expected years of happy life for so many trillions of future people. This is just my impression, but I think Mr. Yudkowsky and MIRI did this to accommodate the rest of the community’s knee-jerk demand for specific metrics.
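I don’t remember the exact figure presented, but the shape of the calculation was something like the following, with entirely hypothetical inputs of my own rather than the numbers used at the Summit:

```python
# Entirely hypothetical inputs, chosen only to show the form of the calculation.
future_people = 1e16                 # hypothetical count of future people enabled
happy_years_each = 100               # hypothetical QALYs per future person
p_extinction_averted = 1e-3          # hypothetical counterfactual credit

expected_qalys = future_people * happy_years_each * p_extinction_averted
print(f"{expected_qalys:.1e} QALYs")  # an "extravagantly big number" in scientific notation
```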
I’ve also met several folk hailing from Less Wrong and its cluster in person-space, with loftier visions of improving the fate of humanity in the nearer-term future than just handing out mosquito nets or deworming children near the equator, who are lukewarm towards or supportive of effective altruism as a community. They seem to be dismissive of naive utilitarianism in effective altruism, too. I myself take issue with too much utilitarianism being injected into effective altruism. I think of effective altruism as a vehicle that took inspiration from utilitarianism, but that should mostly serve as a motivator and coordinating network for pragmatic action among all sorts of people, rather than as a theory of ethics that can and should be picked apart. I admit we in effective altruism don’t tackle this issue well. This could be because the opinion that utilitarianism is overriding what could be the dynamic rationality of effective altruism is a minority one. I’m not confident I and like-minded others can change that for the better.
Evan—I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.
drethelin—What would be an example of a better alternative?
I don’t think anyone really CAN reliably consider any but the crudest higher-order effects, like population size...