Btw, “you” was “general you”, not you personally, and mine was trying to piggyback. Post edited to clarify.
Journeyman
This post has some faults, but it correctly points out the narrowness of current EA thinking.
The problem with effective altruism is that it depends on values, and values are hard. Values are also notoriously gameable by politics. Currently, EA is Afrocentric and only effective for a very narrow value system.
EA is focused on saving the maximum number of lives in the present, or giving directly to the poorest areas. This approach is beneficial for those people, but it’s not clear that it has a large impact on the future of humanity. It also seems very near-mode.
GiveWell claims that there are flow-through effects of charity, such as greater economic development, but these are underspecified.
Science, technology, medicine, and economic development have had a large positive impact on humanity. The style of EA that appeals to me would focus on promoting those things. Existential risk reduction also has appeal. Current EA claims to benefit economic development, but it’s not clear that it’s the best way to do that. And current EA seems weak for promoting science, technology, and medicine.
Most scientific, medical, and technological advances have come from the West (and Asia). If we want to see more of those advances, then shouldn’t we be investing capital in the places with a historical track record of accomplishment?
If you are approaching EA with the attitude of an investor in the future of humanity, then you must also consider national differences in IQ and the correlation of intelligence with per capita income. An investor with a blank slate attitude will be sorely disappointed, because many areas will likely hit a wall in accomplishment.
The current EA approach seems to focus on aid over investment. From a redistributive standpoint, helping the most needy makes sense. Yet from an investment standpoint, helping the most productive makes more sense, even if the bang for your buck is less. Since the most productive people are typically less needy, these two approaches come to diametrically opposite conclusions. There is also a potential conflict between X-risk reduction and technological progress. This underscores how values are hard, and the tensions between different potential value systems in EA.
Yet perhaps there is a way to reconcile the aid and investment approaches: find a place in the world that has poverty or other problems but is high in human capital, and invest there. Is there really no such place in the world like this?
EA’s current research seems focused on need-based, accomplishment-blind aid, but this only satisfies a narrow range of the values that EA could represent. It is curious that all major recommended EA interventions seem politically appealing, and there have been no major EA interventions proposed (to my knowledge) that are politically incorrect. Yes, EA has recommended avoiding certain popular interventions, but only in order to get better results within the same progressive value system.
We live in a very convenient world if helping humanity involves doing things that just happen to make people look good in Bay Area parties and the media in 2015. I am concerned that there is a file drawer effect for potential EA approaches that are politically awkward.
Finally someone else who is thinking like an investor. See my longer comment below for more along this line of thought.
The other advantage of investing is that you have a degree of self-insurance against adverse events. This will help you and your family avoid falling on social safety nets (which could be seen as “negative EA”). Typically EA starts by thinking about foreign countries, but perhaps EA should start at home and move outward.
Additionally, investing and waiting helps deal with the problem of values. Right now, EA suffers from a lack of good moral arguments for what to do with money. The current dominant approaches depend on very narrow and politicized moral assumptions. Waiting will allow more time for better arguments to emerge and to see which direction the world is going.
What sort of thing would you consider “good moral arguments”? What makes something “politicized”?
All moral arguments are either politicized or have the potential to be.
My impression is that EA assumes a utilitarian framework which weights people the same and operates mostly in near-mode. EA towards the third world has never been shown to be morally superior to advancing science, medicine, technology, X-risk reduction, or investing the money until better opportunities emerge.
Better moral arguments would involve taking a broader look at the future of humanity, and the current geopolitical and civilizational state of the world. EA does pay some attention to existential risks, but there is insufficient attention to risks that fall short of existential. Think of all the risks an individual or society faces over a decade. Even if each individual risk has a low probability, the probability that at least one bad event happens over that decade can be substantial.
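The compounding of small risks is easy to check numerically. A minimal sketch, assuming independent risks with illustrative (made-up) per-decade probabilities:

```python
# Probability that at least one of several independent risks materializes
# over a decade: 1 - prod(1 - p_i). The probabilities are hypothetical.
risks = [0.01, 0.02, 0.03, 0.05]  # illustrative per-decade risk estimates

p_none = 1.0
for p in risks:
    p_none *= 1 - p  # chance that none of the risks occurs
p_at_least_one = 1 - p_none

print(round(p_at_least_one, 3))  # ~0.106: individually small risks add up
```

Even four risks of a few percent each yield roughly a 1-in-10 chance that something goes wrong over the decade.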
My comment about Bay Area parties is suggesting that if EA’s conclusions are all politically palatable, then this could be due to hidden political assumptions, or thinking in a bounded way.
It’s certainly plausible that people are biased in favor of keeping their money in their pockets. But perhaps their pockets should be the default place to keep their money until compelling reasons appear to part with it.
My intuition is that if you want to see more good stuff happen, then maybe we should be giving some resources to the kinds of people who have made good stuff happen historically, and make sure we are getting a return on investment. I do not think all these people are located in the Bay Area, and my previous post does suggest trying to find poor people who are likely to be highly productive.
I think most of this discussion just boils down to a difference of values. You suggest that donating to the world’s poorest people seems like the way to increase net utility, but this depends on a utility function and moral framework that I am questioning. I have alluded to at least two objections: this outlook seems too near-mode, and it assumes that people should be weighted the same. I agree with you that getting into a deeper discussion of values would not be fruitful.
Your model is interesting, but it still looks like it weights utility of different people the same, and it doesn’t take into account resulting incentives and externalities.
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility, weakly positive utility, or weakly negative utility. If so, then investing in people who are productive at least does something with your money.
Or my suspicions could be wrong, and there could be flow-through effects that I would find compelling. If I had a comprehensive and strong alternative EA approach and clearly superior value system, then I could be more explicit.
I do want to clarify that I don’t consider investing in the stock market to be EA, at least, not very strong EA. I see the stock market more as a way to grow money so that you can do EA later.
Another piece of the rationalist diaspora is neoreaction. They left LW because it wasn’t a good place for talking about anything politically incorrect, an ever-expanding set. LW’s “politics is the mindkiller” attitude was good for social cohesion, but bad for epistemic rationality, because so many of our priors are corrupted by politics and yesterday’s equivalent of social justice warriors.
Neoreaction is free of political correctness and progressive moral signaling, and it takes into account history and historical beliefs when forming priors about the world. This approach allows all sorts of uncomfortable and repulsive ideas, but it also results in intellectual progress along novel lines of thought.
Neoreactionary thought varies in quality and rigor, but the current leadership contains rationalists now, and they have recognized the need to provide more rigorous arguments. I predict that more and more rationalists will explore neoreaction once they get over their absurdity heuristic and realize what it actually is.
I think many people would have loved to see a response by Moldbug, and found his response disappointing. My guess is that Moldbug felt that his writings already answered a lot of Scott’s objections, or that Scott’s approach wasn’t fair. And Moldbug isn’t the same thing as neoreaction; there were other responses by neoreactionaries to Scott’s FAQ.
The FAQ nails neoreaction on a lot of object-related issues, and it has some good philosophical objections. But it doesn’t do a good job of showing the object-related issues that neoreaction got right, and it doesn’t quite do justice to some ideas, like The Cathedral and demotism. And the North Korea stuff has really easy-to-anticipate objections from neoreactionaries (like the fact that it was led by communists).
The FAQ answers the question “what are a bunch of objections to neoreaction?”, but it doesn’t answer the question “how good a philosophy is neoreaction?” because it only makes a small dent. If you consider the FAQ in conjunction with Neoreactionary Philosophy in an Enormous, Planet-sized Nutshell, then you would get a better sense of the big picture of neoreaction, but Scott doesn’t really integrate his arguments across the two essays, which creates an unfortunately misleading impression.
The FAQ put me off getting into neoreaction for a while, but when I did, I was much more impressed than I expected. The only way to get a good sense of what it actually is would be spending a lot of time with it.
Gay historian Rictor Norton vehemently disagrees with the notion that gay identities are recent. Here is his basic position:
Gay identities have existed for a long time, not just recognition of gay behaviors
Recent conceptions of homosexuality are politicized, but this does not mean that concepts of homosexuality are new
The politicization of modern gay politics, combined with poor record-keeping and past suppression, erases the history of gay identities and cultures.
He takes a position against social constructionism:
It is very easy for historians to establish that most of the sexual categories which are supposed to have arisen under modern capitalism in fact existed much earlier. …
One of the reasons why many contemporary lesbian and gay theorists fail to appreciate that homosexuals existed before 1869 is the politically correct view that terms such as ‘queer’ and ‘faggot’ and ‘queen’ are not nice, and especially since the late 1960s people have endeavoured to use the phrase ‘gay and lesbian’ wherever possible. There are some men who lived before 1869 whom I would feel uneasy at calling ‘gay’ or ‘homophile’, but I would not hesitate to call them queer or even silly old queens. Many of the mollies of the early eighteenth century were undoubtedly queens, whose interests and behaviour are virtually indistinguishable from queens I have known in the early 1960s (and later). …
‘Queer’ was the word of preference for homosexuals as well as homophobes for the first half of the twentieth century, and of course is being reclaimed today in defiant rather than defensive postures. In English during the eighteenth and most of the nineteenth century the words of preference were ‘molly’ and ‘sapphist’, for which good modern equivalents are ‘queer’ and ‘dyke’. During the seventeenth century and earlier the commonest terms were ‘Sodomite’ and ‘tribade’, for which, again, good modern equivalents are ‘queer’ and ‘dyke’. In ancient and indigenous and premodern cultures there were many terms for which good modern equivalents are ‘queer’ and ‘tomboy’. And the nearest modern equivalent for the nineteenth-century term ‘homosexual’ is: queer. …
I add my voice to the widespread dissatisfaction with social constructionist thought, that seems to have been based on nothing and to have led nowhere in the past twenty years. Its initial premises have been constantly reinforced by restatement and incestuous quotation amongst constructionist colleagues rather than supported by scholarly research.
To see more, check out these excerpts from The Myth of the Modern Homosexual.
That’s Foucault’s theory, but Rictor Norton’s book I linked to convincingly debunks Foucault as ideological and ahistorical. Quoting an excerpt, here are historical cases of unmarried men going for each other instead of marriage and children:
In between these two extremes of lust and idealism we find a sense of identity based upon ordinary and unremarkable same-sex love. The records of the Inquisition in Spain, Portugal and Brazil; the police archives of early eighteenth-century Paris; the records of the Officers of the Night of sixteenth-century Venice – all clearly document a preponderance of men who were bachelors and who preferred their own sex. Statistical analysis of the particularly full and detailed Florentine records of the marital status of the men incriminated for sodomy from 1478 to 1483 reveals that fully three-fourths of all such men aged nineteen to seventy were unmarried.
These guys sound like they are exclusive, obligate homosexuals.
As for identity, just because the historical labels for queer people were negative, it does not mean that those terms were just externally-imposed slurs, and that homosexual identities did not exist:
In Foucault’s famous statement: ‘Homosexuality appeared as one of the forms of sexuality when it was transposed from the practice of sodomy onto a kind of superior androgyny, a hermaphroditism of the soul. The sodomite had been a temporary aberration; the homosexual was now a species.’ He ludicrously dates this shift to 1870. But the men discussed in the preceding paragraph had a sense of themselves that transcended both ‘the practice of sodomy’ and ‘temporary aberration’. In fact Dutch sodomites in 1734 were described by contemporaries as ‘hermaphrodites in their minds’ (Boon 1989) – an exact match for Foucault’s ‘hermaphroditism of the soul’. The concepts of masculine homosexual women and effeminate homosexual men dominated the premodern world. The homosexual was considered an androgynous species in Aristophanes, in Juvenal, in all the ancient literature about the transgendered priests of Cybele in the ancient and classical world. It was not a modern construct.
The truth is that a homosexual category existed many centuries prior to the nineteenth century. There are literally scores of fifteenth-century Italian authors who portray homosexual characters rather than homosexual incidents (G. Dall’Orto, ‘Italian Renaissance’, EH), and it is a nonsense to label such sodomites ‘temporary aberrations’ rather than members of a species. In real life there is the famous example of self-labelling, the painter Antonio Bazzi (1477–1549) who was proud of his nickname ‘Il Sodoma’. According to his contemporary Vasari ‘he did not take [it] with annoyance or disdain, but rather gloried in it, making jingles and verses on the subject, which he pleasantly sang to the accompaniment of the lute’.
Rictor Norton is a widely published queer historian, his research spans centuries, and it seems very solid. I think we should go with his account and toss Foucault’s social constructionism.
While both the left and the right have their own forms of ideological conformity, the term “political correctness” is associated with left ideological conformity. There is a reason that ideological purges and struggle sessions throughout history are associated with the left. I realize that “political correctness” is a loaded term, but I agree with its connotations and I’m not interested in feigning neutrality.
As for Scott, I cannot comment on that particular case, but his being a leader of NRx wouldn’t make sense anyway because he isn’t right-leaning enough.
I liked your description of certain unconventional schools of thought as “tough-minded” and “creative.” Tough-minded, creative thought processes will often involve concepts and metaphors that make people uncomfortable, including the people who think them up.
Sometimes, understanding the behavior of large groups of people involves concepts or metaphors that would be unhealthy to apply at the individual level. For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner’s Dilemma. This does not mean that you need to think about other people as “prisoners,” or think about your interactions with them as a “game” or as a “dilemma.”
I think you probably do have a lot of differences in values from people who are “red-pillers, manosphericals, conservatives, reactionaries, libertarians,” but I think this case is really just about inferential distance on the object-level. Although “sexual access” has potentially problematic connotations, it actually accurately describes situations where some people’s dating challenges are so great that they are effectively excluded. I apologize for the length this post will be, but I want to drop down to the object-level for a while to give you sufficient evidence to chew on:
Demographics: sex ratio and operational sex ratio have a gigantic influence on society. Exhibit A: China has a surplus of men. Exhibit B: The shortage of black men due to imprisonment turns dating upside-down in the black community and causes black women to compete fiercely for black men. Exhibit C: In virtually all US cities (not just the West Coast), there are more single men than women below age 35 (scroll down for the age breakdown or use the sliders). Young men face a level of competition that young women do not.
If something like 120 men are competing for 100 women, and the system is monogamous, then 20 of those men are going to be excluded from marriage. Yes, in some sense, all 120 have an “opportunity,” but we know that under monogamy, 20 of them will be left out in the cold. And under a poly system, the results will be even worse, because humans are more polygynous than polyandrous. When low-status men are guaranteed to lose out in dating and marriage due to an unfavorable sex ratio, then that starts looking like a lack of “access.”
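The arithmetic here can be sketched quickly. A minimal illustration using the 120:100 ratio from above; the polygyny numbers (10 men taking 2 wives each) are hypothetical:

```python
# Under strict monogamy, the number of men excluded from marriage is
# simply the surplus of men over women.
men, women = 120, 100
excluded_monogamy = max(men - women, 0)
print(excluded_monogamy)  # 20 men left out

# Under polygyny, every extra wife a high-status man takes removes one
# more woman from the pool, excluding one more low-status man.
# Hypothetical scenario: 10 men each take 2 wives.
polygynists, wives_each = 10, 2
women_remaining = women - polygynists * wives_each  # 80 women left
men_remaining = men - polygynists                   # 110 men still seeking
excluded_polygyny = max(men_remaining - women_remaining, 0)
print(excluded_polygyny)  # 30 men left out: worse than under monogamy
```

The point of the sketch is that any surplus of wives concentrated on high-status men translates one-for-one into additional excluded low-status men.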
Let’s talk about polygyny a bit more. A recent article defended gay marriage from the charge of opening up the door to polygamy:
Here’s the problem with it: when a high-status man takes two wives (and one man taking many wives, or polygyny, is almost invariably the real-world pattern), a lower-status man gets no wife. If the high-status man takes three wives, two lower-status men get no wives. And so on.
This competitive, zero-sum dynamic sets off a competition among high-status men to hoard marriage opportunities, which leaves lower-status men out in the cold. Those men, denied access to life’s most stabilizing and civilizing institution, are unfairly disadvantaged and often turn to behaviors like crime and violence. The situation is not good for women, either, because it places them in competition with other wives and can reduce them all to satellites of the man.
I’m not just making this up. There’s an extensive literature on polygamy.
And there’s that word again: “access.” The notion of men being shut out of dating under polygynous mating appears in an entirely mainstream and liberal source. There are also concepts like “high-status” and “low-status” males, which feminists would often object to in other contexts.
Cultural forces: the quality of information about dating for introverted men is so poor that it is actively damaging and has the effect of excluding them from dating. There is also a decline in socialization and institutions around dating. For evidence, it is sufficient to look at the existence of the PUA community. Look at hookup culture on college campuses. In a healthy society, with healthy socialization and a monogamous mating system, we wouldn’t even be having this conversation because many of the same men in the manosphere or PUA community would be too busy hanging out with their girlfriends or wives to be complaining on the internet.
Legal and economic forces: In some Asian countries, women’s minimum expectations for husbands involve buying a house with multiple bedrooms, and only some men can economically afford that; the rest lack access to marriage because they lack the economic prerequisites. In many Western countries, if men get divorced, they can face such punishing child support and alimony burdens that they must move to a small apartment (or even end up in debtor’s prison if they can’t pay). These men face steep challenges in attracting future girlfriends and wives due to their economic dispossession.
As I’ve shown at the object level, there are large cultural, demographic, economic, and legal forces that influence how challenging dating is and how people behave. These problems are much larger than asshole men blaming women for not putting out. Lack of “sexual access” is an entirely reasonable way to describe what happens to men under a skewed operational sex ratio or polygyny, though I would be totally fine to try other terms instead. I realize the term isn’t perfect, and that some people who use it might have objectionable beliefs, but if we give into crimestop and guilt-by-association, then we would know a lot less about the world.
On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of “inventor” vibe, which incidentally is also something my instincts like.
So, basically, there are two groups of people with grievances. The ingroup is very good at impression management and public relations. The outgroup is bad at impression management, but your gut is telling you that they might be on to something. Yet you are suspicious of some of the outgroup’s arguments, because the ingroup says that the outgroup is just a bunch of “smart assholes,” and because the outgroup’s claims have problematic connotations in the ingroup’s moral framework.
I don’t think your reaction is unreasonable given your vantage point and level of inferential distance from the outgroup. But note that there is a strong incentive for the ingroup to set an incredibly high bar for the moral acceptability of the outgroup’s grievances, so it’s necessary to apply a healthy degree of skepticism to the ingroup’s moral arguments unless you have confirmed them independently.
In some cases, we will have to go to the object-level to discover which group is the “smart assholes” who are confabulating. Of course both groups will try to tar the others’ motives and reputations, but the seeming victor of that conflict will be the group with the best public relations skills, not necessarily the group with the more accurate views.
If your gut is telling you that there is potential truth in the outgroup’s arguments, then don’t let the ingroup’s moral framework shut down your investigation, especially when that investigation has implications for whether the ingroup’s moral framework is any good in the first place. Otherwise, you risk getting stuck in a closed loop of belief. I think the same argument applies to one’s own moral framework, also.
I think your “mental muscle” analogy is interesting: you are suggesting that exercising mental grievance or ressentiment is unhealthy for relationships, and is part of why red pill men have an “uphill battle.” You argue that love is incompatible with resentment. You also argue that certain terms “demonstrate” particular unhealthy and resentful mindsets, or lead to “objectification,” which is tantamount to not viewing others as people.
I share your concern that some red pill men have toxic attitudes towards women which hamper their relationships. I disagree that language like “sexual access” is sufficient to demonstrate resentment of women, and I explained other reasoning behind that language in my previous comment where I discussed operational sex ratio, polygyny, and other impersonal forces.
My other argument is that views of relationships operate at different levels of explanation. There are at least three levels: the macro level of society, the local level of your peers and dating pool, and the dyadic level of your interpersonal relationships. Why can’t someone believe that dating is a brutal, unfair, dog-eat-dog competition at the macro or local level, but once they succeed in getting into a relationship, they fall in love and believe in sacrifice, as you would want? It’s also possible to have a grievance towards a group of people, like bankers, but still respect your personal banker as a human being.
A metaphor that is useful for understanding the mating market at the societal or local level can be emotionally toxic if you apply it at the dyadic level. If you believe that the current mating market results in some men lacking sexual access at the macro level, that’s a totally correct and neutral description of what happens under a skewed operational sex ratio and polygyny. If you tell your partner “honey, you’ve been denying me sexual access for the past week,” then you’re being an asshole.
In the past, men and women held beliefs about gender roles and sex differences that would be considered scandalously sexist today. It seems implausible that our ancestors didn’t love each other. People are good at compartmentalizing and believing that their partner is special.
Your theory about concepts leading to resentment and resentment being a barrier to relationships could be true, but I think it’s much more likely that you have the causal relationship backwards: it’s mostly loneliness that causes resentment, not the other way around. For instance, in the case of a skewed operational sex ratio, some people are just going to end up single no matter how zen their attitudes are.
Even if there is a risk of alienation from understanding sex differences and sexual economics, I still think it’s better to try to build an epistemically accurate view of relationships, and then later make peace with any resentment that is a by-product of this understanding.
It seems like the only alternative is to try to mentally avoid any economic, anthropological, or gender-political insight into dating that might cause you to feel resentment: blinkering your epistemic rationality for the instrumentally rational goal of harmonious relationships.
There’s also a genuinely open question of how big sex differences are: if sex differences are smaller than I think, then I’m probably harming my relationships by being too cynical, but if they are larger than I think, then I’m naive and risk finding out the hard way. I really doubt that relationships are the one place where Litany of Tarski doesn’t apply.
It sounds like your current relationship attitudes are bringing you success in your relationship and that terms like “objectification” are more helpful to you than “sexual access.” That’s totally fine, but other people have different challenges and are coming from a different place, so I recommend suspending judgment about what mindsets their concepts entail and why they are single. If you believe that toxic attitudes towards women are correlated with their concepts, then that’s plausible, though it’s a different argument.
To go a bit more meta, I would argue that a lot of the resistance towards men developing inconvenient conclusions about sex ratio, polygyny, sex differences, etc… is not because these ideas are necessarily harmful to male-female relationships, but because they are harmful to feminist narratives about male privilege. It is morally reprehensible how feminists use their own grievance-based concepts of “objectification” to reject any macro-level analysis of male-female dynamics that might be unflattering towards women. It’s just far too convenient how sociological, economic, and anthropological arguments that would be acceptable in any other circumstance are dismissed as denying women’s humanity or personhood. I think you should be just as skeptical towards feminist grievance concepts as you are towards red pill grievance concepts.
Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer’s styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA’s axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA’s utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.
If EA is true, then moral philosophy is a solved problem. I don’t think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don’t know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.
If Western Civilization collapses, or is over-taken by China, then that will not be a good future for human welfare. Averting this possibility is way more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement by people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see much understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which leads them to sometimes act with negative consequences. A quote from the book in that review, which shows some of the difficulties of disentangling moral psychology from moral philosophy:
Despite the fact that a moral conviction feels like a deliberate rational conclusion to a particular line of reasoning, it is neither a conscious choice nor a thought process. Certainty and similar states of ‘knowing that we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. . . .
What feels like a conscious life-affirming moral choice—my life will have meaning if I help others—will be greatly influenced by the strength of an unconscious and involuntary mental sensation that tells me that this decision is “correct.” It will be this same feeling that will tell you the “rightness” of giving food to starving children in Somalia, doing every medical test imaginable on a clearly terminal patient, or bombing an Israeli school bus. It helps to see this feeling of knowing as analogous to other bodily sensations over which we have no direct control.
It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.
The other psychological bias of EAs is due to them getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea by trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification because their philosophy advocates spending money on strong moral claims, and being wrong about important things about the world will totally throw off their results.
My criticisms here don’t apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I’ve seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for either altruism or accomplishment are not distributed evenly: people vary in clannishness, charity, civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs.
Whether basic needs are met doesn’t explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding in the post I linked in my previous comment that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as the differences between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England.
Human groups evolved with geographical separation and selection pressures. For example, the clannishness source I linked shows how tons of different outcomes are related to whether groups are inside or outside the Hajnal Line of inbreeding. Different rates of inbreeding will result in different strengths of kin selection vs. reciprocal altruism. For example, here is the map of corruption with the Hajnal Line superimposed.
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war with Russia, and today I discovered the Global Catastrophic Risk Institute and the Global Priorities Project, which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is towards this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing “typical” or “median” EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches.
I disagree with you that I am stereotyping, but I think it’s good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.
That 80,000 Hours post doesn’t contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe that everyone should earn to give, only that they are very confident in their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.
A lot of the post seems to confuse complex strategic moves, like GiveWell’s choice to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
Obviously GiveWell cannot show that its interventions are the “most important thing.” But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It’s not like GiveWell is a purely informational website.
In the context of the larger EA movement, Peter Singer’s philosophy and EA pledges argue with incredible confidence that people should be giving. EA is extremely evangelical, and Singer’s philosophy is incredibly flawed and emotionally manipulative.
The problem is that none of the most common EA approaches have defeated the “null giving hypothesis” of spending your money on yourself, or saving it in an investment account and then giving the compounded amount to another cause in the future. If someone is already insisting on giving to charity, then GiveWell might redirect their money in a direction that is actually useful, but EA is also trying to get people involved who were not doing charity before, and its moral arguments and understanding of the world are just not strong enough to justify spending money on the most dominant charitable approaches.
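The "null giving hypothesis" of investing and giving later rests on simple compounding arithmetic. A minimal sketch, using a hypothetical 5% annual return and 20-year horizon (these figures are illustrative assumptions, not numbers from the comment):

```python
# Illustrative arithmetic for "give now" vs. "invest, then give later."
# The 5% return and 20-year horizon are hypothetical assumptions.

def future_value(principal: float, annual_return: float, years: int) -> float:
    """Compound a lump sum once per year for the given number of years."""
    return principal * (1 + annual_return) ** years

donation_now = 10_000.0
donation_later = future_value(donation_now, 0.05, 20)

print(f"Give now:   ${donation_now:,.2f}")
print(f"Give later: ${donation_later:,.2f}")  # roughly 2.65x the nominal amount
```

The comparison only favors waiting if the invested return outpaces the growth in marginal cost of doing good; the sketch shows the nominal side of that trade-off, not the moral side.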
“X is the most efficient birdfeeder on the market” is a different type of claim from “the best birdfeeder on the market is worth spending money on,” or “feeding birds is a moral imperative,” or “we should pledge to feed birds and evangelize other people to do so, too.” My impression is that EAs are getting these kinds of claims mixed up.
That would be another example of things which some EAs do, but which don’t yet seem to percolate through to the public-facing parts of the movement. For example, valuing other EAs due to flow-through contradicts Singer’s view, as far as I understand him:
Effective altruists do not discount suffering because it occurs far away or in another country or afflicts people of a different race or religion. They agree that the suffering of animals counts too and generally agree that we should not give less consideration to suffering just because the victim is not a member of our species.
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from ones own moral system.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people’s value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That’s a contradiction.
I just don’t think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now which EAs don’t know about, and which would cause them to update their approach if they knew about it and thought seriously about it.
Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
What is or isn’t controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously-descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists, would they actually do good, or would they say “collectivize faster, comrade?” How do we know we aren’t also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people’s knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don’t see most EAs or rationalists operating at this level (I’m certainly not: the more I learn, the more I realize I don’t know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
No need for you to address any particular political point I’m making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.
I’m glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who “pay it forward” (see Scott Aaronson’s eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.
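The eigenaltruism idea can be made concrete: in Aaronson-style "eigenmorality," each agent's score is proportional to how much it cooperates with other high-scoring agents, which is the leading eigenvector of a cooperation matrix. A toy sketch via power iteration, with a 3-agent matrix invented for illustration (not from Aaronson's essay):

```python
# Toy eigenmorality sketch: score agents by the leading eigenvector of a
# cooperation matrix, computed with power iteration. The matrix below is a
# made-up example: agents 0 and 1 cooperate strongly with each other, while
# agent 2 mostly defects.

def eigen_scores(coop, iterations=100):
    """Power iteration on a non-negative cooperation matrix (list of rows)."""
    n = len(coop)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        # Each agent's new score: cooperation weighted by partners' scores.
        new = [sum(coop[i][j] * scores[j] for j in range(n)) for i in range(n)]
        total = sum(new)
        scores = [x / total for x in new]  # normalize to sum to 1
    return scores

coop = [
    [0.0, 1.0, 0.2],
    [1.0, 0.0, 0.2],
    [0.1, 0.1, 0.0],
]
scores = eigen_scores(coop)
# The mutual cooperators (agents 0 and 1) end up ranked above the defector (2).
```

The self-referential definition ("cooperate with those who cooperate with cooperators...") is exactly what the eigenvector fixed point captures, the same trick PageRank uses for link authority.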
Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.
On open borders, economic analyses like Roodman’s are just too narrow. They do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections; it does a good job of summarizing some of the anti-open-borders arguments, but often fails to refute them, and this lack of refutation doesn’t translate into an update of their general stance on immigration.
If humans are interchangeable homo economicus then open borders would be an economic and perhaps moral imperative. If indeed human groups are significantly different, such as in crime rates, then it throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.
Some of early indicators are scary, like the Rotherham Scandal. There are reports of similar coverups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing rule of law are well documented in Europe: they are called “no-go zones” or “sensitive urban zones” (“no-go zone” is controversial because technically you can go there, but would you want to go to this zone, especially if you were Jewish?). Britain literally has Sharia Patrols harassing gay people and women.
These are just the tip of the iceberg of what is happening with current levels of immigration. Just imagine what happens with fully open borders. I really don’t think its advocates have grappled with this graph, and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don’t see that Europe would turn into South Africa mixed with Syria, and the US would turn into Brazil. And then who would send aid to Africa?
Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I’m not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it’s very significant for future human welfare.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
I’ll think about it. I think some of the sources I’ve cited start answering that question: finding people who are knowledgeable about the giant space of stuff that the media and academia is sweeping under the carpet for political reasons.
Note to all rationalists:
Politics has already slashed your tires.
Politics has already pwned your brain.
Politics has already smashed the Overton Window.
Politics has already kicked over your Schelling fence.
Politics has already planted weeds in your garden.
What are you going to do about it?