Focusing on helping the worst-off instead of empowering the best
I feel like this is getting at some directionally correct stuff, but it feels off.
EA was the union of a few disparate groups, roughly encapsulated by:
Peter Singer / Giving What We Can folk
Early GiveWell
Early LessWrong
There are other specific subgroups. But my guess is that early GiveWell was the most load-bearing in there ending up being an “EA” identity, in that there were concrete recommendations of what to do with the money that stood up to some scrutiny. Otherwise it’d have just been more “A”, or “weird transhumanists with weird goals.”
GiveWell started out with a wider variety of cause areas, including education in America. It just turned out that it seemed way more obviously cost-effective to do specific global health interventions than to try to fix education in America. (I realize education in America isn’t particularly “empowering the best”, but the flow towards “helping the worst off” seems to me like it wasn’t actually the initial focus.)
I agree some memeplex accreted around that, which had some of the properties you describe.
But meanwhile:
EA started off with global health and ended up pivoting hard to AI safety, AI governance, etc.
It seems off to say “started off in global health, and pivoted to AI”, when all the AI stuff was there from the beginning at the very first pre-EA-Global events, and it just eventually became clear that it was real and important. The worldview that generated AI was not (exactly) the same one generating global health; they were just two clusters of worldview that were in conversation with each other from the beginning.
I agree with all the facts cited here, but I think it still understates the way that there was an intentional pivot.
The EA brand to the broader world emphasized earning to give and effective global poverty charities in particular. That’s what most people who had heard of it associated with “effective altruism”. And most of the people who got involved before 2019 got involved with an EA bearing that brand.
I guess that in 2015, the average EAG-goer was mostly interested in GiveWell-style effective charities, and gave a bit of deference to the more speculative x-risk stuff (because smart EAs seemed to take it seriously), but mostly didn’t focus on it very much.
And while it’s true that AI risk was part of the discussion from the very beginning, there were explicit top-down pushes from the leadership to prioritize it and to give it more credibility.
(And more than that, I’m told that at least some of the leadership had the explicit strategy of building credibility and reputation with GiveWell-like stuff, and boosting the reputation of AI risk by association.)
Yep, agree with all that. (I stand by my comment as mostly arguing directionally against Richard’s summary, but it seems fine to also argue directionally against mine.)
The worldview that generated AI was not (exactly) the same one generating global health; they were just two clusters of worldview that were in conversation with each other from the beginning.
Yes, I agree; my point is that people with the global health worldview ended up being convinced of a bunch of the high-level conclusions of the rationalist worldview, but without updating much away from the generators of the global health worldview.
The terminology is a little tricky here because they’re so entangled but I think it’s reasonable to talk about “EA” as a cluster as opposed to “rationalism” as a cluster even though a lot of people are in both.
E.g. if AI weren’t a big deal then rationalists would probably be doing cryonics or solving aging or something. Whereas if EAs weren’t into AI they’d probably be doing global health, factory farming, etc.
E.g. if AI weren’t a big deal then rationalists would probably be doing cryonics or solving aging or something
Strong disagree. We could have done those things, but the rationality movement didn’t have enough motive force or coordination capacity to do much beyond AI safety.
Yes, because it funneled all of its best and brightest into AI safety?
We might be evaluating the hypothetical at different points. I’m thinking of the movement coalescing around the Sequences, except the message underlying the Sequences is “you should solve ageing” rather than “you should solve alignment”.
Maybe I’m missing something. Why are you comparing to that hypothetical world?

Richard is saying that in the hypothetical world in which AGI was proven to be impossible or something of that nature, the cluster of people who can be referred to as belonging to the rationalist—EA[1] set would be trying to solve aging and perfect cryonics, whereas the cluster of people in the EA—rationalist set would be into global health and ending factory farming.
You had a (critique?) of rationalists in that they didn’t have motive force or coordination capacity to do much beyond AI safety, but Richard is saying that’s because AI safety took all the talent of the rationalist movement. If AI never existed, obviously, those rationalists would be doing something else.
Maybe you could try to attack the hypothetical from a counterfactual angle? That the people in a hypothetical AI-less world wouldn’t have even coalesced around anything without AI safety, so there wouldn’t even be an organized community around cryonics and aging? Or that, even in our current world, rationalists should have gone into cryonics and aging even with AI looming over our heads?
I think the idea that rationalists in an AI-less counterfactual world would have gone into cryonics and aging is not at all disproven by showing that rationalists in an AI world have not revolutionized cryonics and anti-aging. That doesn’t grok to me at all; I agree with Richard here.
There are likely not that many people in the pure rationalist—EA set, but I’m referring to dispositions and norms here: the set of self-identified rationalists who are further away from EA.
To add to your point, Jacy Reese Anthis in Some Early History of Effective Altruism wrote:

In general, EA emerged as the convergence, from 2008 to 2012, of at least 4 distinct but overlapping proto-EA communities, in order of founding:
The Singularity Institute (now known as the Machine Intelligence Research Institute; MIRI) and the “rationalist” discussion forum LessWrong, founded by Eliezer Yudkowsky and others in 2000 and 2009
GiveWell, founded by Holden Karnofsky and Elie Hassenfeld in 2007, and Good Ventures, founded by Dustin Moskovitz and Cari Tuna in 2011, which partnered together in 2014 as GiveWell Labs (now Open Philanthropy)
Felicifia, created by Seth Baum, Ryan Carey, and Sasha Cooper in 2008 as a utilitarianism discussion forum, which is how I got involved as discussed above; these discussions largely moved to other venues such as Facebook in 2012, and Felicifia is no longer active.
Giving What We Can (2009) and 80,000 Hours (2011), founded by Will MacAskill and Toby Ord, philosophers at the University of Oxford, and the umbrella organization Centre for Effective Altruism; Will has written about the early history of EA on the TLYCS blog and the history of the term on the Effective Altruism Forum.
As the EA flag was being planted, there were many effectiveness-focused altruists who came out of the woodwork but did not have formal involvement with one of these 4 groups, especially people inspired by the famous philosopher and utilitarian Peter Singer, particularly his essay “Famine, Affluence, and Morality” (1972) and book Animal Liberation (1975). Many were also involved in the evidence-based “randomista” movement in economic development, emphasizing evidence-based strategies to help the world’s poorest people, including academic research on this topic since the 1990s, especially IPA (2002) and JPAL (2003). Additionally, there were other email lists and community forums related to EA such as SL4 on the possibility of a technological singularity, as well as personal blogs, such as Brian Tomasik’s. Some were inspired by famous altruists such as Zell Kravinsky. I met many people in the early days of EA who said they had been thinking along EA lines for years and were so thrilled to find a community centered on this mindset. This is less common in 2022 because the movement is so visible and established that people run across it quickly once they start thinking in these ways.
On the history of the term “effective altruism”, Will MacAskill in 2014 dug through old emails and came up with the following stylised summary:

The need to decide upon a name came from two sources:
First, the Giving What We Can (GWWC) community was growing. 80,000 Hours (80k) had soft-launched in February 2011, moving the focus in Oxford away from just charity and onto ethical life-optimisation more generally. There was also a growing realization among the GWWC and 80k Directors that the best thing for us each to be doing was to encourage more people to use their life to do good as effectively as possible (which is now usually called ‘movement-building’).
Second, GWWC and 80k were planning to incorporate as a charity under an ‘umbrella’ name, so that we could take paid staff (decided approx. Aug 2011; I was Managing Director of GWWC at the time and was pushing for this, with Michelle Hutchinson and Holly Morgan as the first planned staff members). So we needed a name for that umbrella organization (the working title was ‘High Impact Alliance’). We were also just starting to realize the importance of good marketing, and therefore willing to put more time into things like choice of name.
At the time, there were a host of related terms: on 12 March 2012 Jeff Kaufman posted on this, listing ‘smart giving’, ‘efficient charity’, and ‘optimal philanthropy’, among others. Most of these terms referred to charity specifically. The one term that was commonly used to refer to people who were trying to use their lives to do good effectively was the tongue-in-cheek ‘super-hardcore do-gooder’. It was pretty clear we needed a new name! I summarized this in an email to the 80k team (then the ‘High Impact Careers’ team) on 13 October 2011:
We need a name for “someone who pursues a high impact lifestyle”. This has been such an obstacle in the utilitarianesque community - ‘do-gooder’ is the current term, and it sucks.
What happened, then, is that there was a period of brainstorming—combining different terms like ‘effective’, ‘efficient’, and ‘rational’ with ‘altruism’, ‘benevolence’, and ‘charity’. Then the Directors of GWWC and 80k decided, in November 2011, to aggregate everyone’s views and make a final decision by vote. This vote would decide both the name for the type of person we wanted to refer to, and the name of the organization we were setting up. …
And then the vote came down to this shortlist (emphasis mine):
Rational Altruist Community RAC
Effective Utilitarian Community EUC
Evidence-based Charity Association ECA
Alliance for Rational Compassion ARC
Evidence-based Philanthropy Association EPA
High Impact Alliance HIA
Association for Evidence-Based Altruism AEA
Optimal Altruism Network OAN
High Impact Altruist Network HIAN
Rational Altruist Network RAN
Association of Optimal Altruists AON
Centre for Effective Altruism CEA
Centre for Rational Altruism CRA
Big Visions Network BVN
Optimal Altruists Forum OAF
… In the vote, CEA won, by quite a clear margin. Different people had been pushing for different names. I remember that Michelle preferred “Rational Altruism”, the Leverage folks preferred “Strategic Altruism”, and I was pushing for “Effective Altruism”. But no-one had terribly strong views, so everyone was happy to go with the name we voted on. …
We hadn’t planned ‘effective altruism’ to take off in the way that it did. ‘Centre for Effective Altruism’ was intended not to have a public presence at all, and just be a legal entity. I had thought that effective altruism was too abstract an idea for it to really catch on, and had a disagreement with Mark Lee and Geoff Anders about this. Time proved them correct on that point!
So predictably you have folks arguing e.g. “Effective altruism is no longer the right name for the movement” and so on.
Interesting, I’d never explicitly considered that Peter Singer (you should expand your moral circle and do as much good as you can) and GiveWell (given that you want to do good, how to do it?) started as totally different memeplexes and only merged later on. It makes sense in retrospect.