It seems off to say EA “started off in global health, and pivoted to AI”, when all the AI stuff was there from the beginning, at the very first pre-EA-Global events, and it just eventually became clear that it was real and important. The worldview that generated the AI focus was not (exactly) the same one generating the global health focus; they were two clusters of worldview that were in conversation with each other from the beginning.
I agree with all the facts cited here, but I think it still understates the way that there was an intentional pivot.
The EA brand presented to the broader world emphasized earning to give and effective global poverty charities in particular. That’s what most people who had heard of it associated with “effective altruism”, and most of the people who got involved before 2019 got involved with an EA bearing that brand.
My guess is that in 2015, the average EAG-goer was mostly interested in GiveWell-style effective charities, and gave a bit of deference to the more speculative x-risk stuff (because smart EAs seemed to take it seriously), but mostly didn’t focus on it very much.
And while it’s true that AI risk was part of the discussion from the very beginning, there were explicit top-down pushes from the leadership to prioritize it and to lend it more credibility.
(And more than that, I’m told that at least some of the leadership had the explicit strategy of building credibility and reputation with GiveWell-like stuff, and boosting the reputation of AI risk by association.)
Yep, agree with all that. (I stand by my comment as mostly arguing directionally against Richard’s summary, but it seems fine to also argue directionally against mine.)