Slightly hot take: Longtermist capacity/community building is pretty underdone at current margins, and retreats (focused on AI safety, longtermism, or EA) are also underinvested in.

By “longtermist community building”, I mean community building focused on longtermism rather than on AI safety. I think retreats are generally underinvested in at the moment.

I’m also sympathetic to thinking that general undergrad and high school capacity building (AI safety, longtermist, or EA) is underdone, but this seems less clear-cut.
I think this underinvestment is due to a mix of mistakes on the part of Open Philanthropy (and Good Ventures)[1] and capacity building being lower status than it should be.
Here are some reasons why I think this work is good:
It’s very useful for there to be people who are actually trying really hard to do the right thing, and such people often come through these sorts of mechanisms. Another way to put this is that flexible, impact-obsessed people are very useful.
Retreats make things feel much more real to people and result in people being more agentic and approaching their choices more effectively.
Programs like MATS are good, but they get somewhat different people at a somewhat different part of the funnel, so they don’t (fully) substitute.
A large part of why I’m writing this is to try to make this work higher status and to encourage more of this work. Consider yourself to be encouraged and/or thanked if you’re working in this space or planning to work in this space.
[1] I think these mistakes are: underfunding this work, Good Ventures being unwilling to fund some versions of this work, failing to encourage people to found useful orgs in this space, and hiring out many of the best people in this space to instead do (IMO less impactful) grantmaking.
If someone wants to give Lightcone money for this, we could probably fill a bunch of this gap. No definitive promises (and happy to talk to any donor for whom this would be cruxy about what we would be up for doing and what we aren’t), but we IMO have a pretty good track record of work in the space, and of course having Lighthaven helps. Also if someone else wants to do work in the space and run stuff at Lighthaven, happy to help in various ways.
I’d be interested to hear what kind of things you’d want to do with funding; this does seem like a potentially good use of funds.
I think the Sanity & Survival Summit that we ran in 2022 would be an obvious pointer to something I would like to run more of (I would want to change some things about the framing of the event, but I overall think that was pretty good).
Another thing I’ve been thinking about is a retreat on something like “high-integrity AI x-risk comms” where people who care a lot about x-risk and care a lot about communicating it accurately to a broader audience can talk to each other (we almost ran something like this in early 2023). Think Kelsey, Palisade, Scott Alexander, some people from Redwood, some of the MIRI people working on this, maybe some people from the labs. Not sure how well it would work, but it’s one of the things I would most like to attend (and to what degree that’s a shared desire would come out quickly in user interviews).
Though my general sense is that it’s a mistake to try to orient things like this too much around a specific agenda. You mostly want to leave it up to the attendees to figure out what they want to talk to each other about, do a bunch of surveying and scoping of who people want to talk with more, and then just facilitate a space and a basic framework for those conversations and meetings to happen.
I think this is a great idea that would serve an urgent need. I’d urge you to do it in the near future.
Agree with both the OP and Habryka’s pitch. The Meetup Organizers Retreat hosted at Lighthaven in 2022 was a huge inflection point for my personal involvement with the community.
Strongly agreed on this point; it’s pretty hard to substitute for the effect of being immersed in a social environment like that.
Why longtermist, as opposed to AI safety?
I think there are some really big advantages to having people who are motivated by longtermism and doing good in a scope-sensitive way, rather than just by trying to prevent AI takeover or, even more broadly, to “help with AI safety”.
AI safety field building has been popular in part because there is a very broad set of perspectives from which it makes sense to worry about technical problems related to societal risks from powerful AI. (See e.g. Simplify EA Pitches to “Holy Shit, X-Risk”.) This kind of field building gets you lots of people who are worried about AI takeover risk or, more broadly, about problems related to powerful AI. But it doesn’t get you people who have a lot of other parts of the EA/longtermist worldview, like:
Being scope-sensitive
Being altruistic/cosmopolitan
Being concerned about the moral patienthood of a wide variety of different minds
Being interested in philosophical questions about acausal trade
People who do not have the longtermist worldview and who work on AI safety are useful allies and I’m grateful to have them, but they have some extreme disadvantages compared to people who are on board with more parts of my worldview. And I think it would be pretty sad to have the proportion of people working on AI safety who have the longtermist perspective decline further.
It feels weird to me to treat longtermism as an ingroup/outgroup divider. I guess I think of myself as not really EA/longtermist. I mostly care about the medium-term glorious transhumanist future. I don’t really base my actions on the core longtermist axiom; I only care about the unimaginably vast number of future moral patients indirectly, through caring about humanity being able to make and implement good moral decisions a hundred years from now.
The main thing I look at to determine whether someone is value-aligned with me is whether they care about making the future go well (in a vaguely ambitious, transhumanist-coded way), as opposed to personal wealth or degrowth or whatever.
Yeah, maybe I’m using the wrong word here. I do think there is a really important difference between people who are scope-sensitively altruistically motivated and who are in principle willing to make decisions based on abstract reasoning about the future (which I probably include you in), and people who aren’t.
I have the impression that neither “short”- nor “medium”-termist EAs (insofar as those are the labels they use for themselves) care much about 100 years from now, with ~30-50 years being the horizon the typical “medium”-termist EA seems to care about. So if you care about 100 years from now and take “weird” ideas seriously, I think at least I would consider that longtermist. But it has been a while since I’ve consistently read the EA Forum.
I think the general category of AI safety capacity building isn’t underdone (there’s quite a lot of it), while I think work aimed more directly at longtermism (and AI futurism, etc.) is underdone. Mixing the two is reasonable, to be clear, and some of the best stuff focuses on AI safety while mixing in longtermism/futurism/etc. But lots of the AI safety capacity building is pretty narrow in practice.
While I think the general category of AI safety capacity building isn’t underdone, I do think that (AI safety) retreats in particular are underinvested in.