Context: Post #4 in my sequence of private Lightcone Infrastructure memos edited for public consumption
This principle is more about how I want people at Lightcone to relate to community governance than it is about our internal team culture.
As part of our jobs at Lightcone, we are often in charge of determining access to some resource, or membership in some group (ranging from LessWrong to the AI Alignment Forum to the Lightcone Offices). Through that, I have learned that one of the most important things to do when building things like this is to tell people as early as possible if you think they are not a good fit for the community, both for the sake of trust within the group and for the integrity and success of the group itself.
E.g. when you spot a LessWrong commenter who seems clearly not on track to ever become a good long-term contributor, or someone in the Lightcone Slack who clearly seems like a poor fit, you should aim to off-ramp them as soon as possible, and generally put marginal resources into finding out whether someone is a good long-term fit early, before they invest substantially in the group.
There are two related reasons that push towards this principle:
First, and this is the less important reason, it usually benefits the person you are asking to leave. They likely have a substantial opportunity cost, and little is achieved by stringing them along in a situation you know is unsustainable. People form relationships within communities and groups like ours, and if you know you will tear them apart, better to do so early.
Second, and this is unfortunately the more important reason: it is much easier to destroy than to create, and most intense enemies seem to be former members who invested too much before they were eventually asked to leave or cut off in some other way. As far as I can tell, it requires a certain kind of intense experience to make someone want to invest many hundreds of hours into destroying something, and “feeling betrayed after having made some place their home” is near the top of that list.
In at least the rationality community’s history, most of the people who I think have tried hardest to destroy it have been former members who invested quite a lot:
David Gerard
Eugine Nier
Émile Torres
Ziz (and most of the Zizians)
Two cases that are a bit less clear, but still show some structural similarities: Peter Thiel, Sam Altman.
A case of someone who does seem to hate AI safety people and EAs, and probably also rationalists, but who as far as I know was never meaningfully part of any of these communities: David Sacks
So, when you are in charge of some kind of group boundary, do the following:
Inasmuch as possible, set up the group to evaluate early whether prospective long-term members are a good fit
If you know someone isn’t going to work out, try to pay the cost to kick them out early instead of late
Try to prevent “slums” forming where people who don’t meet your group’s standard congregate (this generally gets more likely the later you kick out people)
Tell people as early as possible it’s not going to work out