> So what I want to propose is that we define much more clearly what it takes to be taken seriously around here.
I mostly agree with your description of the problem, and I sympathise with your past self. However, I also think you understate the extent to which the EA and rationality communities are based around individual friendships. That makes things much messier than they might be in a corporation, and makes definitions like the one you propose much harder.
On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
> On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
This is what we’ve built with LessWrong Netherlands. We call it the Home Bayes and it’s a group of 15ish people with tight bonds and formal membership. It works like a charm.
> On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
I’m glad someone else had this idea.
Coming from my own startup, with plenty of talent around but so far not much funding, I think the problem isn’t initiative; it’s getting funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest-impact careers: the money is there, but given the CEA assumption that a random cause has zero expected value, grantmakers have to single out the good ones, and that’s happening so slowly that a lot of ideas get stranded before they’re even “whitelisted”.
> I mostly agree with your description of the problem, and I sympathise with your past self. However, I also think you understate the extent to which the EA and rationality communities are based around individual friendships. That makes things much messier than they might be in a corporation, and makes definitions like the one you propose much harder.

> On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.

> On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
Yes, yes. All of this.