If you’re looking for people interested in personal strategies for individuals (e.g. earning to give), I think most of them are on the Effective Altruism Forum rather than LessWrong. Network effects mean that everyone interested in a topic tends to cluster in one forum, even when they initially have two to choose from.
Another speculative explanation: the upvote system lets the group of people interested in one particular topic (e.g. technical research or conceptual theorization) upvote every post on that topic without ever running out of upvotes. This rewards people for repeatedly writing posts on the most popular topics, since it’s much easier to end up net positive in upvotes that way.
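To make that incentive concrete, here is a toy back-of-the-envelope sketch (a minimal Python model with entirely made-up numbers, not real LessWrong data): it only illustrates that when upvotes aren’t a scarce resource, expected karma scales with the size of the interested audience, so the popular topic is reliably net positive while the niche one hovers around zero.

```python
# Toy model: expected net karma when every interested reader can upvote freely.
# All rates and audience sizes below are invented for illustration only.

def expected_karma(interested_readers, upvote_rate=0.2,
                   other_readers=1000, downvote_rate=0.01):
    """Expected net karma = upvotes from the interested audience
    minus occasional downvotes from everyone else."""
    upvotes = interested_readers * upvote_rate
    downvotes = other_readers * downvote_rate
    return upvotes - downvotes

# Hypothetical audience sizes for a popular topic vs. a niche one.
print("popular topic:", expected_karma(interested_readers=500))  # 100 - 10 = 90
print("niche topic:  ", expected_karma(interested_readers=50))   #  10 - 10 = 0
```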
PS: I agree that earning to give is reasonable.
I’m considering it myself right now :)
I mostly agree with you that hiring experts and having a great impact is feasible. Many of the technical alignment researchers who lament “money isn’t what we need, what we need is to be headed in the right direction instead of having so much fake research!” fail to realize that their own salaries come from those same flawed but nonetheless vital funding sources. If it weren’t for the flawed funding sources, they would have nothing at all.
Some of them might be wealthy enough to fund themselves, but that’s still effectively making money to hire experts (the expert just happens to be themselves).
And yes, some people use AI safety careers as a stepping stone to AI capabilities careers. But realistically, the whole world spends less than $0.2 billion on AI safety and hundreds of billions on AI capabilities, so AI safety salaries are negligible here. One might argue that the non-monetary moral motivation of working on AI safety has caused people to end up working on AI capabilities, but in that case increasing AI safety salaries should reduce this flow rather than increase it.
But Raemon is so right about the great danger of being a net negative. Don’t follow an “ends justify the means” strategy like Sam Bankman-Fried, and beware of your ego convincing you that AI is safer so long as you’re the guy in charge (like Sam Altman or Elon Musk). These biases are insidious, because we are machines programmed by evolution not to seek truth for the sake of truth, but to:
Arrive at the truth when it increases inclusive fitness
Arrive at beliefs which get us to do evil while honestly believing we are doing good (when it increases inclusive fitness)
Arrive at said beliefs while wholly believing we are seeking the truth
Isn’t the most upvoted curated post right now about winning? “A case for courage, when speaking of AI danger” is about strategy, not technical research.