Regarding power laws, I am making a strong claim about how reality works: reality, not methodology. The things we value are power-law distributed. Whether it's health interventions, successful startup ideas, or altruistic causes, selecting the right one is where most of the variance is.
As a result, one's ability to do good will indeed be very noisy, which is why many good funders take hits-based approaches. Peter Thiel, for example, is known for asking interesting people for their three weirdest ideas and funding at least one of them. He funded MIRI early, probably while quite unsure of it at the time, so I consider him to have picked up some strong hits.
That's also my feeling about the EA post on SSC. I'm generally not happy with the low-variance approaches taken within EA and am sad at how few new ideas are being tested, but I think that saying there are too many orgs doing weird things to figure out what matters pushes the number in the wrong direction.
Sure, I'm basically happy with this, modulo taking "power law" with the appropriate grains of salt (e.g., replacing it with a log-normal or some other heavy-tailed distribution as appropriate).
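To make the heavy-tail claim concrete, here is a minimal sketch of why selection dominates under such distributions. It samples "impact" from a Pareto distribution; the shape parameter `alpha = 1.2` is an arbitrary illustrative choice, not a claim about any real dataset, and a log-normal would show a similar (if milder) concentration.

```python
import random

random.seed(0)

# Draw n "project impacts" from a Pareto (power-law) distribution.
# With shape alpha = 1.2 the tail is heavy: variance is infinite,
# so a handful of draws dominate the total.
alpha = 1.2
n = 10_000
samples = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)

total = sum(samples)
top_1_percent = sum(samples[: n // 100])
share = top_1_percent / total
print(f"share of total impact held by the top 1% of projects: {share:.2f}")
```

Under these assumptions the top 1% of draws captures a large fraction of the total, which is the hits-based intuition: picking the right project matters far more than average execution across many projects.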