Reading between the lines on the responses, it sounds like OP doesn’t have the ability to evaluate grants effectively and has attribute-substituted toward doing things that superficially look like evaluation, selecting internally for people who are unable to distinguish between appearance and actuality. This sounds like a founder effect, downstream of Dustin and Cari being unable to evaluate. It rhymes with a similar dynamic in the VC world: people on the outside assume it’s about funding cutting-edge, highly uncertain projects, but after lots of wasted effort, those interested in such high-variance projects eventually conclude that VC mostly selects for low variance with a bias towards insiders.
That is to say: investors recognize that they don’t have expertise in selecting unusual projects, so they hire people to ostensibly specialize in evaluating unusual projects, but their own taste in selecting the evaluators means that the evaluators eventually select/are selected for pleasing the investors.
To be specific: some combination of OP/GV acts like its opportunity cost for capital is quite high, and it’s unclear why. One hypothesis is ‘since we’re unable to evaluate grants, if we’re profligate with money we will be resource-pumped even more than we already are.’
My impression for several years has been that the effort people trying to do interesting work put into engaging with EA was wasted, and led to big emotional letdowns that impacted their productivity.
There continue to be almost no weirdness dollars available. Temporary availability of weirdness dollars seems to get eaten by those who are conventionally attractive but put on quirky glasses and muss up their hair to appear weird, like geek protagonists in movies. There’s no escaping the taste of the founder in the long run.
What, no, Oli says OP would do a fine job and make grants in rationality community-building, AI welfare, right-wing policy stuff, invertebrate welfare, etc. but it’s constrained by GV.
[Disagreeing since this is currently the top comment and people might read it rather than listen to the podcast.]
I don’t currently believe this, and don’t think I said so. I do think the GV constraints are big, but my overall assessment of the net effect of Open Phil’s actions is that it’s net bad, even if you control for GV, though the calculus gets a lot messier and I am much less confident. Some of that is because of the evidential update from how they handled the GV situation, but also IMO Open Phil has made many other quite grievous mistakes.
My guess is an Open Phil that had continued to be run by Holden would probably be good for the world. I have many disagreements with Holden, and it’s definitely still a high-variance situation, but I’ve historically been impressed with his judgement on many issues that I’ve seen OP mess up in recent years.
Last year I read through the past ~4 years of Open Phil grants and was briefly reassured by seeing a bunch of good grants. Then I noticed that almost all of the ones that went to places doing work which might plausibly help with superintelligence were made before Holden left, and I was much less reassured.
Reasonable, I don’t know much about the situation