I think williawa’s characterization of how people reacted to bioanchors basically matches my experience and I’m skeptical of the claim that OpenPhil was very politically controlling of EA with respect to timelines.
And, I agree with the claim that Eliezer often implies people interpreted bioanchors in some way they didn’t. (I also think bioanchors looks pretty reasonable in retrospect, but this is a separate claim.)
OpenPhil was on the board of CEA and fired its Executive Director, and to this day has never said why; it made demands about who was allowed to have power inside the Atlas Fellowship and who was allowed to teach there; it would fund MIRI at 1/3rd of the full amount for (explicitly stated) signaling reasons; in most cases it was not open about why it would or wouldn’t grant things (often not even with grantees!), which left me having to use my sense of ‘fashion’ to predict who would get grants and how much; I’ve heard rumors, which I put some credence in, that it wouldn’t fund AI advocacy work in order to stay in the good books of the AI labs… there was really a lot of opaque politicking by OpenPhil, which would of course have a big effect on how people were comfortable behaving and thinking around AI!
It’s silly to think that a politically controlling entity would have to punish people for stepping out of line on one particular thing in order for people to conform on that particular thing. Many people will compliment a dictator’s clothes even when he didn’t specifically ask for that.
Which claim?