This is not a response to your central point, but I feel like you often criticize EAs somewhat unfairly over stuff like bioanchors. You often say things that make it sound as if bioanchors was released, all EAs bought it wholesale, bioanchors shows we can be confident AI won’t arrive before 2040 or something, and thus all EAs were convinced we don’t need to worry much about AI for a few decades.
But like, I consider myself an EA, and I never put much weight on bioanchors. I read the report and found it interesting; I think it’s useful enough (mostly as a datapoint for other arguments you might make) that I don’t think it was a waste of time, but not much more than that. It didn’t really change my views on what should be done, or on the likelihood of AGI being developed at various points in time, except on the margins. That’s how most people I know read the report. But I feel like you accuse the people involved of having far less humility, and of making way stronger claims, than they actually did.
Notably, bioanchors doesn’t say that we should be confident AI won’t arrive before 2040! Here’s Ajeya’s distribution in the report (which was finished in about July 2020).
Yeah, to be clear, I don’t think that, and I think most people didn’t think that, but Eliezer has sometimes said stuff that made it seem like he thought people think that. I was remembering a quote from 2:49:00 in this podcast:
...effective altruists were devoting some funding to this issue basically because I browbeat them into it. That’s how I would tell the story. And a whole bunch of them, like, their theory of AI three years ago was that we probably had about 30 more years in which to work on this problem, because of an elaborate argument about how large an AI model needed to be by analogy to human neurons, and it would be trained via the following scaling law, which would require this many GPUs, which at the rate of Moore’s Law and this, like, attempted rate of software progress began 30 years. And I was like:
this entire thing falls apart at the very first joint, where you’re trying to make an analogy between the AI models and the number of human neurons. This entire thing is bogus; it’s been tried before in all these historical examples, none of which were correct either. And the effective altruists, like, can’t tell that I’m speaking sense and that the 30-year projection has no grasp on reality. If they can’t tell the difference between a good and bad argument there until, you know, stuff starts to blow up,
now how do you tell who’s making progress in alignment? I can stand around being like: no, no, that’s wrong, that’s wrong too, this is particularly going to fail, you know, like, this is how it will fail when you try it. But as far as they know, they’re inventing brilliant solutions...
This makes it sound like bioanchors makes a stronger claim than it does, and that EAs are much more dogmatic about that report than most EAs actually are. Although, to be fair, he did say “probably” here.
Upvote-disagree. I think you’re missing an understanding of how influential it was in OpenPhil circles, and how politically controlling OpenPhil has been of EA.
This seems very wrong to me from my experience in 2022 (though maybe the situation was very different in 2021? Or maybe there is some other social circle that I wasn’t exposed to which had these properties?).
Which claim?
I think williawa’s characterization of how people reacted to bioanchors basically matches my experience, and I’m skeptical of the claim that OpenPhil was very politically controlling of EA with respect to timelines.
And I agree that Eliezer often implies people interpreted bioanchors in a way they didn’t. (I also think bioanchors looks pretty reasonable in retrospect, but this is a separate claim.)
OpenPhil was on the board of CEA and fired its Executive Director, and to this day has never said why; it made demands about who was allowed to have power inside the Atlas Fellowship and who was allowed to teach there; it would fund MIRI at 1/3rd of the full amount for (explicitly stated) signaling reasons; in most cases it was not open about why it would or wouldn’t grant things (often even with grantees!), which left me just having to use my sense of ‘fashion’ to predict who would get grants and how much; and I’ve heard rumors, which I put some credence on, that it wouldn’t fund AI advocacy stuff in order to stay in the good books of the AI labs… there was really a lot of opaque politicking by OpenPhil, which would of course have a big effect on how people were comfortable behaving and thinking around AI!
It’s silly to think that a politically controlling entity would have to punish people for stepping out of line on one particular thing in order for people to conform on that particular thing. Many people will compliment a dictator’s clothes even when he didn’t specifically ask for that.