You can do better by saying “I don’t know” than by saying a bunch of wrong stuff. My long reply to Cotra was, “You don’t know, I don’t know, your premises are clearly false, and if you insist on my being Bayesian and providing a direction of predictable error when I claim predictable error then fine your timelines are too long.”
I think an important point is that people can be wrong about timelines in both directions. Anthropic’s official public prediction is that they expect a “country of geniuses in a data center” by early 2027. I heard that previously Dario predicted AGI to come even earlier, by 2024 (though I can’t find any source for this now and would be grateful if someone found a source or corrected me that I’m misremembering). Situational Awareness predicts AGI by 2027. The AI safety community’s most successful public output is called AI 2027. These are not fringe figures but some of the most prominent voices in the broader AI safety community. If their timelines turn out to be much too short (as I currently expect), then I think Ajeya’s predictions deserve credit for pushing against these voices, and not only blame for stating too long a timeline.
And I feel it’s not really true that you were just saying “I don’t know” and not implying some predictions yourself. You had the 2030 bet with Bryan. You had the tweet about children not living to see kindergarten. You strongly pushed back against the 2050 timelines, but as far as I know the only time you pushed back against the very aggressive timelines was your kindergarten tweet, which still implies 2028 timelines. You are now repeatedly calling people who believed the 2050 timelines total fools, which would, imo, be a very unfair thing to do if AGI arrived after 2037, so I think this implies high confidence on your part that it will come before 2037.
To be clear, I think it’s fine, and often inevitable, to imply things about your timeline beliefs by e.g. what you do and don’t push back against. But I don’t think it’s fair to claim that you only said “I don’t know”: I think your writing was (perhaps unintentionally?) implying an implicit belief that an AI capable of destroying humanity will arrive with a median around 2028-2030. I think this would have been a fine prediction to make, but if AI capable of destroying humanity comes after 2037 (which I think is close to 50-50), then I think your implicit predictions will fare worse than Ajeya’s explicit predictions.
I looked at “AI 2027” as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn’t bother pushing back because I didn’t expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that’s been a practice (it has now been replaced by trading made-up numbers for p(doom)).
Huh. I’m fairly confident that we would have chosen a different title if you complained about it to us. We even interviewed you early in the process to get advice on the project, remember? For a while I was arguing for “What Superintelligence Looks Like” as my preferred title, and this would have given me more ammo.
Noted as a possible error on my part.
(To be clear, I’m not claiming that we ran the title “AI 2027” by you. I don’t think we had chosen a title yet at the time we talked; we just called it “our scenario.” My claim is that we were genuinely interested in your feedback & if you had intervened prior to launch to tell us to change the title, we probably would have. We weren’t dead-set on the title anyway; it wasn’t even my top choice.)
I think your timelines were too aggressive, but I wouldn’t worry about the title too much. If, by the end of 2027, AI progress is significant enough that no one thinks it’s on track to stay a “normal technology,” then I don’t think anyone would hold the 2027 title against you. And if that’s not the case, then titling it AI 2029 wouldn’t have helped.
Thanks Boaz, that’s encouraging to hear.
Out of curiosity, what was your top choice?
We did a survey to choose the name, so I have data on this! Apparently my top choice was “What 2027 Looks Like,” my second was “Crunchtime 2027,” and my third choice was “What Superintelligence Looks Like.” With the benefit of hindsight I think only my third choice would have actually been better.
Note that we did the survey after having already talked about it a bunch; IIRC my original top choice was “What 2027 looks like” with “What superintelligence looks like” runner-up, but I had been convinced by discussions that 27 should be in the title and that the title should be short. I ended up using “What superintelligence looks like” as a sort of unofficial subtitle, see here: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1
“What 2027 looks like” was such an appealing title to me because this whole project was inspired by the success of “What 2026 looks like,” a blog post I wrote in 2021 that held up pretty well and which I published without bringing the story to its conclusion. I saw this project as (sort of) fulfilling a promise I made back then to finish the story.
To be clear, the URL for “What Superintelligence Looks Like” that was listed in that survey was “superintelligence2027.com”, so that one also had the year in the name!
lol oh yeah.
I understand the epistemic health concerns, but I think “AI 2027” was great, since I don’t think the alternatives would have gained as much attention and it does cleanly summarize the scenario. Even if actual timelines are longer (which imo they probably are), my guess is it is still a net positive, as long as readers properly understood the dangers and thought the sequence of events was believable enough.
(IMU[ninformed]O, “What superintelligence looks like” is a significantly less epistemically toxic title for that piece than “AI 2027”.)
The scenario really doesn’t focus very much on describing what superintelligence looks like! It has like 7 paragraphs on this? Almost all of it is about the trends around when powerful AI will arrive.
And then separately, “What superintelligence looks like” is claiming a much more important answer space than “I think something big will happen with AI in 2027, and here is a scenario about that”.
What you say makes perfect sense; yet, somehow something still feels bad about “AI 2027”. I’m not sure what, so I’m not sure if my sense is good/true/fair. Maybe my sense is about the piece rather than the title. At a vague guess, it’s something about “hype”. Like, “AI 2027” is somehow in accordance with hype—using it, or adding to it, or something. But maybe the crux is just that I think the timelines are overconfident, or that it’s just bad to describe stuff like this in detail (because it’s pumping in narrativium without adding enough info), or something. I’m not sure.
Insofar as zero is significantly smaller than epsilon, yes.
I think it’s good to push back!
I disagree: IMO, if someone believes that timelines are that short, it clearly makes sense for them to say so loudly and publicly, both so that they can stand corrected when it doesn’t happen and so that people can take the problem with appropriate urgency. And my guess is that I had a non-trivial amount of influence on how AI 2027 was done, and will have influence on future projects by the AI Futures Team; I am also pretty open to arguments in this space even by your lights, so such pushback would not have been in vain.
Agree with what Habryka said. Also, Daniel, I, and other AIFP people care about being cooperative and would update on feedback. If anyone is interested in giving feedback on our new scenario about a positive vision post-AGI (about either the content or the name/branding), please email me.
Also, to reiterate: AI 2027 was obviously not a confident prediction of AGI in 2027; it was a scenario in which AGI arrives in 2027, which seems like a plausible and IMO ~modal timeline, and we clearly stated this on the website.
if you insist on my being Bayesian and providing a direction of predictable error when I claim predictable error then fine your timelines are too long.

That doesn’t sound like the correct response though. You should just say “I predict this isn’t the reason AGI will come late, if AGI comes late.” It’s much less legible / operationalized, but if that’s what you think you know in the context, why add on extra stuff?
When somebody at least pretending to humility says, “Well, I think this here estimator is the best thing we have for anchoring a median estimate”, and I stroll over and proclaim, “Well I think that’s invalid”, I do think there is a certain justice in them demanding of me, “Well, would you at least like to say then in what direction my expectation seems to you to be predictably mistaken?”
Cotra’s model contained estimates that are as obviously BS as anchoring the size of a TRANSFORMATIVE neural net to the GENOME, or the training compute to the entire evolution of life on Earth. I don’t think I understand how Cotra even came up with these two ideas. What I do understand is how Cotra came up with estimates like 1e31 FLOP or the lifetime anchor of 1e24 FLOP, which are likely the only plausible ones in the report. As far as I understand, THESE assumptions would imply that creating TAI is easy.
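For concreteness, here is a minimal back-of-the-envelope sketch of how anchors of this kind get constructed; the round numbers below are my own illustrative assumptions in the spirit of the report, not its exact figures or probability distributions:

```python
# Rough sketch of two "biological anchors" for training compute.
# All numbers are illustrative assumptions, not the report's exact estimates.

SECONDS_PER_YEAR = 3.15e7

# Lifetime anchor: assume a human brain runs at ~1e15 FLOP/s and that
# reaching adult-level capability takes on the order of 30 years.
brain_flop_per_s = 1e15            # assumed brain compute
development_years = 30             # assumed "training" duration
lifetime_anchor = brain_flop_per_s * development_years * SECONDS_PER_YEAR
print(f"lifetime anchor  ~ {lifetime_anchor:.0e} FLOP")   # prints 9e+23, i.e. ~1e24 FLOP

# Evolution anchor: assume ~1 billion years of nervous-system evolution,
# with an average of ~1e25 FLOP/s of neural computation happening across
# all animals alive at any given time (a very rough assumption).
evolution_years = 1e9
population_flop_per_s = 1e25
evolution_anchor = population_flop_per_s * evolution_years * SECONDS_PER_YEAR
print(f"evolution anchor ~ {evolution_anchor:.0e} FLOP")  # prints 3e+41 FLOP
```

The point is only that each anchor is a product of a handful of order-of-magnitude guesses, which is where the disagreement above about which anchors are plausible comes from.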