(To be clear, I’m not claiming that we ran the title “AI 2027” by you. I don’t think we had chosen a title yet at the time we talked; we just called it “our scenario.” My claim is that we were genuinely interested in your feedback & if you had intervened prior to launch to tell us to change the title, we probably would have. We weren’t dead-set on the title anyway; it wasn’t even my top choice.)
I think your timelines were too aggressive but I wouldn’t worry about the title too much. If by the end of 2027, AI progress is significant enough that no one thinks it’s on track to stay a “normal technology,” then I don’t think anyone would hold the 2027 title against you. And if that’s not the case, then titling it AI 2029 wouldn’t have helped.
Thanks Boaz, that’s encouraging to hear.
Out of curiosity, what was your top choice?
We did a survey to choose the name, so I have data on this! Apparently my top choice was “What 2027 Looks Like,” my second was “Crunchtime 2027,” and my third choice was “What Superintelligence Looks Like.” With the benefit of hindsight I think only my third choice would have actually been better.
Note that we did the survey after having already talked about it a bunch; IIRC my original top choice was “What 2027 looks like” with “What superintelligence looks like” runner-up, but I had been convinced by discussions that 27 should be in the title and that the title should be short. I ended up using “What superintelligence looks like” as a sort of unofficial subtitle, see here: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1
“What 2027 looks like” was such an appealing title to me because this whole project was inspired by the success of “What 2026 looks like,” a blog post I wrote in 2021 that held up pretty well and which I published without bringing the story to its conclusion. I saw this project as (sort of) fulfilling a promise I made back then to finish the story.
To be clear, the URL for “What Superintelligence Looks Like” that was listed in that survey was “superintelligence2027.com”, so that one also had the year in the name!
lol oh yeah.
I understand the epistemic health concerns, but I think “AI 2027” was great, since I don’t think the alternatives would have gained as much attention and it does cleanly summarize the scenario. Even if actual timelines are longer (which imo they probably are), my guess is it is still a net positive as long as readers properly understood the dangers and thought the sequence of events was believable enough.
(IMU[ninformed]O, “What superintelligence looks like” is a significantly less epistemically toxic title for that piece than “AI 2027”.)
The scenario really doesn’t focus very much on describing what superintelligence looks like! It has like 7 paragraphs on this? Almost all of it is about trends around when powerful AI will arrive.
And then separately, “What superintelligence looks like” is claiming a much more important answer space than “I think something big will happen with AI in 2027, and here is a scenario about that”.
What you say makes perfect sense; yet, somehow something still feels bad about “AI 2027”. I’m not sure what, so I’m not sure if my sense is good/true/fair. Maybe my sense is about the piece rather than the title. At a vague guess, it’s something about “hype”. Like, “AI 2027” is somehow in accordance with hype—using it, or adding to it, or something. But maybe the crux is just that I think the timelines are overconfident, or that it’s just bad to describe stuff like this in detail (because it’s pumping in narrativium without adding enough info), or something. I’m not sure.
Insofar as zero is significantly smaller than epsilon, yes.