…people who were worried because of listening to this algorithm should chill out and re-evaluate.
And communication strategies based on appealing to such people’s reliance on those algorithms should also re-evaluate.
E.g., why did folk write AI 2027? Did they honestly think the timeline was that short? Were they trying to convey a picture that would scare people with something on a short enough timeline that they could feel it?
If the latter, we might be doing humanity a disservice, both by exhausting people through something akin to adrenal fatigue, and also as a result of crying wolf.
Yes, I honestly thought the timeline was that short. I now think it’s 50% by end of 2028; over the last year my timelines have lengthened by about a year.
Well, extrapolating that, it sounds like things are fine. :P
It has indeed been really nice, psychologically, to have timelines that are lengthening again. From 2020 to 2024 that was not the case.
You wrote AI 2027 in April… what changed in such a short amount of time?
If your timelines lengthened over the last year, do you think writing AI 2027 was an honest reflection of your opinions at the time?
The draft of AI 2027 was done in December; then we had months of editing and rewriting in response to feedback. For more on what changed, see various comments I made online such as this one: https://www.lesswrong.com/posts/cxuzALcmucCndYv4a/daniel-kokotajlo-s-shortform?commentId=dq6bpAHeu5Cbbiuyd
We said right on the front page of AI 2027, in a footnote, that our actual median AGI timelines were somewhat longer than 2027.
I also mentioned my slightly longer timelines in various interviews about it, including the first one with Kevin Roose.
OpenAI researcher Jason Wei recently stated that there will be many bottlenecks to recursive self-improvement (experiments, data). Thoughts?
https://x.com/_jasonwei/status/1939762496757539297
He makes some obvious points everyone already knows about bottlenecks etc., but then doesn't explain why all that adds up to a decade or more instead of a year, or a month, or a century. In our takeoff speeds forecast we try to give a quantitative estimate that takes all the bottlenecks etc. into account.
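To illustrate the flavor of that kind of estimate with a toy calculation (made-up numbers and a hypothetical helper, not the model in our takeoff speeds forecast): if some fraction of AI R&D is hard-bottlenecked on experiments and data while AI speeds up the rest, an Amdahl's-law-style formula shows how the bottlenecks cap the overall speedup without, on their own, telling you whether the answer is a year, a decade, or a century.

```python
# Toy sketch with illustrative numbers only, not the takeoff speeds forecast's model.
def amdahl_speedup(bottlenecked_fraction: float, ai_speedup: float) -> float:
    """Overall R&D speedup when only the non-bottlenecked fraction of the work
    is accelerated by `ai_speedup` (Amdahl's-law-style serial bottleneck)."""
    return 1.0 / (bottlenecked_fraction + (1.0 - bottlenecked_fraction) / ai_speedup)

# If 30% of AI R&D time were hard-bottlenecked on experiments/data and AI made
# the rest 10x faster, overall progress would be ~2.7x faster, not 10x:
print(amdahl_speedup(0.3, 10.0))    # ~2.70
print(amdahl_speedup(0.3, 1000.0))  # ~3.33 (caps at 1/0.3 no matter how fast the AI)
```

The point of a sketch like this is just that "there are bottlenecks" is where the estimate starts, not where it ends; the conclusion depends entirely on the numbers you plug in.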
Isn’t it more like “I think there’s a 10% chance of transformative AI by 2027, and that is like 100x higher than what it looks like most people think, so people really need to think thru that timeline”?
Like, I generally put my median year at 2030-2032; if we make it to 2028, the situation will still feel like “oh jeez we probably only have a few years left”, unless we made it to 2028 thru a mechanism that clearly blocks transformative AI showing up in 2032. (Like, a lot is hinging on what “feels basically like today” means.)
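To put toy numbers on that (purely illustrative probabilities, not my actual distribution): if your year-by-year forecast has a median around 2031 and you then condition on transformative AI not having arrived by the end of 2028, the conditional median only moves out to roughly 2032.

```python
# Toy illustration with made-up probabilities, not anyone's actual forecast.
# Probability of transformative AI arriving in each year (prior median ~2031);
# the leftover mass goes on "2036 or later".
probs = {2026: 0.05, 2027: 0.07, 2028: 0.08, 2029: 0.10, 2030: 0.12,
         2031: 0.12, 2032: 0.11, 2033: 0.09, 2034: 0.07, 2035: 0.05}
later = 1.0 - sum(probs.values())  # ~0.14 on 2036+

# Condition on "no transformative AI by end of 2028" and renormalize what's left.
remaining = {year: p for year, p in probs.items() if year > 2028}
z = sum(remaining.values()) + later
posterior = {year: p / z for year, p in remaining.items()}

cumulative = 0.0
for year in sorted(posterior):
    cumulative += posterior[year]
    if cumulative >= 0.5:
        print("conditional median year:", year)  # prints 2032
        break
```

So on numbers like these, making it to 2028 only pushes the median out by about a year, which is why it would still feel like "we probably only have a few years left."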
I think Daniel also just has shorter timelines than most (which correlates with wanting to communicate that knowledge more urgently).
That might be. It sounds really plausible. I don’t know why they wrote it!
But all the same: I don’t think most people know what a 10% likelihood of a severe outcome is like or how to think about it sensibly. My read is that the vast majority of people need to treat a 10% likelihood of doom as either “It’s not going to happen” (because 10% is small) or “It’s guaranteed to happen” (because it’s a serious outcome if it does happen, and it’s plausible). So, amplifying public awareness of this possibility seems to me more like moving awareness of the scenario from “Nothing existential is going to happen” to “This specific thing is the default thing to expect.”
So I expect that unless something is done to… I don’t know, magically educate the population on statistical thinking, or propagate a public message that it’s roughly right but its timeline is wrong, the net effect will be that either (a) AI 2027 will have been collectively forgotten by 2028 in roughly the same way that, say, Trudeau’s use of the Emergencies Act has been forgotten; or (b) the predictions failing to pan out will be used as a reason to dismiss other AI doom predictions that are apparently considered more likely.
The main benefit I see is if, as a result of AI 2027, some key folk are prompted to think about AI doom scenarios in general and start to work out how to deal with other scenarios.
But I don’t know. That’s been part of this community’s strategy for over two decades. Get key people thinking about AI risk. And I’m not too keen on the results I’ve seen from that strategy so far.