The median narrative is probably around 2030 or 2031. (At least according to me. Eli Lifland is smarter than me and says December 2028, so idk.)
Notably, this is Eli's forecast for "superhuman coder," which could be substantially before AIs are capable enough for takeover to be plausible.
I think Eli's median for "AIs which dominate top human experts at virtually all cognitive tasks" is around 2031, but I'm not sure.
(Note that a median for superhuman coder of 2029 and a median for "dominates human experts" of 2031 don't imply a median of 2 years between these events: the median of the gap isn't the difference of the medians, because these distributions aren't symmetric and instead have long right tails.)
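To make that concrete, here's a toy Monte Carlo sketch. The lognormal parameters are made up for illustration and are not the actual AI 2027 forecast distributions; the point is just that when the gap between milestones has a long right tail, the difference between the two milestones' medians can come out substantially larger than the median of the gap itself.

```python
import numpy as np

# Toy illustration (made-up parameters, not the AI 2027 model) of why the
# difference of milestone medians can exceed the median of the gap.
rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical arrival time of superhuman coder (SC): lognormal years from
# 2025 with median 4, i.e. median arrival 2029.
t_sc = 2025 + rng.lognormal(mean=np.log(4.0), sigma=0.8, size=n)

# Hypothetical SC -> "dominates top experts" gap: median 1 year, long right tail.
gap = rng.lognormal(mean=np.log(1.0), sigma=1.2, size=n)
t_tedai = t_sc + gap

print(f"median SC arrival:     {np.median(t_sc):.1f}")   # ~2029 by construction
print(f"median TEDAI arrival:  {np.median(t_tedai):.1f}")
print(f"difference of medians: {np.median(t_tedai) - np.median(t_sc):.1f} years")
print(f"median of the gap:     {np.median(gap):.1f} years")  # ~1 year by construction
# The difference of medians comes out noticeably larger than the median gap,
# because the gap's long right tail drags the median of the sum out further.
```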
My median for superhuman coder is roughly 2030, and yeah, for TEDAI (top-expert-dominating AI) roughly 2031. We included our all-things-considered views in a table in the timelines forecast, which are a bit longer than our within-model views.
Ultimately we circulated the benchmarks-and-gaps figure as the primary one because it's close to our all-things-considered views and we didn't have time to make a similar figure for our all-things-considered forecast. Perhaps this was a mistake, per @Max Harms's point about appearing to have faster timelines than we do (though Daniel's distribution is a bit faster than my benchmarks-and-gaps distribution, with a median in early 2028 instead of late 2028).
[Responding to a related point from the OP] An important takeaway from this is that we should expect people to look back on this scenario and think it was too fast (because it didn’t account for [unlikely event that happened anyway]). I don’t know any way around this; it’s largely going to be a result of people not being prediction-literate. Still, best to write down the prediction of the backlash in advance, and I wish AI 2027 had done this more visibly. (It’s tucked away in a few places, such as footnote #1.)
Yeah, it seems plausible we should have signaled this more strongly, though it may have been tough to do so without undermining our own credibility too much in the eyes of many readers, given that norms around caveats are quite different in non-rationalist spaces. It being footnote 1 is already decently prominent.
This makes sense. Sorry for getting that detail wrong!