(I wrote up my long timelines take as a new post. It’s somewhat tongue-in-cheek, myopically making only a few of the more legible points (for example, even slowly growing funding will match the 2,000x scaleout well before 2050 if compute price-performance continues on trend). But the overall framing is that there won’t obviously be anything left that’s going to predictably blow up on a schedule if we survive 2028, and the danger of 2022-2028 will be matched only by the much more diluted danger of 2028-2050, with the usual basic progress in methods of AI training becoming more important than the current breakneck pace of compute scaling.)
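The funding-and-price-performance point can be checked with a back-of-envelope calculation. The growth rates below are illustrative assumptions, not figures from the post: suppose compute price-performance doubles roughly every two years and funding grows slowly at around 10% per year.

```python
import math

# Hedged assumptions (illustrative, not the post's exact numbers):
price_perf_growth = 2 ** (1 / 2)   # price-performance doubling every ~2 years, ~1.41x/yr
funding_growth = 1.10              # "slowly growing" funding, assumed ~10%/yr
combined = price_perf_growth * funding_growth  # effective compute growth per year

target = 2000                      # the 2,000x scaleout referenced above
years = math.log(target) / math.log(combined)
print(f"~{years:.0f} years to reach {target}x")
```

Under these assumptions the 2,000x mark arrives in roughly 17 years, i.e. in the mid-2040s if counted from the late 2020s, consistent with "earlier than 2050"; even noticeably slower funding growth leaves the crossover before mid-century, since price-performance alone contributes most of the exponent.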