Just for the record:
the labs are planning to automate AI development; we’re already seeing AIs automating many simple AI research tasks [show them a demo of that];
On this one I wouldn’t claim to know something that others don’t; it’s also what I would assume.
given the METR trendline it’s likely that there will be full or near-full automation in 2027 to 2029;
Pretty skeptical of “full” or “near-full”, in that I would still expect things to be largely bottlenecked on human judgement, and not making much more progress than currently (say, not more than 2x faster?); in other words, I expect Amdahl’s law to bite. There’s also a big ambiguity and/or assumption about what exactly is being automated: if you successfully automate a process which doesn’t invent AGI, then you’ve done something, but you haven’t set off an intelligence explosion or anything similar.
and (while it is hard to forecast with any precision) is likely that there will be strategically superhuman AI agents weeks to months after that point.
This is where I think no one has any especially strong reason to think so, or at least, no one has told me so far even in private, let alone publicly (and therefore consensus views seem quite mistaken).
There’s also a piece here of expressing “I didn’t move to the right because I was biased in its favor; I had the opposite bias, and it took a lot of evidence to shift me over, which is meta-evidence in favor of my position.”
(I also don’t generally see “the left went crazy and drove me to the far right”; usually that’s said by someone who went over the line from moderate left to moderate right. People who jump between extremes usually make different arguments.)