By contrast, some reasons to be skeptical that AI will be automating more than a few percent of the economy by 2033 (still absent AI R&D feedback loops and/or catastrophe):
I currently expect substantial AI R&D acceleration from AIs that are capable of cheap and fast superhuman performance at arbitrary easy-and-cheap-to-check domains (especially if these AIs are very superhuman). Correspondingly, I think “absent AI R&D feedback loops” might be doing a lot of the work here. Minimally, I think full automation of research engineering would yield a large acceleration (e.g., 3x faster AI progress), though this requires high performance in (some) non-formal domains. If AIs were (very) superhuman at easy-and-cheap-to-check tasks, you could probably use them for a large fraction of coding tasks and for some small-scale research tasks that might transfer well enough, and there would be a decent chance of enough transfer to extend substantially beyond this.
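As a rough illustration of where a figure like 3x could come from, consider an Amdahl’s-law-style sketch (the assumption that research engineering accounts for roughly 2/3 of the serial bottleneck on AI R&D is mine, purely for illustration, not a figure from the original):

$$
\text{speedup} = \frac{1}{(1-f) + f/s} \;\xrightarrow{\,s \to \infty\,}\; \frac{1}{1-f}, \qquad f = \tfrac{2}{3} \;\Rightarrow\; \text{speedup} = 3,
$$

where $f$ is the fraction of AI R&D progress bottlenecked on research engineering and $s$ is the speedup automation delivers on that fraction.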
I think I still agree with the bottom line “it’s plausible that for several years in the late 2020s/early 2030s, we will have AI that is vastly superhuman at formal domains including math, but still underperforms humans at most white-collar jobs”. And I agree more strongly if you change “underperforms humans at most white-collar jobs” to “can’t yet fully automate AI R&D”.
I think one can make a stronger claim: the Curry-Howard isomorphism means a superhuman (constructive?) mathematician would near-definitionally be a superhuman (functional?) programmer as well.
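For concreteness, here is a minimal Lean sketch of that correspondence (my illustration, not from the original; the names `k_prop` and `k_fn` are hypothetical, and the correspondence is cleanest for constructive proofs and total functional programs, hence the parentheticals above):

```lean
-- Curry-Howard in miniature: a proof of the proposition A → (B → A)
-- and the constant-function (K) combinator are the same term.

-- Read as logic: a proof that A implies (B implies A).
theorem k_prop (A B : Prop) : A → B → A :=
  fun a _ => a

-- Read as programming: the K combinator over ordinary types.
def k_fn (α β : Type) : α → β → α :=
  fun a _ => a
```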