I agree that late in the singularity, AI workflows may be so different from humans’ that we learn very little from extrapolating from human returns to software R&D, but I expect that early in the takeoff, AIs may look significantly more like large-scale human labor (especially if they are still largely managed by humans). If existing returns are insufficient for a takeoff, that should update us against a software-only takeoff in general because it makes initial compounding less likely.
I also expect to observe relevant returns in the near future as AIs increasingly automate AI R&D (many of the points in the above post would cover this). Early automation may give us some evidence about mid-takeoff dynamics.
My point is more that if there isn’t some hidden cognitive bottleneck (one that humans plausibly don’t perceive as a bottleneck, since it was always there and relatively immutable), then there wouldn’t be a software-only singularity: it would take long enough that significantly more hardware will have time to come online, even after AIs can act as human replacements in the labor market. There will be returns according to factors that can be tracked earlier, but they won’t produce a superintelligence that takes over the trees and such before tens of trillions of dollars in AI company revenues buy 100x more compute, compute that is now designed by these AIs but still largely follows existing methods and trends.
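To make the compute claim concrete, here is a minimal back-of-envelope sketch. The current spending level and reinvestment fraction are my own illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope: how much more compute could "tens of trillions of dollars
# in AI company revenues" buy, relative to today's spending?
# Assumed (illustrative): ~$200B/year current AI compute spending, and half of
# future revenue reinvested in compute, at constant price-performance.

current_compute_spend = 200e9      # ~$200B/year on AI compute (assumption)
future_revenue = 20e12             # "tens of trillions of dollars" in revenue
reinvestment_fraction = 0.5        # half of revenue spent on compute (assumption)

future_compute_spend = future_revenue * reinvestment_fraction
scale_up = future_compute_spend / current_compute_spend
print(f"Compute scale-up at constant price-performance: ~{scale_up:.0f}x")
# -> ~50x; with improving price-performance per dollar, 100x is within reach.
```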
> early in the takeoff, AIs may look significantly more like large-scale human labor (especially if they are still largely managed by humans). If existing returns are insufficient for a takeoff, that should update us against a software-only takeoff
In my model, the relevant part of a software-only takeoff only starts once the AIs become capable of accumulating or scaling cognitive factors of production that can’t be notably scaled for humanity in the relevant timeframes. Thus observing humanity, or transferring lessons across the analogy between human labor and early AI labor, won’t inform us about these (plausibly hidden and originally unknown) cognitive factors. Only observing the AIs that do start scaling or accumulating these factors becomes informative.
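A minimal toy sketch of the distinction, entirely my own illustration with arbitrary numbers: treat the cognitive factor of production as an input F to research output. For humanity, F is roughly fixed over the relevant timeframe; for AIs past the relevant threshold, some of each period’s output can be reinvested into growing F, which is what makes the dynamics compounding and what human-era returns can’t reveal:

```python
# Toy model: research progress with a fixed vs. an accumulable cognitive factor.
# The exponent and reinvestment rate are arbitrary illustrative assumptions.

def run(years, reinvest_rate):
    F = 1.0          # cognitive factor of production (normalized)
    progress = 0.0   # cumulative research progress
    for _ in range(years):
        output = F ** 0.7             # diminishing returns to the factor (assumption)
        progress += output
        F += reinvest_rate * output   # 0 for humans: F can't be scaled in time
    return progress

print("fixed factor (human-like):      ", round(run(20, reinvest_rate=0.0), 1))
print("accumulable factor (AI-capable):", round(run(20, reinvest_rate=0.5), 1))
# The second regime compounds; extrapolating from the first regime's returns
# says little about when, or whether, the second one kicks in.
```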
Even worse, if early AIs capable of replacing human labor don’t trigger a software-only singularity, that doesn’t mean some later advancement won’t. So all that can be observed is that a software-only singularity keeps failing to start at greater and greater levels of capability: each advancement can only rule out that it itself is sufficient, not that the next one might be. The probability goes down, but plausibly it should take a while to go way down, and by that point a possible singularity won’t be clearly software-only anymore.
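One way to see why the probability should fall only gradually, a minimal Bayesian sketch with made-up priors: spread prior mass over which capability level, if any, is the threshold for a software-only singularity. Each observed failure removes only the mass on that one level, so the posterior on “some later level suffices” declines slowly at first:

```python
# Hedged illustration: posterior that a software-only singularity is still coming,
# as successive capability levels fail to trigger it. All priors are assumptions.

levels = 10
p_no_threshold = 0.4                          # prior: no capability level suffices (assumption)
p_each_level = (1 - p_no_threshold) / levels  # uniform prior over which level is the threshold

for failed in range(levels + 1):
    remaining = (levels - failed) * p_each_level        # mass on levels not yet ruled out
    posterior = remaining / (remaining + p_no_threshold)
    print(f"after {failed} failed levels: P(software-only singularity) = {posterior:.2f}")
# The posterior drops a little with each failure and only goes way down near the end,
# by which point a takeoff, if it happens, wouldn't be clearly software-only anymore.
```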