A software-only singularity is about superintelligence, about getting qualitatively smarter than humanity, and it plausibly depends on there being a cognitive factor of production that humanity is almost unable to scale or accumulate, but AIs can. Looking at how humans are doing on that factor before its scaling gets unlocked for AIs wouldn't be useful. Knowing how long it took for COVID-19 to start, counting from some arbitrary prior year like 2010 when it didn't exist yet, won't tell you how quickly the infection will spread once it exists (starts scaling).
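A minimal formalization of the analogy (the symbols and the model are purely illustrative, not from the comment):

```latex
% Toy growth model: N(t) is the number of infections, or the amount of the
% scarce cognitive factor once AIs can scale it; T_d is its doubling time.
N(t) = N_0 \cdot 2^{\,(t - t_{\mathrm{start}})/T_d}, \qquad t \ge t_{\mathrm{start}}
% The interval before t_start (e.g. 2010 to t_start), when N = 0, carries no
% information about T_d, and T_d alone determines how fast N grows afterwards.
```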
This could be just sample efficiency: being able to come up with good designs or theories with much less feedback (experiments, including compute-expensive ones, or prior theory). But it could also be things like training novel cognitive skills that are not directly useful in themselves but that build up over time, like basic science, to produce something much more effective a million steps later. Or automated invention of conceptual theory (rather than proving technical results in framings humans have already come up with language for), at such a speed that it would've taken humans 1000 years to get there (without experiments), so that anything you might observe about human progress over the next 5 years would be utterly uninformative about how useful orders of magnitude more theoretical progress would be for AI design.
I agree that late in the singularity, AI workflows may be so different from humans’ that we learn very little from extrapolating from human returns to software R&D, but I expect that early in the takeoff, AIs may look significantly more like large-scale human labor (especially if they are still largely managed by humans). If existing returns are insufficient for a takeoff, that should update us against a software-only takeoff in general because it makes initial compounding less likely.
I also expect to observe relevant returns in the near future as AIs increasingly automate AI R&D (many of the points in the above post would include this). Early automation may give us some evidence about mid-takeoff dynamics.
My point is more that if there isn't some hidden cognitive bottleneck (which humans plausibly don't perceive as such, since it was always there, relatively immutable), then there wouldn't be a software-only singularity: it would take long enough that significantly more hardware will have time to come online even after AIs can act as human replacements in the labor market. There will be returns according to factors that can be tracked earlier, but they won't produce a superintelligence and take over the trees and such before tens of trillions of dollars in AI company revenues buy 100x more compute, compute that's now designed by these AIs but still largely follows existing methods and trends.
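A back-of-envelope sketch of this race (every rate and the size of the capability gap below are made-up illustrative numbers, not estimates from the discussion):

```python
# Compare how long slow software-only compounding takes to cover a large capability
# gap versus how long a hardware buildout takes to add ~100x compute.
import math

software_gain_per_year = 2.0   # assumed multiplier on effective research output per year
gap_ooms = 6.0                 # assumed orders of magnitude of improvement needed for superintelligence
hardware_gain_per_year = 2.5   # assumed growth in deployed compute per year during a buildout
hardware_target = 100.0        # the ~100x more compute mentioned above

years_software = gap_ooms * math.log(10) / math.log(software_gain_per_year)
years_hardware = math.log(hardware_target) / math.log(hardware_gain_per_year)

print(f"software-only route: ~{years_software:.0f} years to close {gap_ooms:.0f} OOMs")
print(f"hardware buildout:   ~{years_hardware:.0f} years to reach {hardware_target:.0f}x compute")
# Under these assumed rates the 100x hardware arrives first, which is the point above:
# without a hidden cognitive factor that AIs can scale much faster than humanity can,
# the singularity would not stay software-only.
```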
> early in the takeoff, AIs may look significantly more like large-scale human labor (especially if they are still largely managed by humans). If existing returns are insufficient for a takeoff, that should update us against a software-only takeoff
In my model, the relevant part of a software-only takeoff only starts once the AIs become capable of accumulating or scaling these cognitive factors of production that can't be notably scaled for humanity in the relevant timeframes. Thus observing humanity, or transferring learnings across the analogy between human labor and early AI labor, won't be informative about these (plausibly originally hidden and unknown) cognitive factors. Only looking at the AIs that do start scaling or accumulating these factors becomes informative.
Even worse, if the early AIs capable of replacing human labor don't trigger a software-only singularity, it doesn't mean some later advancement won't. So all that can be observed is that a software-only singularity doesn't start at greater and greater levels of capability; each advancement can only rule out that it itself is sufficient, not that the next one might be. The probability goes down, but plausibly it should take a while for it to go way down, and at that point a possible singularity won't be clearly software-only anymore.
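A minimal sketch of this updating dynamic (the 50% prior that a software-only singularity is possible at all, the uniform prior over thresholds, and the number of capability levels are all illustrative assumptions):

```python
# Toy Bayesian model: if a software-only singularity is possible at all, it triggers at
# some unknown capability threshold k in 1..N. Observing that levels 1..m failed to
# trigger it only removes the prior mass on k <= m, so the posterior declines gradually.

p_possible = 0.5                     # assumed prior that a software-only singularity is possible at all
N = 20                               # assumed number of capability levels before hardware scaling dominates
prior_per_level = p_possible / N     # uniform prior over which level is the threshold

for m in range(0, N + 1, 5):
    surviving = p_possible - m * prior_per_level   # mass left on thresholds above level m
    p_impossible = 1 - p_possible
    posterior = surviving / (surviving + p_impossible)
    print(f"levels ruled out: {m:2d}  P(software-only singularity still ahead) = {posterior:.2f}")
```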