a story that’s in conflict with itself
The story involves phase changes. Mere scaling is what's likely to be available to human developers in the short term (a few years), but it's not enough for superintelligence. Autonomous agency secures funding for a bit more scaling. If this proves sufficient to get smart autonomous chatbots, they then provide the speed to very quickly reach the more elusive AI research needed for superintelligence.
It's not a little speed, it's a lot of speed: a serial speedup of about 100x, plus running in parallel. This is not as visible today, because current chatbots are not capable of doing useful work with serial depth, so the serial speedup is not in practice distinct from throughput and cost. But with actually useful chatbots it turns decades into years; software and theory from the distant future become quickly available, and non-software projects get designed in perfect detail faster than they can be physically assembled.
See "Zero Sum" is a misnomer: shifting and rescaling of utility functions breaks formulations that simply ask to take a sum of payoffs, but we can rescue the concept to mean that all outcomes/strategies of the game are Pareto efficient.
"Positive sum" seems to be about Kaldor-Hicks efficiency: strategies where in principle there is a post-game redistribution of resources that would make them Pareto efficient, even though there is no commitment to, or possibly even practical feasibility of, actually performing the redistribution. This hypothetical redistribution step takes care of comparing utilities of different players. A whole game/interaction/project would then be "positive-sum" if each of its outcomes/strategies is equivalent to some Pareto efficient strategy via a redistribution.
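The fragility of "sum of payoffs" versus the robustness of Pareto efficiency can be checked directly. Below is a minimal sketch (my own illustration, not from the post; `rescale` and the matching-pennies payoffs are assumptions for the example): applying a positive affine transform to one player's utilities, which represents the same preferences, destroys the "payoffs sum to zero" property but leaves the set of Pareto efficient outcomes untouched.

```python
# Matching pennies, a canonical zero-sum game.
# Payoffs are (row player, column player) for each pair of moves.
game = {
    ("H", "H"): (1, -1),
    ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1),
    ("T", "T"): (1, -1),
}

def rescale(u, scale=3.0, shift=5.0):
    # Positive affine transform: represents the SAME preferences,
    # since expected-utility comparisons are invariant under it.
    return scale * u + shift

# Shift and rescale only the column player's utility function.
rescaled = {o: (a, rescale(b)) for o, (a, b) in game.items()}

def pareto_efficient(payoffs):
    # An outcome is Pareto efficient if no other outcome is at least
    # as good for both players and strictly better for at least one.
    pts = set(payoffs)
    return {p for p in pts
            if not any(q != p and all(qi >= pi for qi, pi in zip(q, p))
                       for q in pts)}

# "Payoffs sum to zero" breaks under the rescaling...
sums_before = {a + b for a, b in game.values()}     # {0}
sums_after = {a + b for a, b in rescaled.values()}  # no longer constant

# ...but "every outcome is Pareto efficient" survives it.
eff_before = pareto_efficient(game.values())
eff_after = pareto_efficient(rescaled.values())
```

Here every outcome of matching pennies is Pareto efficient both before and after the transform, which is exactly the rescued sense of "zero-sum": no outcome can be improved for one player without hurting the other.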