> First, the headline claim in your posts is not usually “AI can’t takeoff overnight in software”, it’s “AI can’t reach extreme superhuman levels at all, because humans are already near the cap”.
Where do I make this headline claim? I certainly don’t believe that; see the speculation here on the implications of reversible computing for cold dark ET.
> If you were arguing primarily against software takeoff, then presumably you wouldn’t need all this discussion about hardware at all (e.g. in the “Brain Hardware Efficiency” section of your Contra Yudkowsky post), it would just be a discussion of software efficiency.
The thermodynamic efficiency claims are part of EY’s model and a specific weakness of it. Even if pure software improvement on current hardware were limited, in EY’s model the AGI could potentially bootstrap a new nanotech-assembler-based datacenter.
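For concreteness, here is a minimal back-of-envelope sketch of the thermodynamic bound at issue (the Landauer limit). The temperature and the per-synaptic-event energy are illustrative assumptions for this sketch, not figures taken from either post:

```python
import math

# Landauer limit: minimum energy required to erase one bit at temperature T.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate brain temperature in K (assumption)

landauer_j_per_bit = k_B * T * math.log(2)
print(f"Landauer bound: {landauer_j_per_bit:.2e} J/bit")  # ~3.0e-21 J

# Illustrative comparison: assume ~1e-14 J per synaptic event
# (a rough placeholder; the actual estimate is argued in the posts).
synaptic_event_j = 1e-14
print(f"Gap to Landauer: ~{synaptic_event_j / landauer_j_per_bit:.0e}x")
```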
> And your arguments about software efficiency are far weaker,
The argument for brain software efficiency, in essence, is that my model correctly predicted the success of prosaic scaling well in advance, and that the scaling laws combined with brain efficiency suggest limited room for software efficiency improvement (though not zero; I anticipate some).
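As a rough illustration of the shape of that scaling-law argument, here is a sketch using the Chinchilla parametric loss fit from Hoffmann et al. (2022). The coefficients are that paper’s published fit; treating the irreducible term as a cap on software gains is only a caricature of the full argument:

```python
# Chinchilla parametric loss fit: L(N, D) = E + A/N^alpha + B/D^beta
# Coefficients from Hoffmann et al. (2022); E is the irreducible loss.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x of scale buys a shrinking slice of the remaining loss, and
# no amount of scale pushes below E -- the flavor of the limited-headroom claim.
for n, d in [(70e9, 1.4e12), (700e9, 14e12), (7e12, 140e12)]:
    print(f"N={n:.0e}, D={d:.0e}: L={loss(n, d):.3f}")
```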
> If a slightly-smarter-than-human AI (or multiple such AIs working together, more realistically) could design dramatically better hardware on which to run itself and scale up, that would be an approximately-sufficient condition for takeoff.
Indeed, and I have presented a reasonably extensive review of the literature indicating this is very unlikely in any near-term time frame. If you believe my analysis is in error, comment there.