There’s significant weight on logarithmically diminishing returns, such that the things that are stronger than us never get so much stronger that we have no hope of understanding what they’re doing.
If autonomous-research-level AGIs are still 2 OOMs faster than humans, that leads to massive scaling of hardware within years even if they aren’t smarter, at which point we’re dealing with minds the size of cities. So the probable path to a weak takeoff is a slow AGI that doesn’t get faster on near-future hardware, and that, being slow, won’t soon help scale hardware either.