A frame has premises, and then there are the arguments actually presented within that frame. Stating disagreement with the premises is different from discussing the arguments, especially in the ITT mode, where you try to channel the frame.
It seems clear to me that Hanson doesn't expect SquiggleBots, and he wasn't presenting arguments on that point; it's a foundational assumption of his whole frame. It might have a justification in his mind, but it's out of scope for the talk. There are some clues: multiple instances of expecting what I would consider philosophical stagnation even in the glorious grabby mode, or perhaps an unusual confidence that claims which are currently rather informal would remain robust under scrutiny by the Future. This seems to imply not expecting superintelligence that's strong in the senses I expect it to be strong: capable of sorting out all the little things, not just of taking on galaxy-scale projects.
One point that I think survives his premises when transcribed into a more LW-native frame is value drift/evolution/selection as an important general phenomenon: it applies to societies with no AIs, and it isn't addressed by AI alignment for societies with AIs. A superintelligence might sort it out, just as it might fix aging. But regardless of that, not noticing that value drift is a problem, or that it's a thing at all, would be an oversight similar to not noticing that aging is a problem.