I would define hard takeoff as “progress in cognitive ability from pretty-low-impact AI to astronomically high-impact AI is discontinuous, and fast in absolute terms”.
Unlocking a technology that lets you kill other powerful optimizers (e.g., nanotech) doesn’t necessarily require fast or discontinuous improvements to systems’ cognition. E.g., humans invented nuclear weapons just via accumulating knowledge over time; the invention wasn’t caused by us surgically editing the human brain a few years prior to improve its reasoning. (Though software improvements like ‘use scientific reasoning’, centuries prior, were obviously necessary.)