It’s an argument that long reasoning traces have sufficient representational capacity to bootstrap general intelligence, not a forecast that the bootstrapping will actually occur. It’s about a necessary condition for straightforward scaling to have a chance of getting there, at an unknown level of scale.