It’s an argument that long reasoning traces have sufficient representational capacity to bootstrap general intelligence, not a forecast that the bootstrapping will actually occur. It’s about a necessary condition for straightforward scaling to have a chance of getting there, at an unknown level of scale.
It might go that way, but I don’t see strong reasons to expect it.