Those implications are only correct if we remain at subhuman data efficiency for an extended period. In AI 2027 the AIs reach superhuman data efficiency by roughly the end of 2027 (it's part of the package of being superintelligent), so there isn't enough time for the implications you describe to play out. Basically, in our story the intelligence explosion gets started in early 2027 with very data-inefficient AIs, but reaches superintelligence by the end of the year, solving data efficiency along the way.
In that case, “2027-level AGI agents are not yet data efficient but are capable of designing successors that solve the data efficiency bottleneck despite that limitation” seems pretty cruxy.
I probably want to bet against that. I will spend some time this weekend thinking about how that could be operationalized, and in particular trying to find a version of the claim on which we could get evidence before 2027.