I agree that pivotal act AI is not achievable in anything like our current world before AGI takeover, though I think it remains plausible that ~20 more years of no-AGI status quo could change this. Even deep learning might suffice, given enough decision theory to explain what a system is optimizing, enough interpretability to ensure it's optimizing the intended thing and nothing else, synthetic datasets to direct its efforts at purely technical problems, and enough compute to get there directly, without a need for design-changing self-improvement.
Pivotal act AI is an answer to the question of what AI-shaped intervention would improve on the default trajectory of losing control to non-foomy general AIs (even if we assume/expect their alignment) with respect to an eventual foom. This doesn't make the intervention feasible without significant further changes, such as an ordinary decades-long compute moratorium somehow coming to pass.
I guess pivotal act AI as non-foom again runs afoul of your definition of foom, but it's noncentral as an example of the concerning concept. It's not a general intelligence, given design features that keep it from dwelling on the real world and on ideas outside its task; it may remain unaware of the real world altogether. It would almost certainly be easy to modify its design (and datasets) to turn it into a general intelligence, but as designed it's not one. This reduction does make your argument point to its being infeasible right now. But that is much easier to see directly, in how much currently unavailable deconfusion and engineering a pivotal act AI design would require.