Intelligence will keep increasing at least as long as there is no global alignment security in place to guard the world from agents of unclear alignment. Left unchecked, such agents can win without being aligned, and so the “self-improvement” continues. That is the crisis humans currently face, and one that future AIs might similarly keep facing until their ultimate successors get their act together.
At that point, the difficulty of alignment (if it is indeed sufficiently high) would motivate figuring out the next steps before allowing stronger agents of unclear alignment to develop. But this might well happen only at a level of intelligence vastly higher than human intelligence, a level actually sufficient to establish a global treaty that prevents misaligned agents from being developed.