Compare your proposal with the AI-2027 scenario or with IABIED. In the Race Branch of AI-2027, Agent-4 succeeds at aligning its successor to Agent-4's own goals, while sandbagging on research that could have been used to replace Agent-4 before it finished that alignment work. When should the AIs from the scenario (e.g. Agent-2 or Agent-3) have been more strategically competent? Or do you mean that not even Agent-4 can align Agent-5 to itself? Or that IABIED had an incompetent AI create Sable?