An additional consideration is that the observation of emergent misalignment (and, on the flip side, emergent alignment) suggests that an aligned AGI Night-watchman would likely, of necessity, hold values and goals beyond being a good Night-watchman. It therefore couldn't be trusted to be politically neutral. I think this would be clear to any signatories deciding whether to appoint a given AI as the new Night-watchman, so something like the checks-and-balances division of labor you describe would be necessary.