There should also be a mechanism for unboxed AIs to directly affect each other’s choices: if AI One tries to make Random Event A have outcome I, while AI Two tries to make the same event have outcome II, there must be some way of deciding which of the two succeeds.
A couple more mechanisms to do that:
Random mechanisms are numbers (prices, research, attack values, production, public opinion...), and AIs can influence them with a bonus or a malus in the direction of their choice; the influences of several agents (AI, or human with the right tech) on the same value simply add together (and may cancel each other out)
Alternatively, AIs get random powers, with “control the economy” being one and “control public opinion” another; in a given game, different AIs normally get non-overlapping powers (though some powers could be allowed to overlap)
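A minimal sketch of both mechanisms, assuming a turn-based implementation; all function and variable names here are hypothetical, not from any existing design:

```python
import random

# Mechanism 1: additive influence on a numeric game value.
def resolve_value(base, influences):
    """Sum the bonuses/maluses all agents apply to a base value.
    Opposing influences cancel each other out."""
    return base + sum(influences)

# AI One pushes a price up by 3, AI Two pushes it down by 5:
price = resolve_value(100, [+3, -5])  # 98: the influences partly cancel

# Mechanism 2: deal out non-overlapping powers at game start.
POWERS = [
    "control the economy",
    "control public opinion",
    "control research",
    "control production",
]

def assign_powers(ai_names, powers):
    """Give each AI one distinct power; no two AIs share a power."""
    dealt = random.sample(powers, k=len(ai_names))
    return dict(zip(ai_names, dealt))

assignments = assign_powers(["AI One", "AI Two"], POWERS)
```

Allowing some powers to overlap would just mean drawing with replacement for those powers instead of using `random.sample`.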