I think this is only going to work in very limited scenarios. What kind of event is X? Is the uncertainty about it logical or indexical? If it’s logical, then given enough computing resources each agent can condition its action on the occurrence of the other agent’s event. If it’s indexical, then the agents can cooperate by each agent physically creating an event of the other agent’s type in its own physical vicinity.
I am confused by the apparent assumption that you can control the utility functions of both agents. If you only worry about cooperation between agents you construct yourself, then it’s possible you can use this kind of method to prevent it, at least as long as the agents don’t break out of the box. However, if you worry about your AI cooperating with agents in other universes, then the method seems ineffective.