(ISTM the optimal amount of information to reveal would be zero in the zero-sum-game limit and, neglecting the cost of communication itself etc., everything you know in the identical-payoff-matrices limit.)
Interestingly, ISTM that is itself a Prisoner’s Dilemma: the agent that doesn’t reveal its (true) preferences has a much, much better chance of manipulating an agent that does.
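A minimal sketch of why this has PD structure, with entirely hypothetical payoff numbers: concealing strictly dominates (the concealer can manipulate the revealer), yet mutual revelation beats mutual concealment.

```python
# Illustrative payoff matrix (hypothetical numbers) for the "reveal
# preferences" game. Each agent chooses Reveal ("R") or Conceal ("C");
# entries are (row player's payoff, column player's payoff).
payoffs = {
    ("R", "R"): (3, 3),  # both reveal: efficient coordination
    ("R", "C"): (0, 5),  # revealer gets manipulated by concealer
    ("C", "R"): (5, 0),
    ("C", "C"): (1, 1),  # mutual concealment: costly mutual opacity
}

def best_response(opponent_move):
    """Row player's best reply against a fixed column-player move."""
    return max(("R", "C"), key=lambda m: payoffs[(m, opponent_move)][0])

# Concealing is a dominant strategy...
assert best_response("R") == "C"
assert best_response("C") == "C"
# ...yet the resulting (C, C) outcome is Pareto-dominated by (R, R),
# which is exactly the Prisoner's Dilemma structure.
assert payoffs[("R", "R")][0] > payoffs[("C", "C")][0]
```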