Yes, that is the sort of example I meant, though of course this particular example does not prove that Catan itself has situations like this.
Based on his other reply, I expect James would want to point out that there is an equivalent equilibrium where player A, instead of saying “button N is blue”, says “either button N is blue or no button is”, which produces the same outcome without technically lying.
I’m coming to think that there should be some other distinction we can draw that rhymes with the truthful/lying distinction but that talks about consequences instead of semantics, and therefore can’t be dodged by relabeling the signals. Still thinking about it.
I would describe a critical try as one where the act of trying is likely to prevent further attempts. Launching an ASI is a critical try because the ASI itself would likely be able to stop you from launching more ASIs later on (e.g. by killing you).
If it’s possible to send out missions to intercept the asteroid before it arrives, then it seems to me that the asteroid is better understood as a time limit than as a critical try. You could set the parameters of the asteroid scenario in such a way that you have time for exactly one try, but you could also set the parameters so that you have time to send up a mission to deflect the asteroid, observe its results, and then make a second try before the asteroid arrives. You could also set the parameters such that you have time for zero tries! The key consideration is how fast you can work vs how much time you have.
Contrariwise, suppose you are stopping the asteroid with a shield close to the earth, such that no matter how fast you build the shield, you have to wait for the asteroid to arrive before you can see how well it works. Then I’d call that a critical try, because the part of the plan where you wait for the asteroid to arrive severely depletes a critical resource (time) and makes that resource unavailable for later attempts. (Note the similarity to the Maginot Line.)
By similar reasoning, I’d say your #2 (global warming) is also more of a time limit, but your #3 (creating a new type of human that potentially kills you) is a critical try (though compared to launching an ASI, it’s more likely to get a middle-ground outcome).