I have a question on a topic sufficiently adjacent that I reckon it is worth asking here, among those likely to read this thread.
It seems that warning shots are likely to fail because of a winner's-curse dynamic: the first models to take a shot will be those that have most badly overestimated their chances, and that overestimation in turn correlates with weaker intellectual capabilities.
Has there been any illuminating discussion of this and its downstream consequences? E.g., how are such shots and their aftermath likely to be perceived in practice: by the general public, by the better-informed, and, in the context of this post, by competing AIs? What dynamics result?
Guess: it also helps to go meta.
I am a reader, not a writer. But I sure seem to have read and enjoyed an unusual number of posts about experiences of writing.