I don’t think this analogy holds up, on multiple levels. As far as I know, there isn’t some known probability that scaling laws will continue to hold as new models are released. While it’s true that a new model continuing to follow scaling laws is additional evidence that future models will also follow them, and thus shortens timelines, it’s not really clear how strong that evidence would be.
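To make that concrete, here’s a minimal sketch of the kind of update I have in mind, with entirely made-up numbers; the prior and both likelihoods below are illustrative assumptions, not values I’d defend:

```python
# Toy Bayesian update: how much does one new model following scaling laws
# shift belief that scaling laws will keep holding? All numbers are made up.

prior = 0.6           # assumed prior P(scaling laws keep holding)
p_obs_if_hold = 0.95  # assumed P(new model follows scaling laws | they keep holding)
p_obs_if_not = 0.50   # assumed P(new model follows scaling laws | they don't)

# Bayes' rule: P(hold | new model follows scaling laws)
evidence = p_obs_if_hold * prior + p_obs_if_not * (1 - prior)
posterior = p_obs_if_hold * prior / evidence

print(f"prior: {prior:.2f}, posterior: {posterior:.2f}")
# With these numbers the update goes from 0.60 to ~0.74 -- a real shift, but
# its size depends entirely on the likelihoods, which is exactly the part
# that seems unclear to me.
```

The point is just that the strength of the update hangs on the likelihood ratio, and I don’t see where a well-grounded value for that would come from.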
This matters because, unlike a coin flip, there are a lot of other details of a new model release that could plausibly affect someone’s timelines. A model’s capabilities are complex, and human reactions to them likely more so; none of that is captured by a yes/no answer to whether it’s better than the previous model or follows scaling laws.
Also, your analogy shifts the question away from the one in the original comment: it asks whether the new model follows scaling laws rather than just whether it’s better than the previous one (it seems to me a model could be better than its predecessor yet still markedly underperform what scaling laws would predict).
If there are any obvious mistakes I’m making here, I’d love to know; I’m still pretty new to the space.
One flaw in the setup is that the person opposing you could generate a random sequence beforehand and simply follow it when choosing options in the “game.” I assume the offer to play the game is no longer available and/or that you wouldn’t knowingly choose to play it against someone using this strategy, but if you would, I’ll take the $25.
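For concreteness, here’s a minimal sketch of the pre-commitment strategy I mean, assuming purely for illustration that each round is a binary choice over a fixed number of rounds; the option labels and round count are hypothetical, since I don’t know the game’s exact rules:

```python
import secrets

# Pre-commit to a random sequence of choices, then just play it back each turn.
OPTIONS = ("left", "right")  # hypothetical option labels
NUM_ROUNDS = 100             # hypothetical game length

precommitted = [secrets.choice(OPTIONS) for _ in range(NUM_ROUNDS)]

def choose(round_index: int) -> str:
    """Ignore everything the opponent does; just follow the fixed sequence."""
    return precommitted[round_index]
```

Since the sequence is fixed before play starts and ignores everything the opponent does, there’s nothing to predict or exploit beyond the random source itself.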