You are overstating the case by a large margin.
[Saying “I don’t know”] is still the rational thing to say when, in fact, you don’t know.
Saying “I don’t know” may well be the true state of your belief when you use probability theory. But it’s not the rational thing to say when you use decision theory. “I don’t know” is true, but it is a non-answer to the question, and it doesn’t get you points. Whether this point system is effective or moral is a separate matter, but as long as it’s there, that’s what you play by.
You can easily do worse than maximum entropy if you guess at random.
If you base your guess on an incomplete model of reality, correctly constructed from past observations, you can never do worse, on average, than maximum entropy. More evidence can never lead to less information (per the Data Processing Inequality).
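This can be illustrated with a toy calculation (a sketch; the joint distribution below is invented purely for illustration). A guesser who conditions correctly on the available evidence has an expected log-score at least as high as the maximum-entropy guesser who answers uniformly at random:

```python
import math

# Hypothetical joint distribution p(evidence, answer), for illustration only.
# The "answer" has three options; the "evidence" is a signal correlated with it.
p = {
    ('e0', 'A'): 0.30, ('e0', 'B'): 0.05, ('e0', 'C'): 0.05,
    ('e1', 'A'): 0.10, ('e1', 'B'): 0.30, ('e1', 'C'): 0.20,
}
answers = ['A', 'B', 'C']

def expected_log_score(predict):
    # Average log-probability assigned to the true answer, over the joint.
    return sum(pr * math.log(predict(e)[a]) for (e, a), pr in p.items())

# Maximum-entropy strategy: ignore the evidence, spread belief uniformly.
def maxent(e):
    return {a: 1 / 3 for a in answers}

# Calibrated strategy: condition on the evidence, i.e. use p(answer | evidence).
def calibrated(e):
    pe = sum(p[(e, a)] for a in answers)
    return {a: p[(e, a)] / pe for a in answers}

print(expected_log_score(maxent))      # log(1/3) ≈ -1.0986
print(expected_log_score(calibrated))  # ≈ -0.901, strictly better
```

The gap between the two scores is exactly the mutual information between evidence and answer, which is never negative: conditioning on evidence can only help on average.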
Furthermore, “getting it right” [...] does not necessarily mean that you possess any anticipation-controllers.
On the contrary, it means exactly that. Being rewarded for predictive power improves your model of the world, whereas “I don’t know” is an excuse for not knowing.
In fact, the mechanism employed by the teacher, for all its flaws, achieves 3 important goals:
1. It motivates students to pay attention, raises their level of alertness, and activates their brains.
2. It rewards students for engaging their past observations to generate the most accurate belief they can about the right answer. In the process, they build a better model of the world, and they make their unknown unknowns a little more known.
3. By forcing students to generate a belief and commit to it before the correct answer is revealed, it reduces their hindsight bias.
Predictions are justified not by becoming a reality, but by the likelihood of their becoming a reality [1]. When this likelihood is hard to estimate, we can take their becoming a reality as weak evidence that the likelihood is high. But in the end, after counting all the evidence, it’s really only the likelihood itself that matters.
If I place a bet on you to win and this happens, I’ll happily collect my prize, but I’ll still feel that I put my money on the wrong athlete. My prior and the signal are rich enough for me to deduce that your victory, although factual, was unlikely. If I believed that you were likely to win, then my belief wasn’t “true for the wrong reasons”; it was simply false. And if I believed that “you will win” (no probability qualifier), then in the many universes where you didn’t, I’m in Bayes Hell.
Conversely, in the other example, your winning is again not the best evidence for its own likelihood; your scoring 14 touches is. My belief that you’re likely to win is true and justified for the right reasons: you’re clearly the better athlete.
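The claim that, in the end, only the likelihood matters can be made concrete with a proper scoring rule. Under the log score, your expected reward is maximized by reporting the true probability, whatever the outcome turns out to be; the 30% winning chance below is a hypothetical number for illustration:

```python
import math

def expected_log_score(reported, true_prob):
    # Expected log score of reporting probability `reported` for an event
    # that actually occurs with probability `true_prob`.
    return true_prob * math.log(reported) + (1 - true_prob) * math.log(1 - reported)

true_prob = 0.3  # hypothetical: the athlete's real chance of winning
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=lambda r: expected_log_score(r, true_prob))
print(best)  # 0.3 -- reporting the true probability maximizes the expected score

# Reporting "you will win" with no qualifier amounts to reporting probability 1:
# as `reported` approaches 1, the expected score diverges to minus infinity
# whenever true_prob < 1 -- the Bayes Hell of the unqualified prediction.
```

The log score is "proper" in exactly this sense: no report other than your true belief does better in expectation, so a belief is vindicated by its likelihood, not by the single outcome you happened to observe.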
[1] Where likelihood is measured either given what I know, or what I could know, or what anybody could know—depending on why we’re asking the question in the first place.