True, but that's usually a very artificial context. Often, when someone claims to know the probabilities accurately enough, they are mistaken or lying.
There is one other explanation for the results of those experiments.
In the real world, it's quite uncommon for somebody to tell you exact probabilities; you need to infer them from the situation around you. And we humans pretty much suck at assigning numeric values to probabilities. When I say 99%, it probably means something like 90%. When I say 90%, I'd guess the reality is closer to 70%.
But that doesn't mean that people behave irrationally. If you view the proposed scenarios through this lens, they look more like:
a) Certainty of a million, or a ~60% chance of getting 5 million.
b) A slightly higher probability of getting a million, but the difference is much smaller than the error in the probability estimates themselves.
With this in mind, the actual behaviour of people makes much more sense.
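A minimal numeric sketch of this reading (the deflation factor, payoffs, and stated probabilities below are hypothetical illustrations I picked for the example, not data from the experiments):

```python
# Toy model of the "stated probabilities get discounted" reading.
# The deflation factor and all numbers below are made-up illustrations.

def perceived(p_stated: float, deflation: float = 0.75) -> float:
    """Discount any stated probability short of certainty."""
    return 1.0 if p_stated == 1.0 else p_stated * deflation

# Scenario (a): a certain million vs. a stated 80% shot at five million.
print(perceived(1.00))  # 1.0   -- certainty survives intact
print(perceived(0.80))  # ~0.6  -- a stated "80%" is perceived as ~60%

# Scenario (b): stated 11% at one million vs. stated 10% at five million.
# After discounting, the 1-point stated gap is smaller than any plausible
# estimation error, so treating the two odds as equal isn't crazy.
print(perceived(0.11) - perceived(0.10))  # ~0.0075
```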
And what about this argument:
As civilisation progresses, it becomes increasingly cheap to destroy the world, to the point where any lunatic can do so. It might be that the laws of physics make it much harder to protect against destruction than to cause it; this actually seems to be the case with nuclear weapons.
Surely, there are at least one-in-a-million people in the world right now who would choose to destroy it all if they could (with roughly eight billion people, that's on the order of eight thousand would-be destroyers).
It might be that we reach this level of knowledge before we manage to travel between solar systems.
Very simple. To prove it for an arbitrary number of values, you just need to prove that h_i being true increases its expected “probability to be assigned” after measurement, for each i.
If you define T as h_i and F as NOT h_i, you have reduced the problem to the two-value version.
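Spelled out a bit (standard Bayesian notation; the displayed formulas are my gloss, not part of the original argument): lumping the alternatives into F gives it the mixture prior and mixture likelihood

$$P(F)=\sum_{j\neq i}P(h_j),\qquad P(d\mid F)=\frac{\sum_{j\neq i}P(h_j)\,P(d\mid h_j)}{\sum_{j\neq i}P(h_j)},$$

and applying the two-value result to this (T, F) pair is exactly the claim for h_i.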
There is actually a much easier and more intuitive proof.
For simplicity, let's assume H takes only two values, T (true) and F (false).
Now, let's assume that God knows that H = T, but the observer (me) doesn't. If I now make a measurement of some dependent variable D with value d_i, I'll either:
1. Update my probability of T upwards if d_i is more probable under T than in general.
2. Update my probability of T downwards if d_i is less probable under T than in general.
3. Not change my probability of T at all, if d_i is exactly as probable under T as in general.
(“In general” here means without knowledge of whether T or F happened, i.e. assuming the observer's prior probabilities.)
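These three cases are just Bayes' theorem read off directly (standard notation, not in the original comment):

$$P(T \mid d_i) = \frac{P(d_i \mid T)}{P(d_i)}\,P(T),$$

so the direction of the update is simply the sign of P(d_i | T) − P(d_i).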
The law of conservation of expected evidence tells us that in general (i.e. under the prior), the expected change in the assigned probability of T is 0. However, if H = T, then the measurements that update the probability of T upwards are more likely under T than in general, and those that update it downwards are less likely. Thus the expected change in the assigned probability of T is > 0 when T is true.
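For completeness, here is the same argument as a two-line computation (a sketch assuming a discrete measurement D; the Cauchy–Schwarz step is my addition, not the comment's). Under the prior,

$$\mathbb{E}_{d \sim P(\cdot)}\big[P(T \mid d)\big] = \sum_d P(d)\,\frac{P(d \mid T)\,P(T)}{P(d)} = P(T)\sum_d P(d \mid T) = P(T),$$

which is conservation of expected evidence, while conditioning on T gives

$$\mathbb{E}_{d \sim P(\cdot \mid T)}\big[P(T \mid d)\big] = P(T)\sum_d \frac{P(d \mid T)^2}{P(d)} \;\ge\; P(T)\Big(\sum_d P(d \mid T)\Big)^{2} = P(T)$$

by Cauchy–Schwarz, with equality only when P(d | T) = P(d) for every d, i.e. when the measurement carries no information about T at all.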