It might be good to see an example worked out correctly; all we see here is an incorrect example.
Erhannis Kirran
I feel like this page, in particular the example, is suggesting that “if the probability of X is > 50%, act as though X were true.” This is clearly (to me) untrue, and/or unwise. For instance, suppose you have a box, and a priori there is a 90% chance it contains $10, and a 10% chance it contains a letterbomb. The odds are clearly in favor of $10, but I’d question the sanity of anybody who tried to open the box (without precautions, extra checks, etc.). AFAICT, even when X is probably true, cost/benefit can change what the best course of action is. It seems like this deserves at least a mention.
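To make the point concrete, here's a quick expected-value sketch of the box example. All the payoff numbers (the cost of the bomb, the cost of precautions) are made up for illustration; the point is only that a 90%-likely good outcome doesn't by itself settle which action is best.

```python
# Made-up payoffs for the box scenario: 90% chance of $10, 10% chance
# of a letterbomb. The bomb's cost and the precaution cost are invented
# for illustration.
P_MONEY, P_BOMB = 0.90, 0.10

def expected_value(payoffs):
    """Probability-weighted average payoff of an action."""
    return P_MONEY * payoffs["money"] + P_BOMB * payoffs["bomb"]

actions = {
    "open bare-handed":      {"money": 10,     "bomb": -100_000},
    "open with precautions": {"money": 10 - 1, "bomb": -50},
    "walk away":             {"money": 0,      "bomb": 0},
}

for name, payoffs in actions.items():
    print(f"{name}: EV = {expected_value(payoffs):,.2f}")
```

Even though "the box contains $10" is 90% probable, opening bare-handed has a strongly negative expected value here, and precautions win: probability alone doesn't determine the best action once the stakes are asymmetric.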
Actually, both pairs differ by the same factor of about 1.85256. (Same for base 100k.) (In the first case, the evidence is around 9, rather than 8, which likely accounts for the difference you got.) I’m actually a little surprised this is true—log-form is used in order to deal with things additively, rather than multiplicatively, so I wouldn’t have guessed without calculation that changing the base would preserve proportions. (And I’d still want to do some math to confirm this in general.)
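As a sanity check on the base-invariance claim: the difference between two log-odds values does change with the base, but the multiplicative factor it implies (base raised to that difference) does not, because converting between bases just rescales all log-odds by a constant. A short sketch, using the ~1.85256 factor from above as a made-up pair of odds:

```python
import math

# Hypothetical pair of odds chosen so their ratio is ~1.85256;
# the specific values are made up.
odds_a = 9.0
odds_b = 9.0 / 1.85256

for base in (10, 100_000, math.e):
    la = math.log(odds_a, base)
    lb = math.log(odds_b, base)
    diff = la - lb              # this number depends on the base...
    factor = base ** diff       # ...but the implied factor does not
    print(f"base {base}: log-odds diff = {diff:.6f}, factor = {factor:.5f}")
```

The printed factor is the same (~1.85256) for every base, which is why changing the base preserves the proportions between pairs.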
Note: easier to grasp than natural logarithms. Both log base e and log base 10 behave in basically the same way, and have the same arguable pitfall. However, note that this behavior is, I think, on purpose, and intended to convey the idea that 10x more probable is 10x more probable, no matter where you are on the scale of improbability. (You just have to remember where you are on the scale before making your final judgements.) Consider the opposite problem raised by the 0-1 scale, as discussed earlier in the page: on that scale, the distance from 0.00001% to 1% is the same as the distance from 10.00001% to 11%; but intuitively, 0.00001% is clearly much more different from 1% than 10.00001% is from 11%. This is the non-intuitiveness that the log-form is intended to fix, I think.
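The two pairs above can be checked numerically: they are the same distance apart on the 0-1 scale, but very different distances apart in log-odds. A sketch using decibels (10 times the base-10 log of the odds) as the log-odds unit:

```python
import math

def log_odds_db(p):
    """Log-odds of probability p, in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# Both pairs are exactly 0.0099999 apart on the 0-1 scale.
pairs = [(0.0000001, 0.01),    # 0.00001% vs 1%
         (0.1000001, 0.11)]    # 10.00001% vs 11%

for p, q in pairs:
    gap = abs(log_odds_db(q) - log_odds_db(p))
    print(f"{p:g} -> {q:g}: {gap:.2f} dB apart")
```

The first pair is separated by roughly 50 dB, the second by under half a decibel, which matches the intuition that 0.00001% is "much more different" from 1% than 10.00001% is from 11%.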
I feel like this highlights a property I have come to suspect, and I think may be under-emphasized: Bayesian reasoning does not tell you how probable your hypothesis is. It tells you how probable it is compared to the other options you’ve put on the table. Thinking of a new explanation can radically change things. … It looks like https://arbital.com/p/227/ has good information relevant to this.
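The renormalization effect is easy to demonstrate: a posterior computed by Bayes' rule is always relative to the hypotheses currently on the table, so adding a new hypothesis can sharply deflate an apparently dominant one. A minimal sketch with invented priors and likelihoods:

```python
def posterior(priors, likelihoods):
    """Posterior over the hypotheses *on the table*, by Bayes' rule."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two hypotheses, equal priors: H1 fits the data much better than H2,
# so H1 looks like a near-certainty...
print(posterior([0.5, 0.5], [0.9, 0.1]))

# ...until a third hypothesis that fits the data even better is added.
# H1's posterior drops below 50% without any new evidence arriving.
print(posterior([1/3, 1/3, 1/3], [0.9, 0.1, 0.99]))
```

Nothing about the data changed between the two calls; only the set of explanations under consideration did, which is exactly the "compared to the other options you've put on the table" point.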