Agreed—true propositions (or good actions) typically have more pro reasons than con reasons, and false propositions (or bad actions) typically have more con reasons. So when you start out with a high probability, you should expect to mostly discover more pro reasons (although in the fraction of cases where the proposition is actually false, you will instead find a large number of con reasons). When you assign a 50% chance to a proposition, you should still expect the new information to cluster in one direction; you just don't know which direction that will be. So 10 straight pro reasons isn't really a 1/1024 event, since the reasons aren't independent.
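To make this concrete, here's a quick sketch of why the reasons aren't independent: they're correlated through the shared truth value of the proposition. (The 0.9/0.1 likelihoods below are made up purely for illustration.)

```python
# Toy model: reasons are independent only *given* the truth value.
# Illustrative assumptions: 50% prior, and each new reason is "pro"
# with probability 0.9 if the proposition is true, 0.1 if false.
p_true = 0.5
q_pro_if_true = 0.9
q_pro_if_false = 0.1
n = 10

# Marginalize over the truth value; the 10 reasons are correlated
# unconditionally even though they're independent given truth/falsity.
p_ten_pro = p_true * q_pro_if_true**n + (1 - p_true) * q_pro_if_false**n

print(f"P(10 straight pro reasons) = {p_ten_pro:.4f}")  # ~0.174
print(f"Naive independence estimate = {0.5**n:.6f}")    # 1/1024 ~ 0.000977
```

Under these (made-up) numbers, 10 straight pro reasons happens about 17% of the time—roughly 180 times more often than the naive 1/1024 calculation suggests.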
Skewed distributions also seem like the typical case. If your prior probability is not 50% then one tail of the distribution for changes in probability will be longer than the other, so the distribution will be skewed (especially since it must balance out the long tail with probability mass on the other side to obey conservation of expected evidence). The natural units for strength of evidence are log-odds, rather than probability, which gives you another way to think about why the changes in probability are skewed when you don’t start at 50% (the same increase in log-odds gives a smaller probability increase at high probabilities).
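A quick numerical check of that last point—the same increase in log-odds gives a smaller probability increase at high probabilities:

```python
import math

def logit(p):
    """Log-odds of probability p (natural log)."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Probability corresponding to log-odds x."""
    return 1 / (1 + math.exp(-x))

# Apply the same one-nat boost in log-odds at different starting points:
for p in (0.5, 0.9, 0.99):
    p_new = inv_logit(logit(p) + 1.0)
    print(f"{p:.2f} -> {p_new:.4f} (change {p_new - p:+.4f})")
```

The same piece of evidence moves 0.5 to about 0.73 (+0.23), but 0.9 only to about 0.96 (+0.06) and 0.99 only to about 0.996 (+0.006)—which is the skew: starting above 50%, most updates are small moves up, balanced by rare large moves down.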
Part of the trick in avoiding confirmation bias (and the like) is figuring out which reasons should be independent. I’ll second JGWeissman’s recommendation of Eliezer’s post for its discussion of that issue.
I agree and have added a clarification at the bottom of the post.
I’d recommend rewriting the post to make it clearer. You go back and forth between beliefs and actions in the post, without always making it clear whether you’re describing similarities or differences between them, and you give examples for your exceptions but not for your general principle (until the clarification).
I added some more caveats in the text.