What’s going on with this failure of Bayes to converge?


There are circumstances (which might occur only with infinitesimal probability, which would be a relief) under which a perfect Bayesian reasoner with an accurate model and reasonable priors – that is to say, somebody doing everything right – will become more and more convinced of a very wrong conclusion, approaching certainty as they gather more data.

(Click through the notes on that post to see some previous discussion.)
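To make the setup concrete, here is a minimal simulation sketch (in Python, with made-up parameter values) of the ordinary, *non*-pathological case: a uniform prior over a finite grid of hypotheses that contains the truth. In this finite setting the posterior does concentrate on the truth; Freedman's pathology lives in an infinite-dimensional parameter space (all distributions on the positive integers), which is why no finite toy like this one can reproduce it.

```python
import numpy as np

# Toy illustration (NOT Freedman's construction): Bayesian updating over a
# finite grid of geometric distributions on {1, 2, 3, ...}. With finitely
# many hypotheses, one of which is true and has positive prior mass, the
# posterior concentrates on the truth. Freedman's counterexample requires
# an infinite-dimensional hypothesis space, so this sketch shows only the
# well-behaved baseline. Parameter values are arbitrary.

rng = np.random.default_rng(0)

true_p = 0.25                       # data ~ Geometric(0.25) on {1, 2, ...}
grid = np.linspace(0.05, 0.95, 19)  # candidate success probabilities
log_post = np.zeros_like(grid)      # uniform prior over the grid (in logs)

data = rng.geometric(true_p, size=2000)

for x in data:
    # log-likelihood of one observation under each hypothesis:
    # P(x | p) = (1 - p)^(x - 1) * p
    log_post += (x - 1) * np.log1p(-grid) + np.log(grid)

post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mode:", grid[post.argmax()])            # ~0.25
print("mass on truth:", post[np.isclose(grid, true_p)].sum())
```

Consistency here is what Doob's theorem promises for priors with countable support containing the truth; the trouble the linked post is asking about begins once the hypothesis space gets too big for that guarantee to bite.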

I have two major questions:

1. Does this exposition correctly capture Freedman’s counterexample?

2. If a uniform prior sometimes breaks down like this, what prior should I be using instead, and, more importantly, how do I arrive at that prior?