I will go ahead and answer your first three questions.
Objective Bayesians might have “standard operating procedures” for common problems, but I bet you that I can construct realistic problems where two Objective Bayesians will disagree on how to proceed. At the very least, the Objective Bayesians need an “Objective Bayesian manifesto” spelling out what the canonical procedures are.
For the “coin-flipping” example, see my response to RichardKennaway, where I ask whether you would still be content to treat the problem as coin-flipping if you had strong prior information on g(x).
MaxEnt is not invariant to parameterization, and I’m betting that there are examples where it works poorly. Far from being a “universal principle,” it ends up being yet another heuristic joining the ranks of asymptotic optimality, minimax, minimax relative to an oracle, etc. Not to say these are bad principles—each of them is very useful—but when and where to use them is still subjective.
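A minimal numeric sketch of the non-invariance claim (my own illustration, not from the original exchange): with no constraints, MaxEnt on a bounded interval yields the uniform density. But if we reparameterize a quantity θ on [0,1] as φ = θ², the MaxEnt-in-θ prior is no longer uniform in φ, even though running MaxEnt directly in φ would again give a uniform. The two parameterizations assign different probabilities to the same events:

```python
import numpy as np

rng = np.random.default_rng(0)

# MaxEnt on [0, 1] with no constraints: the uniform density.
# Apply it in the theta parameterization.
theta = rng.uniform(0.0, 1.0, size=100_000)

# Reparameterize: phi = theta**2, which is also supported on [0, 1].
phi = theta**2

# If MaxEnt were invariant to parameterization, phi would also be
# uniform, so P(phi < 0.25) would be 0.25. Instead,
# P(phi < 0.25) = P(theta < 0.5) = 0.5.
frac = np.mean(phi < 0.25)
print(frac)  # close to 0.5, not 0.25
```

So “apply MaxEnt” underdetermines the prior until you also choose a parameterization, which is exactly the subjective step being pointed at.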
It would be great if you could implement a Solomonoff prior. It is hard to say whether implementing an approximate algorithmic prior that doesn’t produce garbage is easier or harder than encoding the sum total of human scientific knowledge and heuristics into a Bayesian model, but I’m willing to bet that it is harder. (This third bet is not a serious bet; the first two are.)