Eliezer stated his point more precisely in the original post:
As a general principle, on any problem for which you know that a particular unrandomized algorithm is unusually stupid—so that a randomized algorithm seems wiser—you should be able to use the same knowledge to produce a superior derandomized algorithm.
I’d recommend engaging with that formulation of his point, rather than with Silas’s summary (which is what you’ve quoted).
My best guess at which uranium atom will decay next is the uniform distribution over all the atoms (unless, of course, some of them are being bombarded or are otherwise in asymmetric situations). If you focus your guess on a random one of the atoms, you'll do worse (in terms of Bayesian log-score) than my deterministic choice of maxentropy prior.
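A quick simulation can illustrate the log-score gap. The numbers here are my own (hypothetical) choices: 1000 identical atoms, and a "focused" guess that piles 50% of the probability on one randomly chosen atom.

```python
import math
import random

# Hypothetical setup: N identical atoms, each equally likely to decay next.
N = 1000
random.seed(0)

# Deterministic maxentropy prior: uniform probability on every atom.
uniform = [1.0 / N] * N

# "Focused" guess: pick one atom at random and pile 50% of the
# probability on it, spreading the rest uniformly over the others.
focus = random.randrange(N)
focused = [0.5 / (N - 1)] * N
focused[focus] = 0.5

# Average Bayesian log-score (log of the probability assigned to the
# atom that actually decayed; higher is better) over simulated decays.
trials = 100_000
u_total = f_total = 0.0
for _ in range(trials):
    decayed = random.randrange(N)  # the true process is uniform
    u_total += math.log(uniform[decayed])
    f_total += math.log(focused[decayed])

u_avg = u_total / trials  # exactly log(1/1000), about -6.91
f_avg = f_total / trials  # strictly worse (more negative) in expectation
print(u_avg, f_avg)
```

Note that randomizing the choice of which atom to focus on doesn't rescue the focused guess: averaged over the random focus choice, its expected log-score is the same as for any fixed focus, and still below the uniform prior's.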
My best guess is a uniform distribution over all the atoms. No randomness involved. If you do select one atom at random to focus your guess on, you’ll do worse than my maxentropy prior.
How can you improve guessing which uranium atom will blow up next?
Give me a deterministic algorithm that performs worse than random on that problem, and I will show you how.