Hmmm. It seems that I should add “as long as you are able to reassign all priors of 1 to priors of 0.999999999, and all priors of 0 to priors of 0.000000001” to my list of exceptions. (It won’t fix the agent immediately, but it will put the agent in a position to fix itself, given sufficient observations and updates.)
That’s not the only problem. An agent that assigns equal probability to all possible experiences will never update.
Oh, that’s sneaky.
Perhaps a perfect agent should occasionally—very occasionally—perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?
Nice try, but random perturbations won’t help here.
I think that this re-emphasises the importance of good priors.
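The two dead ends in the exchange above can be made concrete with a small sketch (hypothetical code, not from the dialogue): a single-hypothesis Bayes update where priors of exactly 0 or 1 never move, and where equal likelihoods for every possible experience carry no information.

```python
# Sketch of Bayes' rule for one hypothesis H:
#   posterior = P(H) * P(e|H) / [ P(H) * P(e|H) + P(~H) * P(e|~H) ]

def update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of H after observing evidence e."""
    numerator = prior * likelihood_h
    denominator = numerator + (1 - prior) * likelihood_not_h
    return numerator / denominator

# Dead end 1: priors of exactly 0 or 1 never move, whatever the evidence.
print(update(1.0, 0.1, 0.9))  # 1.0 -- certainty is unshakeable
print(update(0.0, 0.9, 0.1))  # 0.0

# Dead end 2: equal likelihoods for all experiences never update,
# so perturbing the priors cannot help -- the evidence is inert.
print(update(0.3, 0.5, 0.5))  # 0.3, unchanged

# The proposed reassignment (1 -> 0.999999999) restores updating:
p = 0.999999999
for _ in range(20):
    p = update(p, 0.1, 0.9)  # repeated evidence against H
print(p)  # falls far below 1 given enough observations
```

This is why the reassignment trick fixes the first problem but not the second: it reopens the prior to movement, yet if every observation is equally likely under every hypothesis, no amount of prior-jiggling makes the evidence informative.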