Impractical, as it happens. I eventually solved the problem by going home, changing into painting clothes, cleaning brushes, arranging tools and stirring paint. At that point it started raining heavily. So I undid all that in the rain, changed back into dry clothes, went back to the coffee shop and am now reading Less Wrong again. I think I just failed rationality for ever.
I don’t think it’s possible to fail rationality “for ever”, as long as you are in a state where you can make observations, record memories, formulate goals, plan and take actions. Though you do seem to have been a bit unfortunate in the timing of the precipitation.
You may already know this, but the phrase “fail x forever” is a thing.
Merely humanly impossible. If you are a purer agent, just assign probability "1" to enough things and you'll be set.
Hmmm. It seems that I should add "as long as you are able to reassign all priors of 1 to priors of 0.999999999, and all priors of 0 to priors of 0.000000001" to my list of exceptions. (It won't fix the agent immediately, but it will place the agent in a position to fix itself, given sufficient observations and updates.)
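For anyone who wants to check the arithmetic: under Bayes' rule, a prior of exactly 1 (or 0) is a fixed point, while 0.999999999 is not. A minimal sketch, with made-up likelihoods purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    p_evidence = (prior * p_evidence_given_h
                  + (1 - prior) * p_evidence_given_not_h)
    return prior * p_evidence_given_h / p_evidence

# A prior of exactly 1 never moves, no matter how damning the evidence.
print(bayes_update(1.0, 0.01, 0.99))  # → 1.0

# A prior of 0.999999999 can still be talked down by repeated contrary evidence.
p = 0.999999999
for _ in range(10):
    p = bayes_update(p, 0.01, 0.99)
print(p)  # far below 1 after ten updates against the hypothesis
```

Each update multiplies the odds by the likelihood ratio (here 1/99), so even odds of a billion to one collapse after a handful of observations; the prior of exactly 1 corresponds to infinite odds, which no finite likelihood ratio can touch.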
That’s not the only problem. An agent that assigns equal probability to all possible experiences will never update.
Oh, that’s sneaky.
Perhaps a perfect agent should occasionally—very occasionally—perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?
Nice try, but random perturbations won’t help here.
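One way to see why perturbation doesn't rescue this case: if the likelihood function assigns every possible observation the same probability under every hypothesis, Bayes' rule returns the prior unchanged, so a perturbed prior is just a slightly different number that is equally stuck. A sketch, assuming a flat likelihood of 0.3 for illustration:

```python
def bayes_update(prior, like_h, like_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    evidence = prior * like_h + (1 - prior) * like_not_h
    return prior * like_h / evidence

# Flat likelihoods: every observation is equally probable under H and not-H.
prior = 0.5
posterior = bayes_update(prior, 0.3, 0.3)
print(posterior)  # → 0.5, identical to the prior

# Perturbing the prior changes nothing structural: the update is still inert.
perturbed = prior + 1e-10
print(bayes_update(perturbed, 0.3, 0.3))  # ≈ the perturbed prior, no movement
```

With equal likelihoods the evidence term factors out of Bayes' rule entirely, so no sequence of observations, and no perturbation of the prior, produces any update.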
I think that this re-emphasises the importance of good priors.