We were responding to your specific objection. Does my post accurately represent it? Does it dissolve it?
Yup, the first sentence of your first post describes my post, but I disagree with the second sentence. I don’t think it’s dissolved yet.
As for the rest—I expect we’d recognize a Bayesian Turing machine from its behavior, or by observing that it acts in accordance with the correct probabilities. Not sure on this part.
I think you’re saying that the Turing machine that reasons by Bayesian inference (i.e. repeated application of the product rule of probability theory) would turn out to be the best one. This seems like a sensible conjecture, but I wonder whether we can prove it. I don’t think this fact emerges trivially from Cox’s theorem or any other work that I’m aware of.
The other approach is to define the Bayesian Turing machine as the best one. That’s also quite reasonable, but now we have a completely new definition of “Bayesian”, and the question of whether we can prove that it has a connection back to the old definition that involves repeated application of the product rule.
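To make "repeated application of the product rule" concrete, here is a minimal sketch of that update loop. The coin-bias hypothesis space and the data are my own illustrative assumptions, not anything from the thread:

```python
# Minimal sketch of Bayesian updating as repeated application of the
# product rule: posterior ∝ prior × likelihood, renormalized each step.
# Hypotheses and data below are illustrative assumptions.

def update(prior, likelihood, datum):
    """One product-rule step: P(h|d) ∝ P(h) * P(d|h)."""
    posterior = {h: p * likelihood(datum, h) for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Hypotheses: the coin's bias (probability of heads), uniform prior.
prior = {0.25: 1 / 3, 0.5: 1 / 3, 0.75: 1 / 3}

def coin_likelihood(flip, bias):
    return bias if flip == "H" else 1 - bias

belief = prior
for flip in ["H", "H", "T", "H"]:
    belief = update(belief, coin_likelihood, flip)

print(belief)  # mass shifts toward the heads-biased hypothesis
```

A Turing machine "reasoning by Bayesian inference" would, on this picture, just be iterating that one step over its evidence stream.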
Also: actual Turing Machines and actual Utility Functions are terrible at their jobs.
> I think you’re saying that the Turing machine that reasons by Bayesian inference (i.e. repeated application of the product rule of probability theory) would turn out to be the best one.
What I meant to say was that if, per your hypothetical, we designed the best utility optimizer, we could see if it was “Bayesian” by checking whether it acts in accordance with the Bayesian probabilities (a la the Turing test). I guess?
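One way to make that behavioral check concrete is a sketch like the following. `agent_probability` is a hypothetical stand-in for whatever probabilities the optimizer's behavior implies (here it is deliberately implemented as an exact Bayesian, so the check is circular and only illustrates the shape of the test):

```python
# Sketch of a behavioral "Bayesian test": feed the system evidence and
# compare the probabilities implied by its behavior against the exact
# posterior. Two hypotheses: fair coin (0.5 heads) vs biased (0.75 heads).

def exact_posterior(prior_biased, flips):
    p_b, p_f = prior_biased, 1 - prior_biased
    for f in flips:
        p_b *= 0.75 if f == "H" else 0.25  # likelihood under biased coin
        p_f *= 0.5                          # likelihood under fair coin
    return p_b / (p_b + p_f)

def agent_probability(flips):
    # Hypothetical stand-in for the system under test.
    return exact_posterior(0.5, flips)

data = ["H", "H", "H", "T"]
diff = abs(agent_probability(data) - exact_posterior(0.5, data))
print("acts Bayesian:", diff < 1e-9)
```

The real test would of course run the candidate system itself, not a stand-in, across many evidence streams.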
The only real point of my post was that your stated objection, meant to demonstrate that Turing machines clearly couldn’t perform Bayesian updates, wasn’t really meaningful: the same objection could be leveled at any system more granular than the universe itself. On the other hand, I do expect that Turing-equivalent systems will have trouble with calculating priors.
I suppose you could have one Turing machine for each Planck duration in your lifetime (really this isn’t so much to ask, as each machine is infinite in size anyway), each returning its calculation one after the other?
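That serial hand-off can be sketched purely illustratively; each "machine" here is just a function that receives the previous machine's output as its starting state and does one step of work (the toy head-counting rule is my assumption):

```python
from functools import reduce

# Sketch of one "machine" per time step, each receiving the previous
# machine's result and performing a single update on it.

def make_machine(observation):
    def machine(state):
        heads, total = state  # toy state: running count of heads
        return (heads + (observation == "H"), total + 1)
    return machine

machines = [make_machine(o) for o in ["H", "T", "H", "H"]]
final = reduce(lambda state, m: m(state), machines, (0, 0))
print(final)  # (3, 4): three heads in four observations
```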
> I think you’re saying that the Turing machine that reasons by Bayesian inference (i.e. repeated application of the product rule of probability theory) would turn out to be the best one. This seems like a sensible conjecture, but I wonder whether we can prove it. I don’t think this fact emerges trivially from Cox’s theorem or any other work that I’m aware of.
> The other approach is to define the Bayesian Turing machine as the best one. That’s also quite reasonable, but now we have a completely new definition of “Bayesian”, and the question of whether we can prove that it has a connection back to the old definition that involves repeated application of the product rule.
Agreed.