Objective Bayesians say that “if two different people have the same information, B, then they will assign the same plausibility (A|B)”, right? If they didn’t say this, wouldn’t they just be subjective Bayesians?
So how is this possible without the plausibility (A|B) being uniquely determined by A and B?
If two different Bayesians have the same priors and the same evidence, they will agree. If they have mutual knowledge of their rationality and common priors, their posteriors will converge. Neither of these is the same as “having the same information B” when the item in question is A|B (setting B to 1, so any prior for B is irrelevant).
And the same concept of, and weighting of, evidence.
Yes, “same evidence” in this context implies that it is usable in the same Bayesian updates in the same way.
Please see this section about Professor Jaynes’ view of priors from Wikipedia: https://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors
Essentially, he says that it is impossible for two people with the same information to have different priors, and that they should instead use the same “objective prior”. The same idea applies to evidence as well.
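As a concrete check on the uncontroversial part of this (same prior + same likelihood model + same evidence ⇒ same posterior), here is a minimal Python sketch. The coin setup, hypothesis names, and numbers are invented purely for illustration:

```python
from fractions import Fraction

def posterior(prior, likelihood, datum):
    """One Bayes update: prior is {hypothesis: P(h)}, likelihood(h, d) = P(d | h)."""
    unnorm = {h: p * likelihood(h, datum) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical setup: a coin whose heads-bias is either 1/3 or 2/3.
prior = {"low-bias": Fraction(1, 2), "high-bias": Fraction(1, 2)}
bias = {"low-bias": Fraction(1, 3), "high-bias": Fraction(2, 3)}
lik = lambda h, d: bias[h] if d == "heads" else 1 - bias[h]

# Alice and Bob share the prior and the likelihood model, and see the same datum.
alice = posterior(prior, lik, "heads")
bob = posterior(prior, lik, "heads")
assert alice == bob                          # mechanical Bayes forces agreement
assert alice["high-bias"] == Fraction(2, 3)  # (1/2·2/3) / (1/2·2/3 + 1/2·1/3)
```

The disagreement between the two schools is not about this arithmetic; it is about whether the shared prior itself can be fixed objectively.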
Hmmm… I see what you mean but I am not sure if that is the understanding of the Jaynes-Cox school of thought. Please see the picture below ⬇️ - it is from pages 44-45 of Professor Jaynes’ book. Have I misunderstood what Professor Jaynes is saying?
It’s easy to get tripped up here, because authors are describing theoretical perfect agents, but saying “people” to sound somewhat accessible. My old intro to physics book started with an assertion that “in this text, we will assume that all elephants are perfectly spherical, frictionless, and uniformly dense”. This was good for calculating orbital mechanics or collisions, but very bad for understanding anything about pachyderms.
1) People NEVER have the same information. They have different experiences, and can only imperfectly communicate those experiences to each other. They don’t actually do Bayesian updates; there’s a bunch of heuristics and summaries going on in our cognition.
2) Hypotheses about universal common priors are pretty shaky. Selection bias in the universe of considered options is just one way that what you probably think of as “prior” is actually a posterior belief from very early learning.
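Point 2 can be made concrete: in sequential updating, yesterday’s posterior plays the role of today’s prior, so a distribution that looks like a bare “prior” may already encode early evidence. A minimal Python sketch (the urn setup is invented for illustration):

```python
from fractions import Fraction

def update(prior, likelihood, datum):
    """One Bayes step: prior is {hypothesis: P(h)}, likelihood(h, d) = P(d | h)."""
    unnorm = {h: p * likelihood(h, datum) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypotheses about an urn's fraction of white balls.
frac = {"mostly-white": Fraction(3, 4), "mostly-black": Fraction(1, 4)}
lik = lambda h, d: frac[h] if d == "white" else 1 - frac[h]

flat = {h: Fraction(1, 2) for h in frac}     # a genuinely uninformative prior
after_one = update(flat, lik, "white")       # "early learning"
after_two = update(after_one, lik, "white")  # later update, using after_one as prior

# Batch update on the joint likelihood of two white draws, from the flat prior.
batch = update(flat, lambda h, d: frac[h] ** 2, "two whites")
assert after_two == batch
assert after_two["mostly-white"] == Fraction(9, 10)
```

An observer who only sees `after_one` being used as a prior cannot distinguish it from an agent’s “innate” starting point, which is the sense in which apparent priors can be posteriors in disguise.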
Ahhh… that makes a lot of sense, thank you! A couple of things that I still find a bit confusing:
‘It’s easy to get tripped up here, because authors are describing uniformly-dense spherical objects, but calling them “elephants” to make it sound more accessible.’ - So what is the difference between objective Bayesianism and subjective Bayesianism? And do you have any references to show that what you describe is the view of the objective Bayesian school of thought? Although your explanation makes a lot of sense, it does seem to contradict the obvious meaning of the text that I quoted above, which is the bible of objective Bayesianism, so I would appreciate some references showing that the author is actually ‘describing uniformly-dense spherical objects, but calling them “elephants” to make it sound more accessible.’
Professor Jaynes says “It is ‘objectivity’ in this sense that is needed for a scientifically respectable theory of inference.” - How can scientists make claims like “everyone should prefer hypothesis 1 over hypothesis 2 because of the evidence” when they can only talk about the plausibility of the hypotheses given the information that they have, which is obviously different from the information that everyone else has? Does every individual have to verify the claims of scientists independently, given their own information?
‘Hypotheses about universal common priors are pretty shaky.’ - Are you saying that “a priori” probability distributions don’t exist? This seems to contradict the objective Bayesian viewpoint (please see the quotation below ⬇️ from the Wikipedia page on Uninformative priors)
Some attempts have been made at finding a priori probabilities, i.e. probability distributions in some sense logically required by the nature of one’s state of uncertainty; these are a subject of philosophical controversy, with Bayesians being roughly divided into two schools: “objective Bayesians”, who believe such priors exist in many useful situations, and “subjective Bayesians” who believe that in practice priors usually represent subjective judgements of opinion that cannot be rigorously justified (Williamson 2010). Perhaps the strongest arguments for objective Bayesianism were given by Edwin T. Jaynes, based mainly on the consequences of symmetries and on the principle of maximum entropy.
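The maximum-entropy recipe mentioned in that quote can be illustrated in a few lines: with no constraints beyond normalization, the uniform distribution maximizes Shannon entropy, so it is the maxent candidate for an “objective” prior over a finite set. A minimal numerical check (the outcome count and grid are arbitrary choices for illustration):

```python
import math

def entropy(p):
    """Shannon entropy in nats; skips zero-probability outcomes."""
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [1 / 3] * 3  # entropy log(3), the maximum over 3 outcomes

# A coarse grid of alternative distributions over the same 3 outcomes.
grid = [
    (a / 10, b / 10, (10 - a - b) / 10)
    for a in range(1, 9)
    for b in range(1, 10 - a)
]
assert all(entropy(uniform) >= entropy(q) for q in grid)
assert math.isclose(entropy(uniform), math.log(3))
```

Adding constraints (e.g. a known mean) changes which distribution wins, which is how the maxent machinery is meant to turn “state of knowledge” into a specific prior.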
I should probably have stated earlier that I’m more interested in practical, human-level questions (and in medium-term artificial agents, with far more calculating power than humans but still each a tiny subset of the actual universe) than in academic or theoretical distinctions.
I am not well-positioned to explain or defend the idea of “objective” probability. There may be such a thing in toy situations, but I haven’t seen any path from micro to macro that makes me believe it’s feasible for anything real.
I see… Thanks a lot for your help anyway. Much appreciated. I’m actually quite new to this forum, so I would really appreciate it if someone could point me to the seasoned objective Bayesians here.
Thanks!