Aumann agreements are pure fiction; they have no real-world applications. The main problem isn’t that no one is a pure Bayesian. There are three bigger problems:
The Bayesians have to divide the world up into symbols in exactly the same way. Since humans (and any intelligent entity that isn’t a lookup table) compress information based on their own experience, this can’t happen until the day when we derive more of our minds’ sensory experience from others than from ourselves.
Bayesian inference is slow; pure Bayesians would likely be outcompeted by groups that used faster, less precise reasoning methods, which are not guaranteed to reach agreement. It is unlikely that this limitation can ever be overcome.
In the name of efficiency, different reasoners would be highly orthogonal, having different knowledge, different knowledge-compression schemes and concepts, and so on, reducing the chances of reaching agreement. (In other words: if two reasoners always agree, you can eliminate one of them.)
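For what it’s worth, the agreement process itself is easy to simulate in a toy setting. Below is a sketch of the alternating-announcement protocol from Geanakoplos and Polemarchakis (1982): two agents with a common uniform prior but different information partitions announce their posteriors back and forth, each announcement ruling out states that would have produced a different announcement. The particular state space, partitions, and event are made up purely for illustration.

```python
from fractions import Fraction

def posterior(info, event):
    """P(event | info) under a uniform prior over states."""
    return Fraction(len(info & event), len(info))

def cell(partition, state):
    """The block of `partition` containing `state`."""
    return next(c for c in partition if state in c)

def dialogue(omega, part_a, part_b, event, states):
    """Alternating posterior announcements until common knowledge is reached.

    Each agent announces their posterior for `event`; everyone then rules out
    states that would have produced a different announcement.  Repeats until a
    full round changes nothing, at which point the posteriors must be equal.
    """
    public = set(states)  # states consistent with every announcement so far
    while True:
        changed = False
        announcements = []
        for part in (part_a, part_b):
            a = posterior(cell(part, omega) & public, event)
            refined = {s for s in public
                       if posterior(cell(part, s) & public, event) == a}
            changed |= refined != public
            public = refined
            announcements.append(a)
        if not changed:
            return tuple(announcements)

# Toy example: nine equally likely states; the true state is 5.
states = set(range(1, 10))
part_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # A learns which third obtains
part_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # B has a different carving
event = {2, 5, 8}
pa, pb = dialogue(5, part_a, part_b, event, states)
# A starts at 1/3 and B at 1/2; the exchange drives both to 1/2.
```

Note that this only works because both agents carve up the same state space and event, which is precisely the first objection above: real reasoners don’t share a symbol vocabulary, so the protocol has nothing to run on.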
“Pure fiction” and “no real-world application” seem overly strong, unless you are talking about individuals actually reaching complete agreement, in which case the point is surely true but relatively trivial.
The interesting question (real world application) is surely how much more we should align our beliefs at the margin.
Also, whether there are any decent-quality signals we can use to increase others’ perceptions that we are Bayesian, which would then enable us to use each others’ information more effectively.