It wasn’t me, but I think that your response is much better than Robin’s, because instead of an unsupported flat out contradiction, you described what the theorem is actually about.
As near as I can tell, what the theorem says is this: if two people have common knowledge (meaning not only that each knows, but that each knows the other knows, that the other knows they know, and so on ad infinitum) that they are Bayesian rationalists with the same priors, and they give each other probability estimates for an event and neither then changes their estimate, then their estimates must have been equal. It doesn’t actually say how they should come to an agreement if their initial estimates differ, or even that they will.
Indeed, Aumann’s original proof was not constructive. However, it has since been proved that the protocol “state your current posterior, update on the other agent’s statement, repeat” will converge to agreement.
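To make that protocol concrete, here is a toy sketch (my own illustration, with hypothetical function names) in the style of the Geanakoplos–Polemarchakis construction: two agents share a common prior over a finite set of states, each has a private information partition, and each round both announce their posterior for the event; hearing an announcement lets the other agent rule out every cell of the announcer’s partition that would have produced a different posterior, so the partitions refine until the posteriors coincide.

```python
from fractions import Fraction

def cell_of(partition, state):
    """Return the cell of the partition containing the given state."""
    for cell in partition:
        if state in cell:
            return cell

def posterior(cell, event, prior):
    """P(event | the agent's information cell), under the common prior."""
    return sum(prior[s] for s in cell & event) / sum(prior[s] for s in cell)

def announce_classes(partition, event, prior):
    """Group states by the posterior the announcer would report from each cell.
    Hearing a posterior q tells the listener the true state lies in classes[q]."""
    classes = {}
    for cell in partition:
        q = posterior(cell, event, prior)
        classes.setdefault(q, set()).update(cell)
    return classes

def refine(listener, classes):
    """Intersect each listener cell with each announcement class (drop empties)."""
    return [c & k for c in listener for k in classes.values() if c & k]

def agree(p1, p2, event, prior, true_state, max_rounds=100):
    """Run the announce-and-update protocol; return the final pair of posteriors."""
    for _ in range(max_rounds):
        q1 = posterior(cell_of(p1, true_state), event, prior)
        q2 = posterior(cell_of(p2, true_state), event, prior)
        if q1 == q2:
            return q1, q2
        c1 = announce_classes(p1, event, prior)
        c2 = announce_classes(p2, event, prior)
        p1, p2 = refine(p1, c2), refine(p2, c1)
    return q1, q2

# Worked example: uniform prior over four states, event E = {1, 4}.
# Agent 1 can distinguish {1,2} from {3,4}; agent 2 distinguishes {1,2,3} from {4}.
prior = {s: Fraction(1, 4) for s in (1, 2, 3, 4)}
q1, q2 = agree([{1, 2}, {3, 4}], [{1, 2, 3}, {4}], {1, 4}, prior, true_state=1)
print(q1, q2)  # both converge to 1/2
```

Note the first announcements here are 1/2 and 1/3: the agents start out disagreeing, and it takes a couple of rounds of mutual refinement before the posteriors meet, which is exactly the gap between Aumann’s existence result and a constructive protocol.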