No, I’m not talking about an agent with perfect knowledge. I’m talking about a perfect updater. A perfect Bayesian updater comes to the best possible decisions given the available information. Giving such a perfect updater new information never makes its decisions worse, because by definition it always makes the best possible decision given the information. This is a different question from whether its probability estimates move closer to or further from ‘the truth’ as judged from some external perspective where more information is available.
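To make the ‘never worse in expectation’ claim concrete, here’s a toy sketch (the two-state setup and numbers are my own illustration, not anything from upthread): a Bayesian who guesses the more probable of two states can only gain, in expectation, from seeing a noisy signal.

```python
# Toy "value of information" check: two equally likely states (A and B),
# and a signal that matches the true state with probability q.
# A perfect Bayesian guessing the more probable state can never do worse
# in expectation after seeing the signal.

def expected_payoff(prior_a, q):
    """Expected P(correct guess) for a Bayesian who sees one signal."""
    # P(signal says A), marginalising over the true state.
    p_sig_a = prior_a * q + (1 - prior_a) * (1 - q)
    # Posterior P(A | signal) for each signal value, by Bayes' rule.
    post_a_given_a = prior_a * q / p_sig_a
    post_a_given_b = prior_a * (1 - q) / (1 - p_sig_a)
    # Under each signal, guess whichever state is now more probable.
    return (p_sig_a * max(post_a_given_a, 1 - post_a_given_a)
            + (1 - p_sig_a) * max(post_a_given_b, 1 - post_a_given_b))

no_info = max(0.5, 0.5)                # 0.5: guessing blindly
with_info = expected_payoff(0.5, 0.8)  # 0.8: guessing after the signal
print(no_info, with_info)
```

Note that even a completely uninformative signal (q = 0.5) leaves the expected payoff at 0.5, never below it; that’s the general pattern the argument relies on.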
The concern with imperfect updaters like humans is that giving them more information can lead them further away from the theoretical best decision given the information available to them, not that it leads them further away from ‘the truth’. In other words, giving people more information can lead them to make worse decisions (less like the decisions of a perfect Bayesian updater), which may or may not mean their opinions become more aligned with the truth.
These are both concerns, and if we could replace humans with perfect Bayesian updaters, we’d notice the only remaining concern a lot more—namely, that giving the updater more (true) information can cause it to move away from the objective truth we are trying to reach (the truth that is only knowable with perfect information).
Who would decide which information to withhold in that case? The only way you could be qualified to judge what information to withhold would be if you yourself had perfect information, in which case there’d really be no need for the Jury and you could just pass judgement yourself. The only way for a perfect updater to get closer to the truth is for it to seek out more information.
That’s a strong claim. Is there a formal proof of this?
I don’t think a formal proof is needed. An agent with imperfect knowledge does not, by definition, know what ‘the truth’ is. It may be able to judge the impact of extra information on another agent, and whether that information will move the other agent closer to or further from the first agent’s own probability estimates, but it cannot know whether that moves the second agent’s probability estimates closer to ‘the truth’, because it does not know ‘the truth’.
Point taken. If we assume the Court-agent can effectively communicate all of its knowledge to the Jury-agent, then the Jury can make decisions at least as good as the Court’s. Or the Jury could communicate all of its knowledge to the Court and then we wouldn’t need a Jury. You’re right about this.
But as long as we’re forced to have separate Court and Jury who cannot communicate all their knowledge to one another—perhaps they can only communicate all the knowledge directly relevant to the trial at hand, or there are bandwidth constraints, or the Judge cannot itself appear as witness to provide new information to the Court—then my point stands.