Assertion for the purpose of establishing the nature of possible substantive disagreement:
Anything an irrational agent can do due to an epistemic flaw, a rational agent can do because it is the best thing for it to do.
(Agree?)
ETA: The reason I ask is that I think the below:
But I argue that from a group perspective, it’s sometimes better to have a spread of individual levels of confidence about the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.
… applies to humans and not pure rational agents. It seems to me that individuals can act as if they have a spread of confidence about the individually rational level without individually differing from the ideal subjectively-objective beliefs that epistemic rationality prescribes to them.
I am trying to establish whether you agree with my original assertion (I think you do), and then to work out whether we agree that it follows that having individuals act as if they differ from the individually rational level captures all the benefits of their actually having the wrong confidence levels. If not, then I am trying to understand why we disagree.
I get that you’re trying to establish some shared premise that we can work from, but I’m not totally sure what you mean by your assertion even with the additional explanation, so let me just try to make an argument for my position, and you can tell me whether any part doesn’t make sense to you.
Consider a group of 100 ideally rational agents who, for some reason, cannot establish a government capable of collecting taxes or enforcing contracts at low cost. They all think that some idea A has a probability of .99 of being true, but it would be socially optimal for one individual to continue scrutinizing it for flaws. Suppose that’s because if it is flawed, one individual would be able to detect the flaw eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 to each of them. Unfortunately, no individual agent has an incentive to do this on its own, because doing so would decrease its expected utility, and they can’t solve the public goods problem due to large transaction costs.
On the other hand, if there were one agent who irrationally thought that A has only a probability of .8 of being true, then it would be willing to take on this task.
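A quick worked check of these numbers, on one reading of the setup: assume the $1 is the unconditional expected cost borne by whichever agent scrutinizes A, and the $10-per-agent value is realized only if a flaw actually exists. A minimal sketch (the variable names are illustrative, not from the discussion):

```python
N_AGENTS = 100
P_FLAW = 0.01        # everyone assigns idea A a .99 chance of being true
COST = 1.0           # expected cost to whichever agent scrutinizes A
VALUE_EACH = 10.0    # value to each agent of learning that a flaw exists

# A calibrated agent acting alone: 0.01 * 10 - 1 = -$0.90, so it declines.
eu_individual = P_FLAW * VALUE_EACH - COST

# The group as a whole: 0.01 * 100 * 10 - 1 = +$9.00, so scrutiny is
# socially optimal even though it is individually a losing bet.
eu_group = P_FLAW * N_AGENTS * VALUE_EACH - COST

# The agent who irrationally thinks P(A) = .8 sees a .2 chance of a flaw:
# 0.2 * 10 - 1 = +$1.00, so it volunteers on its own.
eu_miscalibrated = 0.2 * VALUE_EACH - COST

print(eu_individual, eu_group, eu_miscalibrated)  # -0.9 9.0 1.0
```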
Wedrifid’s remarks above seem obvious to me. Furthermore, your reply seems to consist of: “for some reason a group cannot solve a coordination problem rationally, but if I suppose that the agents are allowed to take, for irrational reasons, the same action that a rational group would perform, then the irrational group wins”.
Alternatively, they each roll an appropriately sided die and get on with the task if the die comes up one.
D20, naturally:
20 - do the task
1 - do nothing
For all others, compare ‘expected task value’ to your ‘status saving throw’.
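As an aside, the die-rolling scheme is decentralized but leaky: with 100 agents each volunteering on a 1-in-100 roll, the expected number of volunteers is exactly one, yet nobody volunteers at all with probability 0.99^100 ≈ 0.366. A minimal simulation of that failure rate (assuming independent fair rolls):

```python
import random

N_AGENTS, TRIALS = 100, 100_000

# Count trials in which no agent's 100-sided die comes up 1.
undone = sum(
    all(random.randint(1, 100) != 1 for _ in range(N_AGENTS))
    for _ in range(TRIALS)
)

# Should land near 0.99**100 ≈ 0.366: about a third of tasks go undone.
print(f"fraction of tasks with no volunteer: {undone / TRIALS:.3f}")
```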
Better than random: They are each in a rotation that assigns such tasks to one agent as they come up.
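For comparison, a rotation is deterministic once agreed upon: every task gets exactly one worker and the burden evens out over time. A minimal sketch, assuming tasks are numbered in the order they arise and the agents are indexed 0–99:

```python
N_AGENTS = 100

def assignee(task_index: int) -> int:
    # Round-robin: task k falls to agent k mod 100, so each task has
    # exactly one responsible agent and the load is shared evenly.
    return task_index % N_AGENTS

assert assignee(0) == 0 and assignee(101) == 1
```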
That would require coordination.
The random assignment also requires coordination. The only reason an agent in the group would accept the chance of having to do the work is that the other agents accept the same chance for themselves.
But why are we worrying so much about this? We actually can coordinate.
Ok, so one of the agents being epistemically flawed may solve a group coordination problem. I like the counterfactual; could you flesh it out slightly to specify what payoff each individual gets for exploring ideas and contributing them to the collective?