It occurs to me that a karma system (such as that used on this website) has the potential to be an adequate check against the unilateralist’s curse.
An assumption here is that people downvote infohazards because they are infohazards. In practice, however, many communities have no problem sharing damaging and dangerous information; just look at Reddit.
The quoted sentence claims that karma systems are a check against the unilateralist’s curse specifically, not against infohazards in general, as the final sentence of that paragraph makes explicit (“Conversely, while a net-upvoted post might still be infohazardous [...]”).
I’ve been envisioning “unilateralist’s curse” as referring to situations where the average error in individual agents’ estimates of the value of the initiative (what I called E in the post; Bostrom et al. call the error d and say it is drawn from a distribution with CDF F(d)) is zero, and the harm comes from the fact that the variance in the error terms makes someone unilaterally act (or veto) when they shouldn’t, in a way that could be corrected by “listening to their peers.” If the community as a whole is systematically biased about the value of the initiative, that seems like a different, and harder, problem.
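To make the zero-mean-error picture concrete, here is a minimal simulation sketch (my own toy numbers; the value V, the spread sigma, and the normal error model are assumptions, not from the post or from Bostrom et al.): even with unbiased estimates, the chance that at least one of N agents wrongly judges a negative-value initiative worth doing grows quickly with N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model, purely illustrative: the initiative's true value V is
# negative, each of N agents estimates it with i.i.d. zero-mean error
# d ~ Normal(0, sigma) (so no systematic bias), and the initiative goes
# ahead if ANY single agent's estimate V + d comes out positive.
V = -1.0          # true (negative) value of the initiative
sigma = 1.0       # spread of the error terms d
trials = 100_000

for N in (1, 5, 25):
    d = rng.normal(0.0, sigma, size=(trials, N))   # zero-mean errors
    acted = ((V + d) > 0).any(axis=1)              # someone acts unilaterally
    print(f"N={N:>2}: initiative proceeds in {acted.mean():.1%} of trials")
```

With these numbers, a lone agent acts wrongly about 16% of the time, while with 25 agents the initiative almost always proceeds; that variance-driven escalation, rather than any shared bias, is the failure the curse describes.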
This seems basically right if the community of possible actors is the same as the community of voters assigning karma. If the community of voters is different from, or much larger than, the community of actors, you might still encounter the unilateralist’s curse as seen from the perspective of the community of actors, especially if the latter is better-informed than the former.
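One hedged way to see the voters-vs-actors distinction, reusing the toy model above: suppose (my assumption, not the comment’s) that each actor defers to the karma signal, acting only if their own estimate and the aggregate vote are both positive. If the voters just are the actors, karma pools their independent errors and nearly eliminates the curse; if karma instead reflects an outside crowd whose aggregate error does not shrink with crowd size (because its errors are correlated), much of the curse survives among the actors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the voters-vs-actors point, reusing the toy model above.
# Assumption (mine): an actor proceeds only if their own estimate is
# positive AND the karma signal is positive. The outside crowd's judgment
# is modeled as V plus a single shared error that more voters cannot
# average away.
V = -1.0
sigma = 1.0          # actors' (independent) estimation noise
sigma_crowd = 2.0    # spread of the outside crowd's shared error
trials, N = 100_000, 25

actor_est = V + rng.normal(0.0, sigma, size=(trials, N))
optimist = (actor_est > 0).any(axis=1)          # someone wants to act

peer_karma = actor_est.mean(axis=1) > 0         # voters == actors
crowd_karma = (V + rng.normal(0.0, sigma_crowd, size=trials)) > 0

print("no check:            ", optimist.mean())                  # ~0.99
print("karma from actors:   ", (optimist & peer_karma).mean())   # ~0.00
print("karma from outsiders:", (optimist & crowd_karma).mean())  # ~0.30
```

The modeling choice that matters here is the correlated crowd error: with fully independent voter errors, a large crowd’s mean would track the true value and the check would still bind, so the failure mode in this sketch comes specifically from the outside crowd being less informed in a way that adding more voters cannot fix.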