And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average), is in some way better? More accurate, presumably?
If you have no other information, averaging does reduce variance while keeping bias the same. By the bias-variance decomposition, this reduces expected squared error.
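A minimal simulation of this point, with made-up numbers (the true value, the shared bias, and the noise level are all illustrative assumptions, not from the discussion above): every individual estimate carries the same bias plus independent noise, so averaging many estimates leaves the bias alone but shrinks the variance term, and expected squared error drops.

```python
import random

random.seed(0)
truth = 0.35       # the true probability being estimated (illustrative)
bias = 0.05        # shared systematic bias in everyone's estimate (illustrative)
noise_sd = 0.2     # independent idiosyncratic noise per person (illustrative)

def estimate():
    # One person's estimate: truth, plus shared bias, plus independent noise.
    return truth + bias + random.gauss(0, noise_sd)

def averaged(n=10):
    # Average of n independent estimates: same bias, variance divided by n.
    return sum(estimate() for _ in range(n)) / n

def mse(estimator, trials=100_000):
    # Monte Carlo estimate of expected squared error.
    return sum((estimator() - truth) ** 2 for _ in range(trials)) / trials

solo_mse = mse(estimate)                 # ~ bias^2 + noise_sd^2      = 0.0425
crowd_mse = mse(lambda: averaged(10))    # ~ bias^2 + noise_sd^2 / 10 = 0.0065
assert crowd_mse < solo_mse
```

The bias term is untouched by averaging; only the variance term shrinks, which is exactly why this helps under squared error but says nothing about removing systematic error.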
Eliezer explicitly argues that this is not a good argument for averaging your opinions with a crowd. However, I don't find his argument there very persuasive. He argues that squared error is not necessarily the right notion of error, and offers an alternative error function as an example under which the conclusion fails.
However, his counterexample relies on a nonconvex error function. It seems to me that in practice the error function will usually be convex, as argued in A Pragmatist's Guide to Epistemic Utility by Ben Levinstein.
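Why convexity is the crux can be shown in a few lines. By Jensen's inequality, a convex loss evaluated at the averaged forecast is never worse than the average of the individual losses, while a nonconvex loss can reverse the inequality. The specific losses below are my own illustrative choices, not the example from Eliezer's argument:

```python
def sq_loss(p, outcome):
    # Squared error: convex in the forecast p.
    return (p - outcome) ** 2

def nonconvex_loss(p, outcome):
    # A made-up nonconvex (concave-in-|p - outcome|) loss, for illustration only.
    return abs(p - outcome) ** 0.5

p1, p2, outcome = 0.1, 0.9, 1.0
avg = (p1 + p2) / 2

# Convex loss: the averaged forecast scores at least as well as the
# average of the two individual scores (Jensen's inequality).
assert sq_loss(avg, outcome) <= (sq_loss(p1, outcome) + sq_loss(p2, outcome)) / 2

# Nonconvex loss: the inequality can flip, so averaging can hurt.
assert nonconvex_loss(avg, outcome) > (
    nonconvex_loss(p1, outcome) + nonconvex_loss(p2, outcome)
) / 2
```

So whether averaging with the crowd is guaranteed to help (in expectation, absent other information) turns on whether the loss is convex, which is exactly where the disagreement with Eliezer's counterexample lies.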
I think what this means is that, given only those two options, averaging your beliefs with other people's is better than doing nothing at all. However, both are worse than a proper Bayesian update.