This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.
In my (Jacob’s) work at Metaculus AI, I’m trying to build a centralised space for both forecasts and the reasoning underlying them. Such a space might serve as a simple way for the AI community to avoid runaway info-cascades.
However, we are also concerned about situations where new forecasters overweight the current crowd opinion, relative to the underlying evidence, in their own forecasts. We see this as a major risk to the trustworthiness of forecasts for those working in AI safety and policy.
With this question, I am interested in previous attempts to tackle this problem, and how successful they have been. In particular:
What existing infrastructure has historically been effective for avoiding info-cascades in communities? (Examples could include short-selling to prevent bubbles in asset markets, or norms of sharing the causes of one’s beliefs rather than just the beliefs themselves.)
What problems are not adequately addressed by such infrastructure?