[Question] How can we respond to info-cascades? [Info-cascade series]

This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.

___

In my (Jacob’s) work at Metaculus AI, I’m trying to build a centralised space for finding both forecasts and the reasoning underlying those forecasts. Having such a space might serve as a simple way for the AI community to avoid runaway info-cascades.

However, we are also concerned about situations where new forecasters overweight the current crowd opinion, relative to the underlying evidence, when making their own forecasts. We see this as a major risk to the trustworthiness of forecasts for those working in AI safety and policy.
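To make the worry concrete, here is a minimal toy model of the dynamic (my own sketch, not anything Metaculus actually runs; the `crowd_weight` parameter and all numbers are illustrative assumptions). Each new forecaster publishes a mix of the current crowd mean and their private evidence; the higher the weight on the crowd, the more the final aggregate is anchored to whatever the first few forecasters happened to see.

```python
# Toy sketch of crowd-anchoring under assumed parameters (nothing here is
# Metaculus code). Each forecaster gets a private signal about a binary event
# that is in fact true, converts it into a private posterior, and publishes a
# convex mix of the current crowd mean and that posterior. When crowd_weight
# is high, the aggregate stays anchored to the earliest forecasts and later
# private evidence barely moves it.
import random

def final_crowd_mean(crowd_weight, n_forecasters=200, accuracy=0.7, seed=3):
    rng = random.Random(seed)
    forecasts = []
    for _ in range(n_forecasters):
        # Private signal points to the (true) event with probability
        # `accuracy`; from a flat prior, the private posterior is then
        # `accuracy` or `1 - accuracy`.
        signal_says_yes = rng.random() < accuracy
        private = accuracy if signal_says_yes else 1 - accuracy
        crowd = sum(forecasts) / len(forecasts) if forecasts else private
        forecasts.append(crowd_weight * crowd + (1 - crowd_weight) * private)
    return sum(forecasts) / len(forecasts)

# Same evidence stream, different deference to the crowd: the more weight
# the crowd mean gets, the more the outcome depends on the first few signals.
for w in (0.0, 0.5, 0.95):
    print(f"crowd_weight={w:.2f} -> final crowd mean {final_crowd_mean(w):.3f}")
```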

With this question, I am interested in previous attempts to tackle this problem, and how successful they have been. In particular:

  • What existing infrastructure has historically been effective for avoiding info-cascades in communities? (Examples could include short-selling to prevent bubbles in asset markets, or norms to share the causes rather than the outputs of one’s beliefs; a toy model of the latter follows this list.)

  • What problems are not adequately addressed by such infrastructure?
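On the second example in the first bullet: the contrast between sharing causes and sharing outputs is exactly what drives the classic sequential-cascade model from the literature on this topic. Below is a hedged Python sketch of that model (parameter values are my own illustrative choices): agents observe only their predecessors’ choices, and once one choice leads by two, every later agent rationally ignores their own signal.

```python
# Sketch of the classic sequential-cascade setup (parameters illustrative).
# Each agent privately sees a binary signal that matches the truth with
# probability `accuracy`, observes only the *actions* of earlier agents, and
# takes the action their posterior favours (breaking ties with their own
# signal). Once one action leads by two among the informative choices, no
# single signal can overturn the lead, so all later agents herd. Sharing the
# signals themselves (the causes of beliefs) would let the group aggregate
# all the evidence instead.
import random

def run_cascade(n_agents=30, accuracy=0.7, seed=1):
    truth = "A"
    rng = random.Random(seed)
    net = 0        # inferred (# A-signals - # B-signals) from informative actions
    actions = []
    for _ in range(n_agents):
        signal = truth if rng.random() < accuracy else "B"
        if net >= 2:
            action = "A"   # cascade: the observed lead outweighs any one signal
        elif net <= -2:
            action = "B"
        else:
            action = signal                    # no cascade yet: follow own signal
            net += 1 if action == "A" else -1  # this action reveals the signal
        actions.append(action)
    return actions

for seed in range(4):
    print(f"seed {seed}: {''.join(run_cascade(seed=seed))}")
```

A property of this model worth noting: when the first two signals happen to mislead, the whole group locks onto the wrong answer permanently, even though a majority of the (unshared) private signals point the right way.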