Noisy Reasoners

One of the more interesting papers at this year's AGI-12 conference was Fintan Costello's Noisy Reasoners. I think it will be of interest to Less Wrong:

This paper examines reasoning under uncertainty in the case where the AI reasoning mechanism is itself subject to random error or noise in its own processes. The main result is a demonstration that systematic, directed biases naturally arise if there is random noise in a reasoning process that follows the normative rules of probability theory. A number of reliable errors in human reasoning under uncertainty can be explained as the consequence of these systematic biases due to noise. Since AI systems are subject to noise, we should expect to see the same biases and errors in AI reasoning systems based on probability theory.
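The core idea — that purely random noise yields a systematic, directed bias — can be illustrated with a toy simulation (my own sketch, not the paper's exact model). Suppose a reasoner estimates a probability by counting remembered instances, but each remembered bit is read out with some small flip probability. The random flips are symmetric, yet the resulting estimates are biased toward 0.5: rare events get overestimated and common events underestimated.

```python
import random

def noisy_estimate(p_true, noise=0.1, n_samples=500, n_trials=500, seed=0):
    """Average probability estimate from noisy memory counts.

    Each of n_samples remembered instances is a 1 with probability
    p_true; at readout each bit is flipped with probability `noise`.
    Returns the mean estimate over n_trials independent estimates.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        count = 0
        for _ in range(n_samples):
            bit = 1 if rng.random() < p_true else 0
            if rng.random() < noise:  # symmetric random read error
                bit = 1 - bit
            count += bit
        total += count / n_samples
    return total / n_trials

# Symmetric noise produces a directed bias toward 0.5:
# E[estimate] = p*(1-noise) + (1-p)*noise
low = noisy_estimate(0.1)   # biased upward, toward ~0.18
high = noisy_estimate(0.9)  # biased downward, toward ~0.82
```

Note that the error on each individual bit is unbiased in direction; the directional bias only appears at the level of the aggregate probability estimate, which is the flavor of result the paper derives more generally.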