Tag. Last edit: 2 Oct 2020 0:26 UTC by Ruby

In the context of Bayes's Theorem, priors refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. Upon being presented with new evidence, the agent multiplies their prior by the likelihood of that evidence and normalizes, yielding a new (posterior) probability for their belief.
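The update described above can be sketched numerically. This is a minimal illustration with hypothetical numbers (a prior of 0.3 and made-up likelihoods), not a prescription for any particular problem:

```python
# Bayesian update for a single binary hypothesis H given evidence E.
# All numbers here are hypothetical, chosen only for illustration.
prior = 0.3             # P(H): belief before seeing the evidence
p_e_given_h = 0.8       # P(E | H): likelihood of the evidence if H is true
p_e_given_not_h = 0.2   # P(E | not H)

# Normalizing constant P(E), by the law of total probability.
evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes's Theorem: P(H | E) = P(E | H) * P(H) / P(E)
posterior = p_e_given_h * prior / evidence

print(round(posterior, 3))  # 0.632
```

Note that the posterior then serves as the prior for the next piece of evidence, which is the sense in which beliefs are chained through repeated updates.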


Suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it more likely that the next ball will be red, following Laplace's Rule of Succession. For example, seeing 6 red balls out of 10 suggests that the initial probability used for assigning the balls a red color was around .6; the Rule of Succession gives a probability of (6+1)/(10+2) = 7/12 ≈ .58 that the next ball is red.
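Laplace's Rule of Succession has a simple closed form: starting from a uniform prior over the unknown red-probability, after observing r red balls in n draws, the probability that the next ball is red is (r + 1)/(n + 2). A minimal sketch:

```python
def rule_of_succession(red_seen: int, total_seen: int) -> float:
    """Probability the next ball is red, given a uniform prior over the
    unknown red-probability and red_seen reds in total_seen draws."""
    return (red_seen + 1) / (total_seen + 2)

# Before any evidence, the prediction is 1/2.
print(rule_of_succession(0, 0))    # 0.5
# After 6 reds in 10 draws: (6+1)/(10+2) = 7/12, about 0.58.
print(rule_of_succession(6, 10))
```

Note how each observed red ball raises the prediction for the next draw; this is what makes the prior inductive.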

On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it less likely that the next ball will be red (because there are fewer red balls remaining).

Thus our prior affects how we interpret the evidence. The first prior is an inductive prior: things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive: the more red balls we see, the fewer we expect to see in the future.

As a real-life example, consider two leaders from different political parties. Each one has his own beliefs (priors) about social organization and the roles of people and government in society. These differences in priors can be attributed to a wide range of factors, ranging from their educational backgrounds to hereditary differences in personality. However, neither can show that his beliefs are better than those of the other, unless he can show that his priors were generated by sources which track reality better.[1]

Because carrying out any reasoning at all seems to require a prior of some kind, ideal Bayesians would need some sort of priors from the moment that they were born. The question of where an ideal Bayesian would get this prior from has occasionally been a matter of considerable controversy in the philosophy of probability.

Updating prior probabilities

In informal discussion, people often talk about "updating" their priors. This is technically incorrect, as one does not change their prior probability, but rather uses it to calculate a posterior probability. However, as this posterior probability then becomes the prior probability for the next inference, talking about "updating one's priors" is often a convenient shorthand.


  1. Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4): 319–328.

Blog posts

Chapter 49: Prior Information
Eliezer Yudkowsky, 14 Mar 2015 19:00 UTC · 12 points · 6 comments · 15 min read · LW link

Priors as Mathematical Objects
Eliezer Yudkowsky, 12 Apr 2007 3:24 UTC · 32 points · 19 comments · 4 min read · LW link

Priors and Surprise
MichaelVassar, 3 Mar 2010 8:27 UTC · 10 points · 32 comments · 2 min read · LW link

Priors Are Useless
DragonGod, 21 Jun 2017 11:42 UTC · 1 point · 22 comments · 1 min read · LW link

Cryonics priors
AnthonyC, 20 Jan 2013 22:08 UTC · 6 points · 22 comments · 1 min read · LW link

Revising priors and anthropic reasoning
PhilGoetz, 6 Feb 2011 5:42 UTC · 2 points · 27 comments · 1 min read · LW link

Simplicity priors with reflective oracles
Benya_Fallenstein, 15 Nov 2014 6:39 UTC · 1 point · 0 comments · 6 min read · LW link

Against improper priors
DanielLC, 26 Jul 2011 23:50 UTC · 3 points · 21 comments · 2 min read · LW link

A Priori
Eliezer Yudkowsky, 8 Oct 2007 21:02 UTC · 49 points · 134 comments · 4 min read · LW link

The prior of a hypothesis does not depend on its complexity
cousin_it, 26 Aug 2010 13:20 UTC · 26 points · 69 comments · 1 min read · LW link

Learning the prior
paulfchristiano, 5 Jul 2020 21:00 UTC · 79 points · 18 comments · 8 min read · LW link

Learning the prior and generalization
evhub, 29 Jul 2020 22:49 UTC · 33 points · 10 comments · 4 min read · LW link

The universal prior is malign
paulfchristiano, 30 Nov 2016 22:31 UTC · 13 points · 0 comments · 1 min read · LW link

Believing others' priors
rk, 22 Nov 2018 20:44 UTC · 9 points · 19 comments · 7 min read · LW link

Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming
VipulNaik, 25 Jun 2014 21:47 UTC · 24 points · 66 comments · 4 min read · LW link

The Solomonoff Prior is Malign
Mark Xu, 14 Oct 2020 1:33 UTC · 108 points · 25 comments · 16 min read · LW link