# History of the Development of Logical Induction

I have been asked several times about how the development of logical induction happened, so I am writing it up.

June 2013 - I write my first Less Wrong post. It may not seem very related to logical uncertainty, but it was in my head. I wanted to know how to take different probabilities for the same event and aggregate them, so I could have an agent that maintained probabilities on facts even when facts that it originally thought were separate turned out to be logically equivalent. (I am not sure if all this was in my head at the time, but it was at some point over the next year.)

I make a proposal for a distribution on completions of a theory: repeatedly observe that a set of sentences whose probabilities should sum to one fails to sum to one, and shift their probabilities in a way inspired by the above post. I do not actually prove that this process converges, but I conjecture that it converges to the distribution I describe here. (This post is from when I wrote it up in June 2014; I can’t remember exactly what parts I already had at the time.)

December 2013 - I tell my proposal to Abram Demski, who at some point says that he thinks it is either wrong or equivalent to his proposal for the same problem. (He was right, and his proposal was better.) At this point I got very lucky; when I told this to Abram, I thought he was just a smart person at my local Less Wrong meetup, and it turned out that he was almost the only other person trying to do the thing I was doing. Abram and I start talking about my proposal a bunch, and start trying to prove the above conjecture.

April 2014 - Abram and I start the first MIRIx to think about logical uncertainty, and especially this proposal. At the time, I had not contacted MIRI, even to apply for a workshop, because I was dumb.

At some point we realize that the proposal is bad. The thing that makes us give up on it is the fact that sometimes observing one sentence can drastically decrease your probability for another.

August 2014 - Abram and I go to MIRI to talk about logical uncertainty with Nate, Benya, Eliezer, and Paul. We share the thing we were thinking about, even though we had given up on it at the time. At some point in there, we talk about assigning a probability to a sufficiently late digit of π being 0.

Soon after that, I propose a new project for our MIRIxLA group to work on, which I call the Benford Test. I wanted an algorithm which on day n, after thinking for time given by some function of n, assigned a probability to the nth logical sentence in some enumeration of logical sentences. If I took a subsequence of logical sentences whose truth values appeared pseudorandom to anything that ran sufficiently quickly, I wanted the algorithm to converge to the correct probability on that subsequence. I.e., it should assign probability log₁₀(2) ≈ 0.301 to the first digit of Ackermann(n) being 1. The Benford thing was to make it concrete; I was thinking about it as pseudorandomness. There are a bunch of ugly things about the way I originally propose the problem. For example, I only assign a probability to one sentence on each day. We think about this quite a bit over the next 6 months and repeatedly fail to find anything that passes the Benford test.
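As a concrete check of the target probability (my own illustration, not part of the original project): powers of 2 are a classic sequence whose leading digits look pseudorandom, and the fraction of them beginning with 1 approaches log₁₀(2), just as Benford's law predicts.

```python
import math

# Benford's law: in many "pseudorandom" sequences, the leading digit is 1
# with frequency log10(2) ~ 0.301.  Powers of 2 are a classic example.

def leading_digit(n: int) -> int:
    """Return the leading decimal digit of a positive integer."""
    return int(str(n)[0])

count = sum(1 for k in range(1, 1001) if leading_digit(2 ** k) == 1)
frequency = count / 1000
print(frequency, math.log10(2))  # the two values agree to about 3 decimal places
```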

March 2015 - I eventually find an algorithm that passes the Benford Test, but it is really hacky and ugly. I know writing it up is going to be a chore, so I decide to ask Luke if I can go to MIRI for a summer job and work on turning it into a paper. I become a MIRI contractor instead.

May 2015 - I go to my first MIRI workshop. During the workshop, there is time reserved for writing blog posts for agentfoundations.org. Writing blog posts becomes a major part of my motivation system while doing research and writing the paper.

At some point in there, I notice that the algorithm we use to pass the Benford test does not obviously converge even in the probabilities that it assigns to a single sentence. I change the framework slightly and make it cleaner, so that the Benford test algorithm can be seen in the same type of framework as the Demski prior. Now I have two algorithms: one can do well on pseudorandom sentences, and the other converges to a distribution in the limit. I set my next goal to do both at the same time. One hope is that in order to do both of these tasks at once, we will have to use something less hacky than what we used to solve the Benford test.

July 2015 - I go to the first MIRI summer fellows program, and while there, I make the first noticeable step towards combining limit coherence and the Benford test: I define inductive coherence as a subgoal. This is a strengthening of limit coherence that I believed would bring it closer to having good behaviors before reaching the limit. I do not actually find an algorithm that is inductively coherent, but I have a goal that I think will help. A couple months later, Benya shows that a modification of the Demski prior is inductively coherent.

November 2015 - I make a significantly cleaner algorithm that passes the Benford test, and present it at the Berkeley PDTAI seminar. The cleaner algorithm is based on sleeping experts, which is pretty close to things that are going on in logical induction. It involves a bundle of hypotheses that assign probabilities to logical sentences, but are also allowed to sleep and not assign probabilities on some days if they don't want to. I also notice that reflective oracle Solomonoff induction can be used to implement some of the sleeping expert machinery, because the hypotheses have access to the overall distribution, and can avoid being on the hook on a given day by deferring to the average.
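A minimal sketch of the sleeping-experts idea, with the caveat that this is my own toy construction and not the actual MIRIxLA algorithm: each expert may output a probability or abstain ("sleep"), the aggregate is a weighted average over the awake experts, and a sleeper is scored as if it had deferred to the aggregate, so sleeping never costs it more than the market itself loses.

```python
# Toy sleeping-experts aggregator (hypothetical; not MIRI's algorithm).
# Each expert outputs a probability in [0, 1], or None, meaning it sleeps.

def aggregate(weights, predictions):
    """Weighted average over awake experts; 0.5 if everyone sleeps."""
    awake = [(w, p) for w, p in zip(weights, predictions) if p is not None]
    if not awake:
        return 0.5
    total = sum(w for w, _ in awake)
    return sum(w * p for w, p in awake) / total

def update(weights, predictions, outcome, lr=0.5):
    """Penalize each expert by its loss; a sleeper is scored as if it
    had predicted the aggregate."""
    agg = aggregate(weights, predictions)
    new_weights = []
    for w, p in zip(weights, predictions):
        p_eff = agg if p is None else p   # sleeper defers to the average
        loss = abs(p_eff - outcome)       # simple absolute loss
        new_weights.append(w * (1 - lr * loss))
    return new_weights

weights = [1.0, 1.0, 1.0]
# Expert 1 is confident and right, expert 2 sleeps, expert 3 is wrong.
weights = update(weights, [0.9, None, 0.2], outcome=1)
```

After one update the confident-and-correct expert ends up with the most weight, the wrong expert with the least, and the sleeper in between.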

December 2015 - I start working for MIRI full time. One thing I am working on is writing up inductive coherence. At some point around here, I make a stronger version of the Benford test that my algorithm fails, which I think pushes it closer to limit coherence.

January 2016 - I figure out how to do the stronger version of the Benford test. My algorithm uses a lemma that Jessica proved for a separate thing she was doing, so we start to write up a paper on it.

February 2016 - While walking to work, I notice the linchpin for Logical Induction. I notice that the Benford test and limit coherence are both special cases of a sequence of probability assignments with the property that no gambler who can see the probabilities each day, who can choose to bet whenever they want, and who has finite starting wealth, can reach infinite returns. There is one type of gambler that can take advantage of a market that fails the Benford test, and another type that can take advantage of a market that is not limit coherent.
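To illustrate the criterion with a toy construction of my own (not the paper's formalism): a market that forever misprices a Benford-style sentence can be milked for unbounded returns by a gambler with finite starting wealth, which is exactly what the criterion forbids.

```python
import math

# A market that prices "the first digit of a(n) is 1" at 0.5 every day,
# when the true long-run frequency is log10(2) ~ 0.301, is exploitable:
# a gambler who sells one share per day collects 0.5 up front and pays
# out 1 only ~30% of the time, so wealth grows without bound from zero
# starting capital.

PRICE = 0.5
P_TRUE = math.log10(2)

def wealth_after(days: int) -> float:
    """Gambler's accumulated profit from selling one share each day."""
    wealth = 0.0
    for day in range(1, days + 1):
        # stand-in "pseudorandom" truth sequence with limiting frequency P_TRUE
        outcome = 1.0 if (day * P_TRUE) % 1.0 < P_TRUE else 0.0
        wealth += PRICE - outcome  # collect the price, pay out on truth
    return wealth
```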

I am very excited. I know at the time that I am likely about to achieve my goal from roughly the last 10 months. I tell Abram and everyone at the MIRI office. Jessica is actually the first to point out that we can use continuity, similarly to what you do with reflective oracles. The same day, Jessica and I manage to show that you can make a market that no continuous gambler can exploit as described, and verify that continuous gamblers are enough to ensure you pass the Benford test and have a coherent limit. At this point the meat of logical induction is done.

Over the next 6 months, we put a lot of effort into making the theory clean, but we didn't really have to make the algorithm better. The very first thing we found that was able to pass the Benford test and have a coherent limit had basically all the properties that you see in the final paper. Over much of this time, we were still not sure if we wanted to make logical induction public.

• Happy birthday Scott! What’s your current thinking about logical induction? Should the criterion be modified due to this?

• The linchpin discovery is probably February 2016.

• fixed

• How causally important were Dutch-book theorems in suggesting to you that market behaviour could be used for logical induction? This seems like the most “non sequitur” part of the story. Suddenly, what seems like a surprising insight was just there.

I predict somewhere between “very” and “crucially” important.