My Kind of Reflection

In “Where Recursive Justification Hits Bottom”, I concluded that it’s okay to use induction to reason about the probability that induction will work in the future, given that it’s worked in the past; or to use Occam’s Razor to conclude that the simplest explanation for why Occam’s Razor works is that the universe itself is fundamentally simple.

Now I am far from the first person to consider reflective application of reasoning principles. Chris Hibbert compared my view to Bartley’s Pan-Critical Rationalism (I was wondering whether that would happen). So it seems worthwhile to state what I see as the distinguishing features of my view of reflection, which may or may not happen to be shared by any other philosopher’s view of reflection.

• All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code. So whenever I talk about using induction to license induction, I’m really thinking about an inductive AI considering a rewrite of the part of itself that performs induction. If you wouldn’t want the AI to rewrite its source code to not use induction, your philosophy had better not label induction as unjustifiable.

• One of the most powerful general principles I know for AI in general is that the true Way generally turns out to be naturalistic; for reflective reasoning, this means treating transistors inside the AI just as if they were transistors found in the environment, not as an ad-hoc special case. This is the real source of my insistence in “Recursive Justification” that questions like “How well does my version of Occam’s Razor work?” should be considered just like an ordinary question, or at least an ordinary very deep question. I strongly suspect that a correctly built AI, in pondering modifications to the part of its source code that implements Occamian reasoning, will not have to do anything special as it ponders; in particular, it shouldn’t have to make a special effort to avoid using Occamian reasoning.

• I don’t think that “reflective coherence” or “reflective consistency” should be considered as a desideratum in itself. As I said in the Twelve Virtues and the Simple Truth, if you make five accurate maps of the same city, then the maps will necessarily be consistent with each other; but if you draw one map by fantasy and then make four copies, the five will be consistent but not accurate. In the same way, one shouldn’t be deliberately pursuing reflective consistency, and reflective consistency is not a special warrant of trustworthiness; the goal is to win. But anyone who pursues the goal of winning, using their current notion of winning, and modifying their own source code, will end up reflectively consistent as a side effect, just as someone continually striving to improve their map of the world should find the parts becoming more consistent among themselves as a side effect. If you put on your AI goggles, then the AI, rewriting its own source code, is not trying to make itself “reflectively consistent”; it is trying to optimize the expected utility of its source code, and it happens to be doing this using its current mind’s anticipation of the consequences.

• One of the ways I license using induction and Occam’s Razor to consider “induction” and “Occam’s Razor” is by appealing to E. T. Jaynes’s principle that we should always use all the information available to us (computing power permitting) in a calculation. If you think induction works, then you should use it in order to use your maximum power, including when you’re thinking about induction.

• In general, I think it’s valuable to distinguish a defensive posture, where you’re imagining how to justify your philosophy to a philosopher who questions you, from an aggressive posture, where you’re trying to get as close to the truth as possible. So the point of being suspicious of Occam’s Razor, while using your current mind and intelligence to inspect it, is not to show that you’re being fair and defensible by questioning your foundational beliefs. Rather, the reason why you would inspect Occam’s Razor is to see if you could improve your application of it, or because you’re worried it might really be wrong. I tend to deprecate mere dutiful doubts.

• If you run around inspecting your foundations, I expect you to actually improve them, not just dutifully investigate. Our brains are built to assess “simplicity” in a certain intuitive way that makes Thor sound simpler than Maxwell’s Equations as an explanation for lightning. But, having gotten a better look at the way the universe really works, we’ve concluded that differential equations (which few humans master) are actually simpler (in an information-theoretic sense) than heroic mythology (which is how most tribes explain the universe). This being the case, we’ve tried to import our notions of Occam’s Razor into math as well.

• On the other hand, the improved foundations should still add up to normality; 2 + 2 should still end up equalling 4, not something new and amazing and exciting like “fish”.

• I think it’s very important to distinguish between the questions “Why does induction work?” and “Does induction work?” Why the universe itself is regular is still a mysterious question unto us, for now. Strange speculations here may be temporarily needful. But on the other hand, if you start claiming that the universe isn’t actually regular, that the answer to “Does induction work?” is “No!”, then you’re wandering into 2 + 2 = 3 territory. You’re trying too hard to make your philosophy interesting, instead of correct. An inductive AI asking what probability assignment to make on the next round is asking “Does induction work?”, and this is the question that it may answer by inductive reasoning (a toy sketch of this move appears after this list). If you ask “Why does induction work?” then answering “Because induction works” is circular logic, and answering “Because I believe induction works” is magical thinking.

• I don’t think that going around in a loop of justifications through the meta-level is the same thing as circular logic. I think the notion of “circular logic” applies within the object level, and is something that is definitely bad and forbidden, on the object level. Forbidding reflective coherence doesn’t sound like a good idea. But I haven’t yet sat down and formalized the exact difference; my reflective theory is something I’m trying to work out, not something I have in hand.
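
To make the inductive AI’s next-round question concrete, here is a minimal toy sketch of that move, answering “Does induction work?” by applying induction to the agent’s own track record. This is only an illustration under assumptions the bullets above don’t spell out: the agent scores each past round’s inductive prediction as a success or failure (the counts below are made up), and uses Laplace’s rule of succession for the next round.

```python
# Toy sketch only: an inductive agent assigning a probability to
# "induction works on the next round" by induction over its own track record.
# The counts are made up for illustration; nothing here is a real AI design.

def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: P(next round succeeds), given
    `successes` out of `trials` so far and a uniform prior on the rate."""
    return (successes + 1) / (trials + 2)

# Suppose the agent's inductive predictions have come out right 98 times in 100 rounds.
p_next = rule_of_succession(successes=98, trials=100)
print(f"P(induction works on the next round) = {p_next:.3f}")  # prints 0.971
```

The only point of the sketch is the shape of the move: the probability assigned to the next round is computed by the same inductive machinery whose reliability is being assessed, a loop through the meta-level rather than object-level circular logic, and it will happily return a low number if the track record is bad.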