Lewis Carroll, who was also a mathematician, once wrote a short dialogue called What the Tortoise said to Achilles. If you have not yet read this ancient classic, consider doing so now.

The Tortoise offers Achilles a step of reasoning drawn from Euclid’s First Proposition:

(A) Things that are equal to the same are equal to each other.
(B) The two sides of this Triangle are things that are equal to the same.
(Z) The two sides of this Triangle are equal to each other.

Tortoise: “And if some reader had not yet accepted A and B as true, he might still accept the sequence as a valid one, I suppose?”

Achilles: “No doubt such a reader might exist. He might say, ‘I accept as true the Hypothetical Proposition that, if A and B be true, Z must be true; but, I don’t accept A and B as true.’ Such a reader would do wisely in abandoning Euclid, and taking to football.”

Tortoise: “And might there not also be some reader who would say, ‘I accept A and B as true, but I don’t accept the Hypothetical’?”

Achilles, unwisely, concedes this; and so asks the Tortoise to accept another proposition:

(C) If A and B are true, Z must be true.

But, asks the Tortoise, suppose that he accepts A and B and C, but not Z?

Then, says Achilles, he must ask the Tortoise to accept one more hypothetical:

(D) If A and B and C are true, Z must be true.

Douglas Hofstadter paraphrased the argument some time later:

Achilles: If you have [(A⋀B)→Z], and you also have (A⋀B), then surely you have Z.
Tortoise: Oh! You mean <{(A⋀B)⋀[(A⋀B)→Z]}→Z>, don’t you?

As Hofstadter says, “Whatever Achilles considers a rule of inference, the Tortoise immediately flattens into a mere string of the system. If you use only the letters A, B, and Z, you will get a recursive pattern of longer and longer strings.”

By now you should recognize the anti-pattern Passing the Recursive Buck; and though the counterspell is sometimes hard to find, when found, it generally takes the form The Buck Stops Immediately.

The Tortoise’s mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
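The required dynamic can be made concrete with a small sketch. Everything here (the function name, representing an implication X→Y as a tuple) is my own illustrative assumption, not anything from the dialogue; the point is that modus ponens lives in the running loop, not as another sentence in the belief pool, so the Tortoise’s regress never gets started.

```python
# Illustrative sketch: beliefs are strings like "A"; an implication
# X -> Y is represented as the tuple (X, Y). The modus ponens *dynamic*
# is the loop itself -- executable machinery, not a belief in the pool.

def close_under_modus_ponens(beliefs):
    """Keep adding Y whenever both X and (X -> Y) are in the pool."""
    pool = set(beliefs)
    changed = True
    while changed:
        changed = False
        for belief in list(pool):
            if isinstance(belief, tuple):   # an implication (X, Y)
                x, y = belief
                if x in pool and y not in pool:
                    pool.add(y)             # the dynamic fires
                    changed = True
    return pool

# Storing the implication as data does nothing by itself; the loop acts,
# so "Z" ends up in the pool.
pool = close_under_modus_ponens({"A", ("A", "Z")})
```

Note that adding more implications-as-data, ("A", "Z"), (("A", ("A", "Z")), "Z"), and so on, never helps a system that lacks the loop; only the loop ever puts "Z" in the pool.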

The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.

And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as...

(A) If a toddler is on the train tracks, then pulling them off is fuzzle.
(B) There is a toddler on the train tracks.

...unless the mind also implements:

Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.

(Added: Apparently this wasn’t clear… By “dynamic” I mean a property of a physically implemented cognitive system’s development over time. A “dynamic” is something that happens inside a cognitive system, not data that it stores in memory and manipulates. Dynamics are the manipulations. There is no way to write a dynamic on a piece of paper, because the paper will just lie there. So the text immediately above, which says “dynamic”, is not dynamic. If I wanted the text to be dynamic and not just say “dynamic”, I would have to write a Java applet.)
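In the same spirit, the fuzzle dynamic can be sketched as code (the names `belief_pool` and `action_system`, and the string convention, are illustrative assumptions on my part). The routing happens because this code physically executes, not because the rule is written down anywhere in the pool:

```python
# Illustrative sketch: the dynamic below is executable machinery that
# inspects the belief pool and routes any plan labeled "fuzzle" to the
# action system. The belief is data; the routing is a dynamic.

def run_fuzzle_dynamic(belief_pool, action_system):
    """Dynamic: when the pool contains 'X is fuzzle', send X to action."""
    suffix = " is fuzzle"
    for belief in belief_pool:
        if belief.endswith(suffix):
            action_system.append(belief[: -len(suffix)])

actions = []
run_fuzzle_dynamic(
    ["there is a toddler on the train tracks",        # belief (B)
     "pulling the toddler off the tracks is fuzzle"], # conclusion of (A)
    actions,
)
# Only the fuzzle-labeled plan reaches the action system.
```

A mind that merely stores the sentence “send fuzzle things to the action system” without running anything like this loop is in the rock’s position.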

Needless to say, having the belief...

(C) If the belief pool contains “X is fuzzle”, then “send ‘X’ to the action system” is fuzzle.

...won’t help unless the mind already implements the behavior of translating hypothetical actions labeled ‘fuzzle’ into actual motor actions.

By dint of careful arguments about the nature of cognitive systems, you might be able to prove...

(D) A mind with a dynamic that sends plans labeled “fuzzle” to the action system, is more fuzzle than minds that don’t.

...but that still won’t help, unless the listening mind previously possessed the dynamic of swapping out its current source code for alternative source code that is believed to be more fuzzle.

This is why you can’t argue fuzzleness into a rock.

Part of The Metaethics Sequence

Next post: “The Bedrock of Fairness”

Previous post: “The Moral Void”

• I think this just begs the question:

Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.
Ah, but the tortoise would argue that this isn’t enough. Sure, the belief pool may contain “X is fuzzle,” and this dynamic, but that doesn’t mean that X necessarily gets sent to the action system. In addition, you need another dynamic:

Dynamic 2: When the belief pool contains “X is fuzzle”, and there is a dynamic saying “When the belief pool contains ‘X is fuzzle’, send X to the action system”, then send X to the action system.

Or, to put it another way:

Dynamic 2: When the belief pool contains “X is fuzzle”, run Dynamic 1.

Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum—and we’re back to the original problem.

I think the real point of the dialogue is that you can’t use rules of inference to derive rules of inference—even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they’re the machines that you feed the axioms into. Then one naturally starts to ask questions about how you can “program” the machines by feeding in certain kinds of axioms, and what happens if you try to feed a program’s description to itself, various paradoxes of self-reference, etc. This is where the connection to Gödel and Turing comes in—and probably why Hofstadter included this fable.

Cheers, Ari

• Ari, dynamics don’t say things; they do things.

• The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.
To add to my previous comment, I think there’s a more rigorous way to express this point. (The “motion” analogy seems pretty vague.)

A non-universal Turing machine can’t simulate a universal Turing machine. (If it could, it would be universal after all—a contradiction.) In other words, there are computers that can self-program and those that can’t, and no amount of programming can change the latter into the former.

Cheers, Ari

• Well, at least I can’t be accused of belaboring a point so obvious that no one could possibly get it wrong.

• Within our “anything can influence anything” (more or less) physics, the distinction between communicating the proposition and just physically “setting in motion” is not clear-cut. A programmable mind can assume the dynamics encoded in some weak signals; a rock can also assume different dynamics, but you’ll have to build a machine from it first, applying more than weak signals.

• I think the moral is that you shouldn’t try to write software for which you don’t have the hardware to run it on, not even if the code could run itself by emulating the hardware. A rock runs on physics; Euclid’s rules don’t. We have morality to run on our brains, and… isn’t FAI about porting it to physics?

So shouldn’t we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, me reading the word “dynamic” uses more computing power than running any Java applet could on current computers...)

• This is why it’s always seemed so silly to me to try to axiomatize logic. Either you already “implement” logic, in which case it’s unnecessary, or you don’t, in which case you’re a rock and there’s no point in dealing with you.

I think this also has deeper implications for the philosophy of math—the desire to fully axiomatize is still deeply ingrained despite Gödel, but in some ways this seems like a more fundamental challenge. You can write down as many rules as you want for string manipulation, but the realization of those rules in actual manipulation remains ineffable on paper.

• Axiomatizing logic isn’t to make us implement logic in the first place!

It’s to enable us to store and communicate logic.

• I wouldn’t describe any typical human mind as implementing logic. Even those that are logical don’t seem to think that way naturally or innately. But particular human minds have had much success thinking with ‘axiomatized’ logic.

• Isn’t a silicon chip technically a rock?

Also, I take it that this means you don’t believe in the whole, “if a program implements consciousness, then it must be conscious while sitting passively on the hard disk” thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

• Isn’t a silicon chip technically a rock?

Rocks are naturally formed. It’s not physically impossible for natural processes to form silicon into a working computer, but it’s certainly not likely.

• Also, I take it that this means you don’t believe in the whole, “if a program implements consciousness, then it must be conscious while sitting passively on the hard disk” thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

I used that as an argument against timeless physics: If you could have consciousness in a timeless universe, then this means that you could simulate a conscious being without actually running the simulation; you could just put the data on the hard drive. I’m still waiting for an answer on that one!

• In order for it to be analogous, you’d have to put the contents of the memory for every step of the program as it’s running on the hard drive. The program itself isn’t sufficient.

Since there’s no way to get the memory at every step without actually running the program, it doesn’t seem that paradoxical.

Also, if time was an explicit dimension, that would just mean that the results of the program are spread out on a straight line aligned along the t-axis. I don’t see why making it a curvy line makes it any different.

• Huh? A “timeless universe” still contains ‘time’; it’s just not fundamental. Consciousness may be a lot of things, but it’s definitely not static in ‘time’, i.e. it’s dynamic with respect to causality.

• IL, isn’t the difference the presence or absence of causality?

• “And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as… (A) If a toddler is on the train tracks, then pulling them off is fuzzle. (B) There is a toddler on the train tracks. …unless the mind also implements: Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.”

It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens and the Dynamic: “When the belief pool contains “X is right/desired/maximizing-my-utility-function/good”, send X to action system”

These days my thoughts are largely occupied with considering what causal dynamic could cause modus ponens and the above Dynamic to be implemented in a human mind.

IL: Timeless physics retains causality. Change some of the data on the hard drive and the other data won’t change as an inferential result. There are unsolved issues in this domain, but probably not easy ones. The process of creating the data on the hard drive might be necessarily conscious, for instance, or might not. I think that this was discussed earlier when we discussed giant look-up tables.

• It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens and the Dynamic: “When the belief pool contains “X is right/desired/maximizing-my-utility-function/good”, send X to action system”

This is soooo true

• You can fully describe the mind/brain in terms of dynamics without reference to logic or data. But you can’t do the reverse. I maintain that the dynamics are all that matters and the rest is just folk theory tarted up with a bad analogy (computationalism).

• “Fuzzle” = “Morally right.”

Only in terms of how this actually gets into a human mind, there is a dynamic first: before anyone has any idea of fuzzleness, things are already being sent to the action system. Then we say, “Oh, these things are fuzzle!”, i.e. these are the type of things that get sent to the action system. Then someone else tells us that something else is fuzzle, and right away it gets sent to the action system too.

• “Fuzzle” = “Morally right.”

Hm… As described, “fuzzle” = “chosen course of action”, or, “I choose”. Things labelled “fuzzle” are sent to the action system—this is all we’re told about “fuzzle”. But anything and everything that a system decides, chooses, sets out, to do, is sent to the action system. Not just moral things.

If we want to distinguish moral things from actions in general, we need to say more.

• I just want to note that back in 2008, even though I had already read this dialogue and thought I understood it, this was one of Eliezer’s posts that made me go: “Holy shit, I didn’t realize it was possible to think this clearly.”

• Going down to the bottom of the post for the TL;DR, I was pleasantly surprised at having the need to go back up again.

• Minor note: When trying to prove Strong Foundationalism (on which I have since given up), I came up with the idea of founding logic not on something anybody must accept but on something that must be true in any possible universe (e.g. 1+1=2 according to traditional logic, or reductionism, if I understand Eliezer correctly). This gets around the tortoise’s problem and reestablishes logic.

Of course, this isn’t so relevant, because the tortoise can in response suggest the possibility that Achilles is insane, either his reasoning or his memory (or both, but that’s superfluous) being so far off track that he can’t trust them to perform proper reasoning.