Abstracted Idealized Dynamics

Followup to: Morality as Fixed Computation

I keep trying to describe morality as a “computation”, but people don’t stand up and say “Aha!”

Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say “computation”, some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.

Maybe I should have said that morality is an abstracted idealized dynamic. This might not have meant anything to start with, but at least it wouldn’t sound like I was describing Microsoft Word.

How, oh how, am I to describe the awesome import of this concept, “computation”?

Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word—namely, morality.

Consider certain features we might wish to ascribe to that-which-we-call “morality”, or “should” or “right” or “good”:

• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.

Someone sees a slave being whipped, and it doesn’t occur to them right away that slavery is wrong. But they go home and think about it, and imagine themselves in the slave’s place, and finally think, “No.”

Can you think of anywhere else that something like this happens?

Suppose I tell you that I am making a rectangle of pebbles. You look at the rectangle, and count 19 pebbles on one side and 103 pebbles on the other side. You don’t know right away how many pebbles there are. But you go home to your living room, and draw the blinds, and sit in your armchair and think; and without further looking at the physical array, you come to the conclusion that the rectangle contains 1957 pebbles.

Now, I’m not going to say the word “computation”. But it seems like that-which-is “morality” should have the property of latent development of answers—that you may not know right away everything that you have sufficient in-principle information to know. All the ingredients are present, but it takes additional time to bake the pie.

You can specify a Turing machine of 6 states and 2 symbols that unfolds into a string of 4.6 × 10^1439 1s after 2.5 × 10^2879 steps. A machine I could describe aloud in ten seconds runs longer, and produces a larger state, than the whole observed universe to date.

When you distinguish between the program description and the program’s executing state, between the process specification and the final outcome, between the question and the answer, you can see why even certainty about a program description does not imply human certainty about the executing program’s outcome. See also Artificial Addition on the difference between a compact specification and a flat list of outputs.
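To make the split between a program’s description and its unfolded answer concrete, here is a minimal Python sketch (my own illustration, not anything from the original post): a dictionary-based simulator running the classic 3-state, 2-symbol busy beaver, whose whole description fits in six rules but whose final tape only exists once the machine has actually been run. The 6-state machine mentioned above could never be simulated to completion this way.

```python
# A tiny Turing machine simulator: the program description is six table
# entries, but the answer (the final tape) only develops by running it.

def run_turing_machine(rules, max_steps=1_000_000):
    """rules maps (state, symbol) -> (write, move, next_state); 'H' halts."""
    tape, head, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return sum(tape.values()), steps

# The classic 3-state, 2-symbol busy beaver: a compact description that
# unfolds, step by step, into a tape of six 1s.
bb3 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'R', 'H'),
    ('B', 0): (0, 'R', 'C'), ('B', 1): (1, 'R', 'B'),
    ('C', 0): (1, 'L', 'C'), ('C', 1): (1, 'L', 'A'),
}

ones, steps = run_turing_machine(bb3)
print(f"{ones} ones after {steps} steps")  # the latent answer, developed by running it
```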

Morality, likewise, is something that unfolds, through arguments, through discovery, through thinking; from a bounded set of intuitions and beliefs that animate our initial states, to a potentially much larger set of specific moral judgments we may have to make over the course of our lifetimes.

• When two human beings both think about the same moral question, even in a case where they both start out uncertain of the answer, it is not unknown for them to come to the same conclusion. It seems to happen more often than chance alone would allow—though the biased focus of reporting and memory is on the shouting and the arguments. And this is so, even if both humans remain in their armchairs and do not peek out the living-room blinds while thinking.

Where else does this happen? It happens when trying to guess the number of pebbles in a rectangle of sides 19 and 103. Now this does not prove by Greek analogy that morality is multiplication. If A has property X and B has property X it does not follow that A is B. But it seems that morality ought to have the property of expected agreement about unknown latent answers, which, please note, generally implies that similar questions are being asked in different places.

This is part of what is conveyed by the Word of Power, “computation”: the notion of similar questions being asked in different places and having similar answers. Or as we might say in the business, the same computation can have multiple instantiations.

If we know the structure of calculator 1 and calculator 2, we can decide that they are “asking the same question” and that we ought to see the “same result” flashing on the screen of calculator 1 and calculator 2 after pressing the Enter key. We decide this in advance of seeing the actual results, which is what makes the concept of “computation” predictively useful.

And in fact, we can make this deduction even without knowing the exact circuit diagrams of calculators 1 and 2, so long as we’re told that the circuit diagrams are the same.

And then when we see the result “1957” flash on the screen of calculator 1, we know that the same “1957” can be expected to flash on calculator 2, and we even expect to count up 1957 pebbles in the array of 19 by 103.

A hundred calculators, performing the same multiplication in a hundred different ways, can be expected to arrive at the same answer—and this is not a vacuous expectation adduced after seeing similar answers. We can form the expectation in advance of seeing the actual answer.
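As a hedged illustration of “the same computation, multiple instantiations” (the function names below are mine, chosen for the example): three physically different procedures that we judge, from their structure alone, to be asking the same question, so that we expect agreement before looking at any screen.

```python
# Three physically different procedures that we decide, in advance and from
# structure alone, are "asking the same question" -- and so should agree.

def multiply_builtin(a, b):
    return a * b                          # calculator 1: hardware multiply

def multiply_by_addition(a, b):
    total = 0
    for _ in range(b):                    # calculator 2: repeated addition
        total += a
    return total

def count_pebble_array(a, b):
    grid = [[1] * a for _ in range(b)]    # the array of pebbles itself
    return sum(sum(row) for row in grid)

# We form this expectation before looking at any of the "screens":
results = {f(19, 103) for f in (multiply_builtin, multiply_by_addition, count_pebble_array)}
print(results)  # {1957} -- one abstract answer, three instantiations
```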

Now this does not show that morality is in fact a little electronic calculator. But it highlights the notion of something that factors out of different physical phenomena in different physical places, even phenomena as physically different as a calculator and an array of pebbles—a common answer to a common question. (Where is this factored-out thing? Is there an Ideal Multiplication Table written on a stone tablet somewhere outside the universe? But we are not concerned with that for now.)

Seeing that one calculator outputs “1957”, we infer that the answer—the abstracted answer—is 1957; and from there we make our predictions of what to see on all the other calculator screens, and what to see in the array of pebbles.

So that-which-we-name-morality seems to have the further properties of agreement about developed latent answers, which we may as well think of in terms of abstract answers; and note that such agreement is unlikely in the absence of similar questions.

• We sometimes look back on our own past moral judgments, and say “Oops!” E.g., “Oops! Maybe in retrospect I shouldn’t have killed all those guys when I was a teenager.”

So by now it seems easy to extend the analogy, and say: “Well, maybe a cosmic ray hits one of the transistors in the calculator and it says ‘1959’ instead of 1957—that’s an error.”

But this notion of “error”, like the notion of “computation” itself, is more subtle than it appears.

Calculator Q says ‘1959’ and calculator X says ‘1957’. Who says that calculator Q is wrong, and calculator X is right? Why not say that calculator X is wrong and calculator Q is right? Why not just say, “the results are different”?

“Well,” you say, drawing on your store of common sense, “if it was just those two calculators, I wouldn’t know for sure which was right. But here I’ve got nine other calculators that all say ‘1957’, so it certainly seems probable that 1957 is the correct answer.”

What’s this business about “correct”? Why not just say “different”?

“Because if I have to predict the outcome of any other calculators that compute 19 x 103, or the number of pebbles in a 19 x 103 array, I’ll predict 1957—or whatever observable outcome corresponds to the abstract number 1957.”

So perhaps 19 x 103 = 1957 only most of the time. Why call the answer 1957 the correct one, rather than the mere fad among calculators, the majority vote?

If I’ve got a hundred calculators, all of them rather error-prone—say a 10% probability of error—then there is no one calculator I can point to and say, “This is the standard!” I might pick a calculator that would happen, on this occasion, to vote with ten other calculators rather than ninety other calculators. This is why I have to idealize the answer, to talk about this ethereal thing that is not associated with any particular physical process known to me—not even arithmetic done in my own head, which can also be “incorrect”.

It is this ethereal process, this idealized question, to which we compare the results of any one particular calculator, and say that the result was “right” or “wrong”.

But how can we obtain information about this perfect and un-physical answer, when all that we can ever observe are merely physical phenomena? Even doing “mental” arithmetic just tells you about the result in your own, merely physical brain.

“Well,” you say, “the pragmatic answer is that we can obtain extremely strong evidence by looking at the results of a hundred calculators, even if they are only 90% likely to be correct on any one occasion.”
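Here is a small sketch of that pragmatic answer, with the 10% error rate and the corruption model chosen purely for illustration: a hundred noisy calculators, no one of which is “the standard”, still pin down the idealized answer by simple majority vote.

```python
# A hundred error-prone calculators (each with an assumed, illustrative 10%
# chance of returning a corrupted result) still identify the idealized answer
# by majority vote, even though no single calculator is the standard.
import random
from collections import Counter

def noisy_calculator(a, b, error_rate=0.10):
    answer = a * b
    if random.random() < error_rate:
        return answer + random.choice([-2, -1, 1, 2])   # cosmic-ray corruption
    return answer

votes = Counter(noisy_calculator(19, 103) for _ in range(100))
majority, count = votes.most_common(1)[0]
print(majority, count)   # almost always 1957, with roughly 90 of the 100 votes
```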

But wait: When do electrons or quarks or magnetic fields ever make an “error”? If no individual particle can be mistaken, how can any collection of particles be mistaken? The concept of an “error”, though humans may take it for granted, is hardly something that would be mentioned in a fully reductionist view of the universe.

Really, what happens is that we have a certain model in mind of the calculator—the model that we looked over and said, “This implements 19 * 103”—and then other physical events caused the calculator to depart from this model, so that the final outcome, while physically lawful, did not correlate with that mysterious abstract thing, and the other physical calculators, in the way we had in mind. Given our mistaken beliefs about the physical process of the first calculator, we would look at its output ‘1959’, and make mistaken predictions about the other calculators (which do still hew to the model we have in mind).

So “incorrect” cashes out, naturalistically, as “physically departed from the model that I had of it” or “physically departed from the idealized question that I had in mind”. A calculator struck by a cosmic ray is not ‘wrong’ in any physical sense, not an unlawful event in the universe; but the outcome is not the answer to the question you had in mind, the question that you believed empirically-falsely the calculator would correspond to.

The calculator’s “incorrect” answer, one might say, is an answer to a different question than the one you had in mind—it is an empirical fact about the calculator that it implements a different computation.

• The ‘right’ act or the ‘should’ option sometimes seem to depend on the state of the physical world. For example, should you cut the red wire or the green wire to disarm the bomb?

Suppose I show you a long straight line of pebbles, and ask you, “How many pebbles would I have, if I had a rectangular array of six lines like this one?” You start to count, but only get up to 8 when I suddenly blindfold you.

Now you are not completely ignorant of the answer to this question. You know, for example, that the result will be even, and that it will be greater than 48. But you can’t answer the question until you know how many pebbles were in the original line.

But mark this about the question: It wasn’t a question about anything you could directly see in the world, at that instant. There was not in fact a rectangular array of pebbles, six on a side. You could perhaps lay out an array of such pebbles and count the results—but then there are more complicated computations that we could run on the unknown length of a line of pebbles. For example, we could treat the line length as the start of a Goodstein sequence, and ask whether the sequence halts. To physically play out this sequence would require many more pebbles than exist in the universe. Does it make sense to ask whether the Goodstein sequence that starts with the length of this line of pebbles “would halt”? Does it make sense to talk about the answer, in a case like this?

I’d say yes, personally.

But meditate upon the etherealness of the answer—that we talk about idealized abstract processes that never really happen; that we talk about what would happen if the law of the Goodstein sequence came into effect upon this line of pebbles, even though the law of the Goodstein sequence will never physically come into effect.

It is the same sort of etherealness that accompanies the notion of a proposition that 19 * 103 = 1957 which factors out of any particular physical calculator and is not identified with the result of any particular physical calculator.

Only now that etherealness has been mixed with physical things; we talk about the effect of an ethereal operation on a physical thing. We talk about what would happen if we ran the Goodstein process on the number of pebbles in this line here, which we have not counted—we do not know exactly how many pebbles there are. There is no tiny little XML tag upon the pebbles that says “Goodstein halts”, but we still think—or at least I still think—that it makes sense to say of the pebbles that they have the property of their Goodstein sequence terminating.

So computations can be, as it were, idealized abstract dynamics—idealized abstract applications of idealized abstract laws, iterated over an imaginary causal-time that could go on for quite a number of steps (as Goodstein sequences often do).
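For the curious, here is a sketch of the Goodstein process itself, assuming the usual hereditary-base rewrite; it is exactly the sort of idealized dynamic we can reason about even though, for most starting values, it could never be physically played out. (The step cap is there because even a seed of 4 takes an astronomically long time to reach zero.)

```python
def bump_base(n, b, c):
    """Rewrite n in hereditary base-b notation, then replace every b with c."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            total += digit * c ** bump_base(power, b, c)
        power += 1
    return total

def goodstein(seed, max_steps=50):
    """Yield the Goodstein sequence starting from `seed` (base starts at 2)."""
    n, base = seed, 2
    for _ in range(max_steps):
        yield n
        if n == 0:
            return
        n = bump_base(n, base, base + 1) - 1
        base += 1

print(list(goodstein(3)))       # terminates quickly: [3, 3, 3, 2, 1, 0]
print(list(goodstein(4))[:6])   # provably reaches 0 too, but only after astronomically many steps
```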

So when we wonder, “Should I cut the red wire or the green wire?”, we are not multiplying or simulating the Goodstein process, in particular. But we are wondering about something that is not physically immanent in the red wires or the green wires themselves; there is no little XML tag on the green wire, saying, “This is the wire that should be cut.”

We may not know which wire defuses the bomb, but say, “Whichever wire does in fact defuse the bomb, that is the wire that should be cut.”

Still, there are no little XML tags on the wires, and we may not even have any way to look inside the bomb—we may just have to guess, in real life.

So if we try to cash out this notion of a definite wire that should be cut, it’s going to come out as...

...some rule that would tell us which wire to cut, if we knew the exact state of the physical world...

...which is to say, some kind of idealized abstract process into which we feed the state of the world as an input, and get back out, “cut the green wire” or “cut the red wire”...

...which is to say, the output of a computation that would take the world as an input.
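A toy way to cash this out in code (all names here are hypothetical, not from the post): the “right wire” is not a property written on the wires, but the output of an idealized rule that would take the true state of the world as its input.

```python
# An illustrative sketch: the rule is a function from (a model of) the world's
# state to an action, not a tag attached to the wires themselves.
from dataclasses import dataclass

@dataclass
class BombState:
    defusing_wire: str          # the fact of the matter, which we may not know

def which_wire_to_cut(world: BombState) -> str:
    """The idealized rule: whichever wire in fact defuses the bomb, cut that one."""
    return world.defusing_wire

# If we knew the exact state of the world, the rule would tell us the answer:
print(which_wire_to_cut(BombState(defusing_wire="green")))   # cut the green wire
```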

• And finally I note that from the twin phenomena of moral agreement and moral error, we can construct the notion of moral disagreement.

This adds nothing to our understanding of “computation” as a Word of Power, but it’s helpful in putting the pieces together.

Let’s say that Bob and Sally are talking about an abstracted idealized dynamic they call “Enamuh”.

Bob says “The output of Enamuh is ‘Cut the blue wire’,” and Sally says “The output of Enamuh is ‘Cut the brown wire’.”

Now there are several non-exclusive possibilities:

Either Bob or Sally could have committed an error in applying the rules of Enamuh—they could have done the equivalent of mis-multiplying known inputs.

Either Bob or Sally could be mistaken about some empirical state of affairs upon which Enamuh depends—the wiring of the bomb.

Bob and Sally could be talking about different things when they talk about Enamuh, in which case both of them are committing an error when they refer to Enamuh_Bob and Enamuh_Sally by the same name. (However, if Enamuh_Bob and Enamuh_Sally differ in the sixth decimal place in a fashion that doesn’t change the output about which wire gets cut, Bob and Sally can quite legitimately gloss the difference.)

Or if Enamuh itself is defined by some other abstracted idealized dynamic, a Meta-Enamuh whose output is Enamuh, then either Bob or Sally could be mistaken about Meta-Enamuh in any of the same ways they could be mistaken about Enamuh. (But in the case of morality, we have an abstracted idealized dynamic that includes a specification of how it, itself, changes. Morality is self-renormalizing—it is not a guess at the product of some different and outside source.)

To sum up:

  • Morality, like computation, involves latent development of answers;

  • Morality, like computation, permits expected agreement of unknown latent answers;

  • Morality, like computation, reasons about abstract results apart from any particular physical implementation;

  • Morality, like computation, unfolds from bounded initial state into something potentially much larger;

  • Morality, like computation, can be viewed as an idealized dynamic that would operate on the true state of the physical world—permitting us to speak about idealized answers of which we are physically uncertain;

  • Morality, like computation, lets us speak of such un-physical stuff as “error”, by comparing a physical outcome to an abstract outcome—presumably in a case where there was previously reason to believe or desire that the physical process was isomorphic to the abstract process, yet this was not actually the case.

And so with all that said, I hope that the word “computation” has come to convey something other than Microsoft Word.

Part of The Metaethics Sequence

Next post: “‘Arbitrary’”

Previous post: “Moral Error and Moral Disagreement”