Causal Reference

Followup to: The Fabric of Real Things, Stuff That Makes Stuff Happen

Previous meditation: “Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn’t affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there’s nobody home—no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place—for there to be no detectable difference internally, not just externally—‘consciousness’ would have to be something created by the atoms in the brain, but which didn’t affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I’m not so much interested in whether you think epiphenomenal theories of consciousness are true or false—rather, I want to know if you think they’re impossible or meaningless a priori based on your rules.”

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Well… there’s a couple of senses in which it seems imaginable. It’s important to remember that imagining things yields info primarily about what human brains can imagine. It only provides info about reality to the extent that we think imagination and reality are systematically correlated for some reason.

That said, I can certainly write a computer program in which there’s a tier of objects affecting each other, and a second tier—a lower tier—of epiphenomenal objects which are affected by them, but don’t affect them. For example, I could write a program to simulate some balls that bounce off each other, and then some little shadows that follow the balls around.
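As one minimal sketch of such a program (the dynamics and all the numbers here are invented purely for illustration): the balls interact with each other, while each shadow merely trails its ball, and nothing in the ball update ever reads a shadow’s state.

```python
# A toy two-tier universe: balls (upper tier) affect each other;
# shadows (lower tier) are affected by the balls but never affect them.

def step(balls, shadows):
    """Advance the toy universe by one tick."""
    # Upper tier: 1-D balls swap velocities when they come close
    # (a crude "bounce"); this update never looks at the shadows.
    for i in range(len(balls)):
        for j in range(i + 1, len(balls)):
            if abs(balls[i]["x"] - balls[j]["x"]) < 1.5:
                balls[i]["v"], balls[j]["v"] = balls[j]["v"], balls[i]["v"]
    for b in balls:
        b["x"] += b["v"]
    # Lower tier: each shadow drifts toward its ball.  The causal arrow
    # points only downward: we read from `balls`, write only to `shadows`.
    for s, b in zip(shadows, balls):
        s["x"] += 0.5 * (b["x"] - s["x"])

balls = [{"x": 0.0, "v": 1.0}, {"x": 5.0, "v": -1.0}]
shadows = [{"x": 0.0}, {"x": 5.0}]
for _ in range(10):
    step(balls, shadows)
```

Deleting the shadow-update loop leaves the balls’ trajectories bit-for-bit identical, which is exactly what makes the shadows epiphenomenal.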

But then I only know about the shadows because I’m outside that whole universe, looking in. So my mind is being affected by both the balls and shadows—to observe something is to be affected by it. I know where the shadow is, because the shadow makes pixels be drawn on screen, which make my eye see pixels. If your universe has two tiers of causality—a tier with things that affect each other, and another tier of things that are affected by the first tier without affecting them—then could you know that fact from inside that universe?

Again, this seems easy to imagine as long as objects in the second tier can affect each other. You’d just have to be living in the second tier! We can imagine, for example—this wasn’t the way things worked out in our universe, but it might’ve seemed plausible to the ancient Greeks—that the stars in heaven (and the Sun as a special case) could affect each other and affect Earthly forces, but no Earthly force could affect them:

(Here the X’d-arrow stands for ‘cannot affect’.)

The Sun’s light would illuminate Earth, so it would cause plant growth. And sometimes you would see two stars crash into each other and explode, so you’d see they could affect each other. (And affect your brain, which was seeing them.) But the stars and Sun would be made out of a different substance, the ‘heavenly material’, and throwing any Earthly material at it would not cause it to change state in the slightest. The Earthly material might be burned up, but the Sun would occupy exactly the same position as before. It would affect us, but not be affected by us.

(To clarify an important point raised in the comments: In standard causal diagrams and in standard physics, no two individual events ever affect each other; there’s a causal arrow from the PAST to FUTURE but never an arrow from FUTURE to PAST. What we’re talking about here is the sun and stars over time, and the generalization over causal arrows that point from Star-in-Past to Sun-in-Present and Sun-in-Present back to Star-in-Future. The standard formalism dealing with this would be Dynamic Bayesian Networks (DBNs) in which there are repeating nodes and repeating arrows for each successive timeframe: X1, X2, X3, and causal laws F relating Xi to Xi+1. If the laws of physics did not repeat over time, it would be rather hard to learn about the universe! The Sun repeatedly sends out photons, and they obey the same laws each time they fall on Earth; rather than the Fi being new transition tables each time, we see a constant Fphysics over and over. By saying that we live in a single-tier universe, we’re observing that whenever there are F-arrows, causal-link-types, which (over repeating time) descend from variables-of-type-X to variables-of-type-Y (like present photons affecting future electrons), there are also arrows going back from Ys to Xs (like present electrons affecting future photons). If we weren’t generalizing over time, it couldn’t possibly make sense to speak of thingies that “affect each other”—causal diagrams don’t allow directed cycles!)
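The repeating-arrow structure is easy to render as code. Here is a minimal sketch (the state variables and the transition rule are invented for illustration, not taken from real physics): one constant transition function plays the role of Fphysics, applied identically at every time slice, with influence running both from photon-type variables to electron-type variables and back.

```python
# Unrolling a Dynamic Bayesian Network: X1, X2, X3, ... are generated by
# applying the SAME transition law at every step, rather than a fresh
# transition table per tick.  (Deterministic toy dynamics for brevity.)

def f_physics(state):
    """One constant causal law relating the time-t slice to time t+1."""
    photons, electrons = state
    # Single-tier: present photons affect future electrons AND present
    # electrons affect future photons -- type-level arrows run both ways.
    return (photons + electrons, electrons + 2 * photons)

def unroll(x0, steps):
    """Produce the trajectory X0, X1, ..., X_steps under f_physics."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(f_physics(trajectory[-1]))
    return trajectory

print(unroll((1, 0), 3))  # [(1, 0), (1, 2), (3, 4), (7, 10)]
```

A two-tier universe, in this framing, would be one where some variable type appears only on the tails of the repeated arrows (or only on the heads) in every slice.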

A two-tier causal universe seems easy to imagine, even easy to specify as a computer program. If you were arranging a Dynamic Bayes Net at random, would it randomly have everything in a single tier? If you were designing a causal universe at random, wouldn’t there randomly be some things that appeared to us as causes but not effects? And yet our own physicists haven’t discovered any upper-tier particles which can move us without being movable by us. There might be a hint here at what sort of thingies tend to be real in the first place—that, for whatever reasons, the Real Rules somehow mandate or suggest that all the causal forces in a universe be on the same level, capable of both affecting and being affected by each other.

Still, we don’t actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes. Discovering a class of upper-tier affect-only particles seems imaginable[1]—we can imagine which experiences would convince us that they existed. If we’re in the Matrix, we can see how to program a Matrix like that. If there’s some deeper reason why that’s impossible in any base-level reality, we don’t know it yet. So we probably want to call that a meaningful hypothesis for now.

But what about lower-tier particles which can be affected by us, and yet never affect us?

Perhaps there are whole sentient Shadow Civilizations living on my nose hairs which can never affect those nose hairs, but find my nose hairs solid beneath their feet. (The solid Earth affecting them but not being affected, like the Sun’s light affecting us in the ‘heavenly material’ hypothesis.) Perhaps I wreck their world every time I sneeze. It certainly seems imaginable—you could write a computer program simulating physics like that, given sufficient perverseness and computing power...

And yet the fundamental question of rationality—“What do you think you know, and how do you think you know it?”—raises the question:

How could you possibly know about the lower tier, even if it existed?

To observe something is to be affected by it—to have your brain and beliefs take on different states, depending on that thing’s state. How can you know about something that doesn’t affect your brain?

In fact there’s an even deeper question, “How could you possibly talk about that lower tier of causality even if it existed?”

Let’s say you’re a Lord of the Matrix. You write a computer program which first computes the physical universe as we know it (or a discrete approximation), and then you add a couple of lower-tier effects as follows:

First, every time I sneeze, the binary variable YES_SNEEZE will be set to the second of its two possible values.

Second, every time I sneeze, the binary variable NO_SNEEZE will be set to the first of its two possible values.
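As a sketch of that program (the variable names are from the text above; everything else, including the choice to encode the two possible values as 0 and 1, is an arbitrary implementation decision, which is precisely the point that follows):

```python
class Matrix:
    """Computes a universe, then appends two lower-tier shadow variables."""

    def __init__(self):
        # Each shadow variable is binary.  We happen to encode the "first"
        # possible value as 0 and the "second" as 1, but nothing inside the
        # simulated universe can ever detect this encoding choice.
        self.YES_SNEEZE = 0  # starts at the first of its two possible values
        self.NO_SNEEZE = 1   # starts at the second of its two possible values

    def on_sneeze(self):
        # Causal arrows run only downward: the sneeze sets the shadow
        # variables, and no line of the physics ever reads them back.
        self.YES_SNEEZE = 1  # second of its two possible values
        self.NO_SNEEZE = 0   # first of its two possible values

m = Matrix()
m.on_sneeze()
```

Both variables track the sneeze equally well, so nothing in the simulated person’s thought “a variable that gets set to 1” picks out YES_SNEEZE over NO_SNEEZE.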

Now let’s say that—somehow—even though I’ve never caught any hint of the Matrix—I just magically think to myself one day, “What if there’s a variable that watches when I sneeze, and gets set to 1?”

It will be all too easy for me to imagine that this belief is meaningful and could be true or false:

And yet in reality—as you know from outside the Matrix—there are two shadow variables that get set when I sneeze. How can I talk about one of them, rather than the other? Why should my thought about ‘1’ refer to their second possible value rather than their first possible value, inside the Matrix computer program? If we tried to establish a truth-value in this situation, to compare my thought to the reality inside the computer program—why compare my thought about SNEEZE_VAR to the variable YES_SNEEZE instead of NO_SNEEZE, or compare my thought ‘1’ to the first possible value instead of the second possible value?

Under more epistemically healthy circumstances, when you talk about things that are not directly sensory experiences, you will reference a causal model of the universe that you inducted to explain your sensory experiences. Let’s say you repeatedly go outside at various times of day, and your eyes and skin directly experience BRIGHT-WARM, BRIGHT-WARM, BRIGHT-WARM, DARK-COOL, DARK-COOL, etc. To explain the patterns in your sensory experiences, you hypothesize a latent variable we’ll call ‘Sun’, with some kind of state which can change between 1, which causes BRIGHTness and WARMness, and 0, which causes DARKness and COOLness. You believe that the state of the ‘Sun’ variable changes over time, but usually changes less frequently than you go outside.

p(BRIGHT | Sun=1) = 0.9
p(¬BRIGHT | Sun=1) = 0.1
p(BRIGHT | Sun=0) = 0.1
p(¬BRIGHT | Sun=0) = 0.9
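Given that table, the hypothesized ‘Sun’ variable does real predictive work: observing BRIGHT should raise your credence in Sun=1 by Bayes’ theorem. A minimal sketch (the 0.5 prior is my assumption for illustration; the text above specifies only the conditional probabilities):

```python
# Bayesian update on the latent 'Sun' variable, using the table above.

P_BRIGHT_GIVEN_SUN = {1: 0.9, 0: 0.1}  # p(BRIGHT | Sun), per the table

def posterior_sun(saw_bright, prior_sun=0.5):
    """Return p(Sun=1 | observation).  prior_sun=0.5 is an assumed prior."""
    # Likelihood of the observation under each hypothesized Sun-state.
    like = {s: p if saw_bright else 1.0 - p
            for s, p in P_BRIGHT_GIVEN_SUN.items()}
    joint_1 = prior_sun * like[1]
    joint_0 = (1.0 - prior_sun) * like[0]
    return joint_1 / (joint_1 + joint_0)

print(posterior_sun(True))   # roughly 0.9: BRIGHT is evidence for Sun=1
print(posterior_sun(False))  # roughly 0.1: DARK is evidence for Sun=0
```

The point is that ‘Sun’ earns its place in the model by changing what you predict, even though you never sense the latent variable directly.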

Standing here outside the Matrix, we might be tempted to compare your beliefs about “Sun = 1” to the real universe’s state regarding the visibility of the sun in the sky (or rather, the Earth’s rotational position).

But even if we compress the sun’s visibility down to a binary categorization, how are we to know that your thought “Sun = 1” is meant to correspond to the sun being visible in the sky, rather than the sun being occluded by the Earth? Why the first state of the variable, rather than the second state?

How indeed are we to know that this thought “Sun = 1” is meant to compare to the sun at all, rather than an anteater in Venezuela?

Well, because that ‘Sun’ thingy is supposed to be the cause of BRIGHT and WARM feelings, and if you trace back the cause of those sensory experiences in reality you’ll arrive at the sun that the ‘Sun’ thought allegedly corresponds to. And to distinguish between whether the sun being visible in the sky is meant to correspond to ‘Sun’=1 or ‘Sun’=0, you check the conditional probabilities for that ‘Sun’-state giving rise to BRIGHT—if the actual sun being visible has a 95% chance of causing the BRIGHT sensory feeling, then that true state of the sun is intended to correspond to the hypothetical ‘Sun’=1, not ‘Sun’=0.

Or to put it more generally, in cases where we have...

...then the correspondence between map and territory can at least in principle be point-wise evaluated by tracing causal links back from sensory experiences to reality, and tracing hypothetical causal links from sensory experiences back to hypothetical reality. We can’t directly evaluate that truth-condition inside our own thoughts; but we can perform experiments and be corrected by them.

Being able to imagine that your thoughts are meaningful and that a correspondence between map and territory is being maintained, is no guarantee that your thoughts are true. On the other hand, if you can’t even imagine within your own model how a piece of your map could have a traceable correspondence to the territory, that is a very bad sign for the belief being meaningful, let alone true. Checking to see whether you can imagine a belief being meaningful is a test which will occasionally throw out bad beliefs, though it is no guarantee of a belief being good.

Okay, but what about the idea that it should be meaningful to talk about whether or not a spaceship continues to exist after it travels over the cosmological horizon? Doesn’t this theory of meaningfulness seem to claim that you can only sensibly imagine something that makes a difference to your sensory experiences?

No. It says that you can only talk about events that your sensory experiences pin down within the causal graph. If you observe enough protons, electrons, neutrons, and so on, you can pin down the physical generalization which says, “Mass-energy is neither created nor destroyed; and in particular, particles don’t vanish into nothingness without a trace.” It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there’s a ship that went over the cosmological horizon and now we can’t see it any more.

To navigate referentially to the fact that the ship continues to exist over the cosmological horizon, we navigate from our sensory experience up to the laws of physics, by talking about the cause of electrons not blinking out of existence; we also navigate up to the ship’s existence by tracing back the cause of our observation of the ship being built. We can’t see the future ship over the horizon—but the causal links down from the ship’s construction, and from the laws of physics saying it doesn’t disappear, are both pinned down by observation—there’s no difficulty in figuring out which causes we’re talking about, or what effects they have.[2]

All righty-ighty, let’s revisit that meditation:

“Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn’t affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there’s nobody home—no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place—for there to be no detectable difference internally, not just externally—‘consciousness’ would have to be something created by the atoms in the brain, but which didn’t affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I’m not so much interested in whether you think epiphenomenal theories of consciousness are true or false—rather, I want to know if you think they’re impossible or meaningless a priori based on your rules.”

The closest theory to this which definitely does seem coherent—i.e., it’s imaginable that it has a pinpointed meaning—would be if there were another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences—that there was, from its perspective, an upper tier of particles interacting with each other that it couldn’t affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, “I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before.” If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen—no lower-tier brains to watch and be aware of events.

We could write that computer program, given significantly more knowledge and vastly more computing power and zero ethics.

But this inner brain composed of lower-tier shadow particles cannot write upper-tier philosophy papers about the Zombie universe. If the inner brain thinks, “I am aware of my own awareness”, the upper-tier lips cannot move and say aloud, “I am aware of my own awareness” a few seconds later. That would require causal links from lower particles to upper particles.

If we try to suppose that the lower tier isn’t a complicated brain with an independent reasoning process that can imagine its own hypotheses, but just some shadowy pure experiences that don’t affect anything in the upper tier, then clearly the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips say, “I have a lower tier of shadowy pure experiences which did not affect in any way how I said these words.” The deliberating upper brain that invents hypotheses for sense data can only use sense data that affects the upper neurons carrying out the search for hypotheses that can be reported by the lips. Any shadowy pure experiences couldn’t be inputs into the hypothesis-inventing cognitive process. So the upper brain would be talking nonsense.

There’s a version of this theory in which the part of our brain that we can report out loud, which invents hypotheses to explain sense data out loud and manifests physically visible papers about Zombie universes, has for no explained reason invented a meaningless theory of shadow experiences which is experienced by the shadow part as a meaningful and correct theory. So that if we look at the “merely physical” slice of our universe, philosophy papers about consciousness are meaningless and the physical part of the philosopher is saying things their physical brain couldn’t possibly know even if they were true. And yet our inner experience of those philosophy papers is meaningful and true. In a way that couldn’t possibly have caused me to physically write the previous sentence, mind you. And yet your experience of that sentence is also true even though, in the upper tier of the universe where that sentence was actually written, it is not only false but meaningless.

I’m honestly not sure what to say when a conversation gets to that point. Mostly you just want to yell, “Oh, for the love of Belldandy, will you just give up already?” or something about the importance of saying oops.

(Oh, plus the unexplained correlation violates the Markov condition for causal models.)

Maybe my reply would be something along the lines of, “Okay… look… I’ve given my account of a single-tier universe in which agents can invent meaningful explanations for sense data, and when they build accurate maps of reality there’s a known reason for the correspondence… if you want to claim that a different kind of meaningfulness can hold within a different kind of agent divided into upper and lower tiers, it’s up to you to explain what parts of the agent are doing which kinds of hypothesizing and how those hypotheses end up being meaningful and what causally explains their miraculous accuracy so that this all makes sense.”

But frankly, I think people would be wiser to just give up trying to write sensible philosophy papers about lower causal tiers of the universe that don’t affect the philosophy papers in any way.

Meditation: If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that “2 + 2 = 4” isn’t meaningful or true, then what alternate property does the sentence “2 + 2 = 4” have which makes it so much more useful than the sentence “2 + 2 = 3”?

Mainstream status.

[1] Well, it seems imaginable so long as you toss most of quantum physics out the window and put us back in a classical universe. For particles to not be affected by us, they’d need their own configuration space such that “which configurations are identical” was determined by looking only at those particles, and not looking at any lower-tier particles entangled with them. If you don’t want to toss QM out the window, it’s actually pretty hard to imagine what an upper-tier particle would look like.

[2] This diagram treats the laws of physics as being just another node, which is a convenient shorthand, but probably not a good way to draw the graph. The laws of physics really correspond to the causal arrows Fi, not the causal nodes Xi. If you had the laws themselves—the function from past to future—be an Xi of variable state, then you’d need meta-physics to describe the Fphysics arrows for how the physics-stuff Xphysics could affect us, followed promptly by a need for meta-meta-physics et cetera. If the laws of physics were a kind of causal stuff, they’d be an upper tier of causality—we can’t appear to affect the laws of physics, but if you call them causes, they can affect us. In Matrix terms, this would correspond to our universe running on a computer that stored the laws of physics in one area of RAM and the state of the universe in another area of RAM; the first area would be an upper causal tier and the second area would be a lower causal tier. But the infinite regress from treating the laws of determination as causal stuff makes me suspicious that it might be an error to treat the laws of physics as “stuff that makes stuff happen and happens because of other stuff”. When we trust that the ship doesn’t disappear when it goes over the horizon, we may not be navigating to a physics-node in the graph, so much as we’re navigating to a single Fphysics that appears in many different places inside the graph, and whose previously unknown function we have inferred. But this is an unimportant technical quibble on Tuesdays, Thursdays, Saturdays, and Sundays. It is only an incredibly deep question about the nature of reality on Mondays, Wednesdays, and Fridays, i.e., less than half the time.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “Proofs, Implications, and Models”

Previous post: “Stuff That Makes Stuff Happen”