Computational Model: Causal Diagrams with Symmetry

Consider the following program:

def f(n):
    if n == 0:
        return 1
    return n * f(n-1)
Let’s think about the process by which this function is evaluated. We want to sketch out a causal DAG showing all of the intermediate calculations and the connections between them (feel free to pause reading and try this yourself).

Here’s what the causal DAG looks like:

Each dotted box corresponds to one call to the function f. The recursive call in f becomes a symmetry in the causal diagram: the DAG consists of an infinite sequence of copies of the same subcircuit.

More generally, we can represent any Turing-computable function this way. Just take some pseudocode for the function, and expand out the full causal DAG of the calculation. In general, the diagram will either be finite or have symmetric components—the symmetry is what allows us to use a finite representation even though the graph itself is infinite.
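To make the idea of “expanding out the causal DAG” concrete, here is a minimal sketch which unrolls the factorial program above into explicit nodes and edges. The node names and the dict/list representation are just illustrative choices, not any standard format; the point is that every call to f contributes a copy of the same little subcircuit.

    # Sketch: unroll the factorial program into an explicit DAG of intermediate values.
    # Each call f(k) contributes the same subcircuit: a comparison node and,
    # for k > 0, a multiplication node depending on k and on f(k-1).
    def unroll_f(n):
        nodes, edges = {}, []
        for k in range(0, n + 1):
            nodes[f"n={k}"] = k
            nodes[f"is_zero({k})"] = (k == 0)
            edges.append((f"n={k}", f"is_zero({k})"))
            if k == 0:
                nodes["f(0)"] = 1
            else:
                nodes[f"f({k})"] = k * nodes[f"f({k-1})"]
                edges.append((f"n={k}", f"f({k})"))
                edges.append((f"f({k-1})", f"f({k})"))
            edges.append((f"is_zero({k})", f"f({k})"))  # the branch condition feeds the result
        return nodes, edges

    nodes, edges = unroll_f(3)
    print(nodes["f(3)"])  # 6; each loop iteration corresponds to one dotted box in the diagram

For a particular input n the unrolled computation is finite; the symmetry is what lets a single finite description stand in for all the copies at once.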

Why would we want to do this?

For our purposes, the central idea of embedded agency is to take these black-box systems which we call “agents”, and break open the black boxes to see what’s going on inside.

Causal DAGs with symmetry are how we do this for Turing-computable functions in general. They show the actual cause-and-effect process which computes the result; conceptually, they represent the computation rather than a black-box function.

In particular, a causal DAG + symmetry representation gives us all the natural machinery of causality—most notably counterfactuals. We can ask questions like “what would happen if I reached in and flipped a bit at this point in the computation?” or “what value would f(5) return if f(3) were 11?”. We can pose these questions in a well-defined, unambiguous way without worrying about logical counterfactuals, and without adding any additional machinery. This becomes particularly important for embedded optimization: if an “agent” (e.g. an organism) wants to plan ahead to achieve an objective (e.g. find food), it needs to ask counterfactual questions like “how much food would I find if I kept going straight?”.
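As a toy illustration of what such a counterfactual query could look like, here is a sketch that answers “what would f(5) return if f(3) were 11?” by re-running the unrolled computation with one node clamped. The do argument is a hypothetical name chosen here in the spirit of Pearl’s do-operator, not part of any library.

    # Sketch: a counterfactual query on the unrolled computation.
    # We re-run the same circuit, but clamp the node f(k) to a chosen value
    # instead of computing it from its parents.
    def f_with_intervention(n, do=None):
        do = do or {}
        values = {}
        for k in range(0, n + 1):
            if k in do:
                values[k] = do[k]          # intervention: overwrite this node
            elif k == 0:
                values[k] = 1
            else:
                values[k] = k * values[k - 1]
        return values[n]

    print(f_with_intervention(5))              # 120, the ordinary value
    print(f_with_intervention(5, do={3: 11}))  # 220: the value of f(5) if f(3) were 11

Nothing here requires reasoning about logical consistency; the intervention is just an overwrite of one node, and everything downstream is recomputed as usual.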

The other main reason we would want to represent functions as causal DAGs with symmetry is that our universe appears to be one giant causal DAG with symmetry.

Because our universe is causal, any computation performed in our universe must eventually bottom out in a causal DAG. We can write our programs in any language we please, but eventually they will be compiled down to machine code and run by physical transistors made of atoms which are themselves governed by a causal DAG. In most cases, we can represent the causal computational process at a more abstract level—e.g. in our example program, even though we didn’t talk about registers or transistors or electric fields, the causal diagram we sketched out would still accurately represent the computation performed at the lower levels.

This raises the issue of abstraction—the core problem of embedded agency. My own main use-case for the causal diagram + symmetry model of computation is formulating models of abstraction: how can one causal diagram (possibly with symmetry) represent another in a way which makes counterfactual queries on the map correspond to some kind of counterfactual on the territory? Can that work when the “map” is a subDAG of the territory DAG? It feels like causal diagrams + symmetry are the minimal computational model needed to get agency-relevant answers to this sort of question.

Learning

The traditional ultimate learning algorithm is Solomonoff Induction: take some black-box system which spews out data, and look for short programs which reproduce that data. But the phrase “black-box” suggests that perhaps we could do better by looking inside that box.

To make this a little bit more concrete: imagine I have some Python program running on a server which responds to HTTP requests. Solomonoff Induction would look at the data returned by requests to the program, and learn to predict the program’s behavior. But that sort of black-box interaction is not the only option. The program is running on a physical server somewhere—so, in principle, we could go grab a screwdriver and a tiny oscilloscope and directly observe the computation performed by the physical machine. Even without measuring every voltage on every wire, we may at least get enough data to narrow down the space of candidate programs in a way which Solomonoff Induction could not do. Ideally, we’d gain enough information to avoid needing to search over all possible programs.
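As a toy illustration of the point above, here is a sketch in which a single “oscilloscope reading” of an intermediate value rules out candidate programs that black-box input/output data cannot distinguish. The candidate programs and the observed values are made up for the example.

    # Three candidate programs with identical input/output behavior on the observed input,
    # but different internal traces (list of intermediate value, then output).
    candidates = {
        "square":               lambda x: [x, x * x],
        "add_zero_then_square": lambda x: [x + 0, x * x],
        "double_then_quarter":  lambda x: [2 * x, (2 * x) * (2 * x) // 4],
    }

    observed_input, observed_output = 3, 9
    observed_intermediate = 6   # pretend we probed a wire and saw this value

    # Black-box filtering: every candidate maps 3 -> 9, so none are ruled out.
    io_consistent = {name: p for name, p in candidates.items()
                     if p(observed_input)[-1] == observed_output}

    # White-box filtering: only one candidate's internal trace contains the observed 6.
    trace_consistent = {name: p for name, p in io_consistent.items()
                        if observed_intermediate in p(observed_input)}

    print(list(io_consistent))     # all three candidates
    print(list(trace_consistent))  # ['double_then_quarter']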

Compared to Solomonoff Induction, this process looks a lot more like how scientists actually study the real world in practice: there’s lots of taking stuff apart and poking at it to see what makes it tick.

In general, though, how to learn causal DAGs with symmetry is still an open question. We’d like something like Solomonoff Induction, but which can account for partial information about the internal structure of the causal DAG, rather than just overall input-output behavior. (In principle, we could shoehorn this whole thing into traditional Solomonoff Induction by treating information about the internal DAG structure as normal old data, but that doesn’t give us a good way to extract the learned DAG structure.)

We already have algorithms for learning causal structure in general. Pearl’s Causality sketches out some such algorithms in chapter 2, although they’re only practical for either very small systems or very large amounts of data. Bayesian structure learning can handle larger systems with less data, though sometimes at the cost of a very large amount of compute—i.e. estimating high-dimensional integrals.

However, in general, these approaches don’t directly account for symmetry of the learned DAGs. Ideally, we would use a prior which weights causal DAGs according to the size of their representation—i.e. infinite DAGs would still have nonzero prior probability if they have some symmetry allowing for finite representation, and in general DAGs with multiple copies of the same sub-DAG would have higher probability. This isn’t quite the same as weighting by minimum description length in the Solomonoff sense, since we care specifically about symmetries which correspond to function calls—i.e. isomorphic subDAGs. We don’t care about graphs which can be generated by a short program but don’t have these sorts of symmetries. So that leaves the question: if our prior probability for a causal DAG is given by a notion of minimum description length which only allows compression by specifying re-used subcircuits, what properties will the resulting learning algorithm possess? Is it computable? What kinds of data are needed to make it tractable?
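To gesture at what such a prior might look like, here is a very rough sketch. The representation (a list of distinct subcircuit template sizes plus instantiation counts) and the exact cost function are stand-ins made up for this illustration; the point is only that a DAG compressed by re-used subcircuits pays for each distinct template once, so a highly repetitive DAG keeps far more prior mass than an equally large DAG with no repeated structure.

    import math

    # Sketch of a description-length prior where the only compression allowed is
    # naming a re-used subcircuit once and instantiating it many times.
    # `templates` holds the node-count of each distinct subcircuit template;
    # `instantiations` holds how many copies of each template appear in the DAG.
    def description_length(templates, instantiations):
        template_cost = sum(templates)                               # pay once per distinct subcircuit
        wiring_cost = sum(math.log2(1 + k) for k in instantiations)  # small cost per extra copy
        return template_cost + wiring_cost

    def prior(templates, instantiations):
        # Unnormalized 2^(-DL) weighting, in the spirit of minimum description length.
        return 2 ** (-description_length(templates, instantiations))

    # Factorial unrolled 50 deep: ~150 nodes, but one 3-node template re-used 50 times.
    print(prior(templates=[3], instantiations=[50]))     # roughly 2^-8.7: still substantial
    # A 150-node DAG with no repeated structure pays for every node separately.
    print(prior(templates=[150], instantiations=[1]))    # roughly 2^-151: astronomically smaller

Whether a prior along these lines yields a computable learning algorithm, and what data would make it tractable, are exactly the open questions above.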