Intelligence without causality

In this work, I explain how a mind can function in a universe devoid of cause and effect. I show that such a design can accommodate arbitrary restrictions that the laws of physics might happen to place on its outputs.

I originally decided to write this post when I wanted to refer to this idea within something else that I intend to write, and then realized that I couldn't think of anywhere the whole idea was clearly expressed.

Imagine an acausal universe, as described in https://www.lesswrong.com/posts/o5F2p3krzT4JgzqQc/causal-universes. (The rest of this post probably won't make sense if you haven't read and understood that first.)
What kind of mind design would work in such a universe? An acausal mind doesn't have separate inputs and outputs; it just has acausal connections, which for the purposes of this post I call aputs.
An aput is a correlation between some part of the external world and the AI algorithm that is enforced by the laws of physics. Let's assume that all the aputs are binary, true or false. The AI is then a black box that is consistent with some, but not necessarily all, possible aputs. In terms of the exponentially big computer simulating the universe, the AI has $n$ aputs $x_1, \dots, x_n$ and an action set $A \subseteq \{0,1\}^n$. The computer simulating the universe searches through all possible bitstrings. If the subset of bits that consists of the AI's aputs is not in $A$, then the bitstring is discarded. Of course, the AI might not be a black box to the physics of the universe. The AI might contain some internal variables, with multiple consistency checks relating them to the surroundings. The AI might not be guaranteed existence at all. If the state of the world where the AI is never turned on is self-consistent, there isn't anything the AI can do about it.
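To make the filtering step concrete, here is a toy sketch of the exponentially big computer in Python. The universe size, the positions of the AI's aputs, and the parity "law of physics" are all invented for illustration:

```python
from itertools import product

# Hypothetical toy universe: 4 bits total, of which bits 1 and 3 are the
# AI's aputs (sizes and positions are illustrative, not from the post).
N_BITS = 4
APUT_POSITIONS = (1, 3)

# A made-up physical law of this toy universe: total parity must be even.
def physically_consistent(bits):
    return sum(bits) % 2 == 0

# The AI's action set A: the aput patterns it is consistent with.
# This AI accepts (0, 0) and (1, 1), i.e. it forces its two aputs equal.
A = {(0, 0), (1, 1)}

def surviving_worlds(action_set):
    """Enumerate every bitstring, discarding those whose aput bits are
    not in the action set or which violate the universe's own laws."""
    worlds = []
    for bits in product((0, 1), repeat=N_BITS):
        aputs = tuple(bits[p] for p in APUT_POSITIONS)
        if aputs in action_set and physically_consistent(bits):
            worlds.append(bits)
    return worlds

print(surviving_worlds(A))
# → [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)]
```

Only worlds that satisfy both the universe's laws and the AI's consistency constraint survive; everything else is discarded.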

Of course, sometimes the laws of physics put constraints on $A$. These physical constraints can be modeled by forcing the AI to pick $A \in C$ for some set $C$ determined by the laws of physics.
Some physically relevant constraints on $A$ include causality. Split each aput string into input bits $i$ and output bits $o$, so a bitstring is a pair $(i, o)$. Causality is the condition

$\exists!\, o : (i, o) \in A$ for each fixed $i$. ($\exists!$ means there exists a unique value.)
If we divide the aputs into blocks $(i_1, i_2, o_1, o_2)$ for easier notation, then we can describe the condition that not all outputs depend on all inputs:

$\forall i_1\ \exists!\, o_1\ \forall i_2\ \exists!\, o_2 : (i_1, i_2, o_1, o_2) \in A$

With this quantifier ordering, $o_1$ is determined by $i_1$ alone, while $o_2$ may depend on both inputs.

You can put other more com­pli­cated con­di­tions to rep­re­sent other limits to in­for­ma­tion flow.
Another condition you might want is

$\exists!\, i : (i, o) \in A$ for each fixed $o$, together with the causality condition above.

This condition says that the AI has to implement a bijective function. This might be a real limitation of an AI built out of reversible computing with no information sink. Quantum computers can be represented by making the aputs real numbers describing the input and output quantum states, with $C$ being the set of relations produced by unitary matrices.
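A minimal sketch of the bijectivity check, using invented example action sets (CNOT as a reversible gate, bit-erasure as a counterexample):

```python
from itertools import product

def is_bijective(action_set, n_bits):
    """Graph-of-a-bijection check: exactly one output per input AND
    exactly one input per output, for n-bit inputs and outputs."""
    for v in product((0, 1), repeat=n_bits):
        if sum(1 for (i, o) in action_set if i == v) != 1:
            return False
        if sum(1 for (i, o) in action_set if o == v) != 1:
            return False
    return True

# CNOT (control a, target b -> a, a XOR b) is reversible: bijective.
cnot = {((a, b), (a, a ^ b)) for a in (0, 1) for b in (0, 1)}
print(is_bijective(cnot, 2))   # True

# Erasing a bit (every input maps to 0) is causal but not bijective:
# it has an information sink.
erase = {((0,), (0,)), ((1,), (0,))}
print(is_bijective(erase, 1))  # False
```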

An agent that had access to an outcome-pump-type device, one that could force reality to not give it certain inputs by creating a conditional contradiction, would also be described easily: for each input, there is at most one output, rather than exactly one.
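The outcome-pump variant relaxes "exactly one" to "at most one". A sketch, with a made-up action set that simply refuses input 1:

```python
from itertools import product

def at_most_one_output(action_set, n_in):
    """At-most-one check: no input block may pair with two outputs, but
    an input with zero outputs is allowed -- the conditional
    contradiction means physics never presents that input at all."""
    for i in product((0, 1), repeat=n_in):
        if sum(1 for (a, o) in action_set if a == i) > 1:
            return False
    return True

# Hypothetical pump: answer input 0 with output 1, and make input 1 a
# contradiction so that worlds containing it are discarded.
pump = {((0,), (1,))}
print(at_most_one_output(pump, 1))  # True
```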

What does AIXI look like in this formalism? You have a utility function $u : \{0,1\}^n \to \mathbb{R}$ over aput strings.
You have a set $H$ of acausal universe models; a single universe model is a function $h : \{0,1\}^n \to \{\text{True}, \text{False}\}$ indicating which aput strings are consistent with that universe. $H$ can be considered a multiset if you like.
You have a weight function $w : H \to \mathbb{R}$. The weight function could be $w(h) = 2^{-l(t_h)}$, where $l(t_h)$ states the length of $t_h$, the shortest machine in $T_h$ = the set of Turing machines that, considered as functions from $n$ bits to 1 bit, act as an indicator function for $h$.
Your set $C$ is determined by what physics you have access to when building your AI. If the universe contains closed timelike curves, but your AI doesn't, then $C$ has the causal structure described above.
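One way to assemble these pieces into a concrete agent is a brute-force search over $A \in C$. Note that the combination rule below (averaging utility uniformly over the worlds consistent with each universe model) is an assumption of this sketch, as is the entire toy instance of two aputs and two hypotheses:

```python
from itertools import product

# Toy instance: 2 aputs, bit 0 is the input, bit 1 is the output.
STRINGS = list(product((0, 1), repeat=2))

# Acausal universe models: h(s) is True if aput string s is consistent
# with that universe (toy hypotheses, invented for illustration).
H = [
    lambda s: s[1] == s[0],  # universe 0: the world echoes the input
    lambda s: s[1] == 1,     # universe 1: the world always shows a 1
]
W = [0.7, 0.3]               # weight function w over H

def u(s):                    # utility over aput strings
    return 1.0 if s[1] == 0 else 0.0

# C: the causal action sets (exactly one output bit per input bit).
C = [frozenset({(0, f0), (1, f1)}) for f0 in (0, 1) for f1 in (0, 1)]

def score(A):
    """w-weighted utility, averaging uniformly over the worlds
    consistent with both A and each model h (an assumed rule)."""
    total = 0.0
    for h, weight in zip(H, W):
        worlds = [s for s in STRINGS if s in A and h(s)]
        if worlds:
            total += weight * sum(u(s) for s in worlds) / len(worlds)
    return total

best = max(C, key=score)
print(sorted(best))  # → [(0, 0), (1, 0)]
```

The winning action set maps both inputs to output 0: under the echo universe that guarantees utility 1, and under the always-1 universe no world is consistent at all, so nothing is lost.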

By separating the physical facts about where information can and can't go from our agent design, we are left with a simpler design that can be applied to a wide range of systems. Note that the new design is searching an exponentially larger action space than AIXI: it chooses among arbitrary subsets of $\{0,1\}^n$, not just among input-to-output functions.
