I present a step-by-step argument in philosophy of mind. The main conclusion is that it is probably possible for conscious homomorphically encrypted digital minds to exist. This has surprising implications: it demonstrates a case where “mind exceeds physics” (epistemically), which implies the disjunction “mind exceeds reality” or “reality exceeds physics”. The main new parts of the discussion consist of (a) an argument that, if digital computers are conscious, so are homomorphically encrypted versions of them (steps 7-9); (b) speculation on the ontological consequences of homomorphically encrypted consciousness, in the form of a trilemma (steps 10-11).
Step 1. Physics
Let P be the set of possible physics states of the universe, according to “the true physics”. I am assuming that the intellectual project of physics has an idealized completion, which discovers a theory integrating all potentially accessible physical information. The theory will tend to be microscopic (although not necessarily strictly) and lawful (also not necessarily strictly). It need not integrate all real information, as some such information might not be accessible (e.g. in the case of the simulation hypothesis).
Rejecting this step: fundamental skepticism about even idealized forms of the intellectual project of physics; various religious/spiritual beliefs.
Step 2. Mind
Let M be the set of possible mental states of minds in the universe. Note, an element of M specifies something like a set or multiset of minds, as the universe could contain multiple minds. We don’t need M to be a complete theory of mind (specifying color qualia and so on); the main concern is doxastic facts, about beliefs of different agents. For example, I believe there is a wall behind me; this is a doxastic mental fact. This step makes no commitment to reductionism or non-reductionism. (Color qualia raise a number of semantic issues extraneous to this discussion; it is sufficient for now to consider mental states to be quotiented over any functionally equivalent color inversion/rotations, as these make no doxastic differences.)
Rejecting this step: eliminativism, especially eliminative physicalism.
Step 3. Reality
Let R be the set of possible reality states, according to “the true reality theory”. To motivate the idea, physics (P) only includes physical facts that could in principle be determined from the contents of our universe. There would remain basic ambiguities about the substrate, such as multiverse theories, or whether our universe exists in a computer simulation. R represents “the true theory of reality”, whatever that is; it is meant to include enough information to determine all that is real. For example, if physicalism is strictly true, then R = P, or R is at least isomorphic to P. Solomonoff induction, and similarly the speed prior, posit that reality consists of an input to a universal Turing machine (specifying some other Turing machine and its input), and its execution trajectory, producing digital subjective experience.
Let f : R → P specify the universe’s physical state as a function of the reality state. Let g : R → M specify the universe’s mental state as a function of the reality state. These presumably exist under the above assumptions, because physics and mind are both aspects of reality, though these need not be efficiently computable functions. (The general structure of physics and mind being aspects of reality is inspired by neutral monism, though it does not necessitate neutral monism.)
Rejecting this step: fundamental doubt about the existence of a reality on which mind and physics supervene; incompatibilism between reality of mind and of physics.
Step 4. Natural supervenience
Similar to David Chalmers’s concept in The Conscious Mind. Informally, every possible physical state has a unique corresponding mental state. Formally:

∀p ∈ f(R) ∃! m ∈ M ∀r ∈ R: f(r) = p ⇒ g(r) = m

Here ∃! means “there exists a unique”.
Assuming ZFC and natural supervenience, there exists the mapping function h : P → M commuting with f and g (h ∘ f = g), though again, h need not be efficiently computable.
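As a toy illustration of the commuting condition, here is a minimal finite stand-in model (the sets and assignments are arbitrary placeholders of mine, not claims about actual reality, physics, or mind):

```python
# Toy finite model of step 4 (illustrative only; R, P, M, f, g are stand-ins).
R = {"r1", "r2", "r3"}          # reality states
P = {"p1", "p2"}                # physical states
M = {"m1", "m2"}                # mental states

f = {"r1": "p1", "r2": "p1", "r3": "p2"}   # reality -> physics
g = {"r1": "m1", "r2": "m1", "r3": "m2"}   # reality -> mind

# Natural supervenience: reality states sharing a physical state share a mental state.
assert all(g[r] == g[s] for r in R for s in R if f[r] == f[s])

# Then h : P -> M is well defined on realizable physical states, and h ∘ f = g.
h = {f[r]: g[r] for r in R}
assert all(h[f[r]] == g[r] for r in R)
```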
Natural supervenience is necessary for it to be meaningful to refer to the mental properties corresponding to some physical entity. For example, to ask about the mental state corresponding to a physical dog. Natural supervenience makes no strong claim about physics “causing” mind; it is rather a claim of constant conjunction, in the sense of Hume. We are not ruling out, for example, physics and mind being always consistent due to a common cause.
Rejecting this step: Interaction dualism. “Antenna theory”. Belief in P-zombies as not just logically possible, but really possible in this universe. Belief in influence of extra-physical entities, such as ghosts or deities, on consciousness.
Step 5. Digital consciousness
Assume it is possible for a digital computer running a program to be conscious. We don’t need to make strong assumptions about “abstract algorithms being conscious” here, just that realistic physical computers that run some program (such as a brain emulation) contain consciousness. This topic has been discussed to death, but to briefly say why I think digital computer consciousness is possible:
The mind not being digitally simulable in a behaviorist manner (accepting normal levels of stochasticity/noise) would imply hypercomputation in physics, which is dubious.
Chalmers’s fading qualia argument implies that, if a brain is gradually transformed into a behaviorally equivalent simulation, and the simulation is not conscious, then qualia must fade either gradually or suddenly; both are problematic.
Having knowledge that no digital computer can be conscious would imply we have knowledge of ultimate reality R, specifically, that we do not exist in a digital computer simulation. While I don’t accept the simulation hypothesis as likely, it seems presumptuous to reject it on philosophy of mind grounds.
Rejecting this step: Brains as hypercomputers; or physical substrate dependence, e.g. only organic matter can be conscious.
Step 6. Real-physics fully homomorphic encryption is possible
Fully homomorphic encryption (FHE) allows running a computation in an encrypted manner, producing an encrypted output; knowing the physical state of the computer and the encrypted output, without knowing the key, is insufficient to determine details of the computation or its decrypted output in physical polynomial time. Physical polynomial time is polynomial time with respect to the computing power of physics, which is BQP according to standard theories of quantum computation. Homomorphic encryption is not proven to work (since P != NP is not proven). However, quantum-resistant homomorphic encryption, e.g. based on lattices, is an active area of research, and is generally believed to be possible. This assumption says that (a) quantum-resistant homomorphic encryption is possible and (b) quantum resistance is enough: physics doesn’t have more computing power than quantum. Or alternatively, non-quantum FHE is possible, and quantum computers are impossible. Or alternatively, the physical universe’s computation is more powerful than quantum, and yet FHE resisting it is still possible.
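For intuition, here is a toy sketch of a merely additively homomorphic scheme (Paillier with absurdly small primes; this is not secure and not fully homomorphic, since real FHE schemes, e.g. lattice-based ones, also support multiplication on ciphertexts). The point it illustrates is that the evaluator computes on ciphertexts without ever holding the secret key:

```python
import math, random

# Toy Paillier cryptosystem: additively homomorphic only, and NOT secure
# with primes this small. Purely to illustrate computing on ciphertexts.
p, q = 2963, 3121                 # toy primes; a real scheme uses huge ones
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)      # secret key
mu = pow(lam, -1, n)              # valid because we pick g = n + 1
g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# The evaluator adds the hidden plaintexts by multiplying ciphertexts,
# never seeing the key or the plaintexts.
c = encrypt(1234) * encrypt(4321) % n2
assert decrypt(c) == 5555
```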
Rejecting this step: Belief that the physical universe has enough computing power to break any FHE scheme in polynomial time. Non-standard computational complexity theory (e.g. P = NP), cryptography, or physics.
Step 7. Homomorphically encrypted consciousness is possible
(Original thought experiment proposed by Scott Aaronson.)
Assume that a conscious digital computer can be homomorphically encrypted, and still be conscious, if the decryption key is available nearby. Since the key is nearby, the homomorphic encryption does not practically obscure anything. It functions more as a virtualization layer, similar to a virtual machine. If we already accept digital computer consciousness as possible, we need to tolerate some virtualization, so why not this kind?
An intuition backing this assumption is “can’t get something from nothing”. If we decrypt the output, we get the results that we would have gotten from running a conscious computation (perhaps including the entire brain emulation state trajectory in the output), so we by default assume consciousness happened in the process. We got the results without any fancy brain lesioning (to remove the seat of consciousness while preserving functional behavior), just a virtualization step.
As a concrete example, consider if someone using brain emulations as workers in a corporation decided to homomorphically encrypt the emulation (and later decrypt the results with a key on hand), to get the results of the work, without any subjective experience of work. It would seem dubious to claim that no consciousness happened in the course of the work (which could even include, for example, writing papers about consciousness), due to the homomorphic encryption layer.
As with digital consciousness, if we knew that homomorphically encrypted computations (with a nearby decryption key) were not conscious, then we would know something about ultimate reality, namely that we are not in a homomorphically encrypted simulation.
Rejecting this step: Picky quasi-functionalism. Enough multiple realizability to get digital computer consciousness, but not enough to get homomorphically encrypted consciousness, even if the decryption key is right there.
Step 8. Moving the key further away doesn’t change things
Now that the homomorphically encrypted conscious mind is separated from the key, consider moving the key 1 centimeter further away. We assume this doesn’t change the consciousness of the system, as long as the key is no more than 1 light-year away, so that it is in principle possible to retrieve the key. We can iterate to move the key 1 light-year away in small steps, without changing the consciousness of the overall system.
As an intuition, suppose the contrary that the computation with the nearby key was conscious, but not with the far-away key. We run the computation, still encrypted, to completion, while the key is far away. Then we bring the key back and decrypt it. It seems we “got something from nothing” here: we got the results of a conscious computation with no corresponding consciousness, and no fancy brain lesioning, just a virtualization layer with extra steps.
Rejecting this step: Either a discrete jump where moving the key 1 cm removes consciousness (yet consciousness can be brought back by moving the key back 1cm?), or a continuous gradation of diminished consciousness across distance, though somehow making no behavioral difference.
Step 9. Deleting a far-away key doesn’t change things
Suppose the system of the encrypted computation and the far-away key is conscious. Now suppose the key is destroyed. Assume this doesn’t affect the system’s consciousness: the encrypted computation by itself, with no key anywhere in the universe, is still conscious.
This assumption is based on locality intuition. Could my consciousness depend directly on events happening 1 light-year away, which I have no way of observing? If my consciousness depended on it in a behaviorally relevant way, then that would imply faster-than-light communication. So it can only depend on it in a behaviorally irrelevant way, but this presents similar problems as with P-zombies.
We could also consider a hypothetical where the key is destroyed, but then randomly guessed or brute-forced later. Does consciousness flicker off when the key is destroyed, then on again as it is guessed? Not in any behaviorally relevant way. We did something like “getting something from nothing” in this scenario, except that the key-guessing is real computational work. The idea that key-guessing is itself what is producing consciousness is highly dubious, due to the dis-analogy between the computation of key-guessing and the original conscious computation.
Rejecting this step: Consciousness as a non-local property, affected by far-away events, though not in a way that makes any physical difference. Global but not local natural supervenience.
Step 10. Physics does not efficiently determine encrypted mind
If a homomorphically encrypted mind (with no decryption key) is conscious, and has mental states such as belief, it seems it knows things (about its mental states, or perhaps mathematical facts) that cannot be efficiently determined from physics, using the computation of physics and polynomial time. Physical omniscience about the present state of the universe is insufficient to decrypt the computation. This is basically re-stating that homomorphic encryption works.
Imagine you learn you are in such an encrypted computation. It seems you know something that a physically omniscient agent doesn’t know except with super-polynomial amounts of computation: the basic contents of your experience, which could include the decryption key, or the solution to a hard NP-complete problem.
There is a slight complication, in that perhaps the mental state can be determined from the entire trajectory of the universe, as the key was generated at some point in the past, even if every trace of it has been erased. However, in this case we are imagining something like Laplace’s demon looking at the whole physics history; this would imply that past states are “saved”, efficiently available to Laplace’s demon. (The possibility of real information, such as the demon’s memory of the physical trajectory, exceeding physical information, is discussed later; “Reality exceeds physics, informationally”.)
If locality of natural supervenience applies temporally, not just spatially, then the consciousness of the homomorphically encrypted computation can’t depend directly on the far past, only at most the recent past. In principle, the initial state of the homomorphically encrypted computation could have been “randomly initialized”, not generated from any existent original key, although of course this is unlikely.
So I assume that, given the steps up to here, the homomorphically encrypted mind really does know something (e.g. about its own experiences/beliefs, or mathematical facts) that goes beyond what can be efficiently inferred from physics, given the computing power of physics.
Rejecting this step: Temporal non-locality. Mental states depend on distinctions in the distant physical past, even though these distinctions make no physical or behavioral difference in the present or recent past. Doubt that the randomly initialized homomorphically encrypted mind really “knows anything” beyond what can be efficiently determined from physics, even reflexive properties about its own experience.
Step 11. A fork in the road
A terminological disambiguation: by P-efficiently computable, I mean computable in polynomial time with respect to the computing power of physics, which is BQP according to standard theories. By R-efficiently computable, I mean computable in polynomial time with respect to the computing power of reality, which is at least that of physics, but could in principle be higher, e.g. if our universe was simulated in a universe with beyond-quantum computation.
If the assumptions so far are true, then there is no P-efficiently computable h mapping physical states to mental states, corresponding to the natural supervenience relation. This is because, in the case of homomorphically encrypted computation, h would have to run in P-super-polynomial time. This can be summarized as “mind exceeds physics, epistemically”: some mind in the system knows something that cannot be P-efficiently determined from physics, such as the solution to some hard NP-complete problem.
Now we ask a key question: Is there an R-efficiently computable g mapping reality states to mental states, and if so, is there a P-efficiently computable g?
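Stated compactly in the notation above (this just anticipates the three paths discussed below):

Path A: there is no R-efficiently computable g : R → M (hence no P-efficiently computable one either).
Path B: g is R-efficiently computable but not P-efficiently computable.
Path C: g is P-efficiently computable (and therefore also R-efficiently computable).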
Path A: Mind exceeds reality
Suppose there is no R-efficiently computable g (from which it follows that there is no P-efficiently computable g). That is, even given omniscience about ultimate reality, and polynomial computation with respect to the computation of reality (which is at least as strong as that of physics, perhaps stronger), it is still not possible to know everything about the minds in the universe, and in particular, the details of the experience contained in a homomorphically encrypted computation. Mind doesn’t just exceed physics; mind exceeds reality.
Again, imagine you learn you are in a homomorphically encrypted computation. You look around you and it seems you see real objects. Yet these objects’ appearances can’t be R-efficiently determined on the basis of all that is real. Your experiences seem real, but they are more like “potentially real”, similar to hard-to-compute mathematical facts. Yet you are in some sense physically embodied; cracking the decryption key would reveal your experience. And you could even have correct beliefs about the key, having the requisite mathematical knowledge for the decryption. You could even have access to, and check, the solution to a hard NP-complete problem that no one else knows; does this knowledge not “exist in reality” even though you have access to it and can check it?
Something seems unsatisfactory about this, even if it isn’t clearly wrong. If we accept step 2 (existence of mind), rejecting eliminativism, then we accept that mental facts are in some sense real. But here, they aren’t directly real in the sense of being R-efficiently determined from reality. It is as if an extra computation (search or summation over homomorphic embeddings?) is happening to produce subjective experience, yet there is nowhere in reality for this extra computation to take place. The point of positing physics and/or reality is partially to explain subjective experience, yet here there is no R-efficient explanation of experience in terms of reality.
Path B: Reality exceeds physics, computationally
Suppose g is R-efficiently computable, but not P-efficiently computable. Then the real substrate computes more powerfully than physics (given polynomial time in each case). Reality exceeds physics: there really is a more powerful computing substrate than is implied by physics.
As a possibility argument, consider that a Turing-computable universe, such as Conway’s Game of Life, can be simulated in this universe. Reality contains at least quantum computing, since our universe (presumably) supports it. This would allow us to, for example, decrypt the communications of Conway’s Game of Life lifeforms who use RSA: their universe’s “physics” is classical, while we, from outside it, can run Shor’s algorithm against their keys.
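As a minimal sketch of such a simulable universe (the grid size and toroidal wrap-around are simplifying choices of mine, not part of the argument):

```python
# Minimal Conway's Game of Life update on a small toroidal grid (illustrative only).
def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if live_neighbors(r, c) == 3
                  or (grid[r][c] == 1 and live_neighbors(r, c) == 2) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker" placed away from the edges oscillates with period 2.
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
assert life_step(life_step(blinker)) == blinker
```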
So we can’t easily rule out that the real substrate has enough computation to efficiently determine the homomorphically encrypted experience, despite physics not being this powerful. This would contradict strict physicalism. It could open further questions about whether homomorphic encryption is possible in the substrate of reality, though of course in theory something analogous to P = NP could apply to the substrate.
Path C: Reality exceeds physics, informationally
Suppose instead that g is P-efficiently computable (and therefore also R-efficiently computable). Then physicalism is strictly false: R contains more accessible information than P. There is real information, exceeding the information of physics, which is sufficient to P-efficiently determine the mental state of the conscious mind in the homomorphically encrypted computation. Perhaps reality has what we might consider “high-level information” or a “multi-level map”. Maybe reality has a category theoretic and/or universal algebraic structure of domains and homomorphisms between them.
According to this path, reductionism is not strictly true. Mental facts could be “reduced” to physical facts sufficient to re-construct them (by natural supervenience). However, there is no efficient re-construction; the reduction destroys P-computation-bounded information even though it destroys no computation-unbounded information. Hence, since reality (unlike physics) P-efficiently determines subjective experiences, it contains information over and above physics.
HashLife is inspirational, in its informational preservation and use of high-level features, while maintaining the expected low-level dynamics of Conway’s Game of Life. Though this is only a loose analogy.
Conclusion
Honestly, I don’t know what to think at this point. I feel pretty confident about conscious digital computers being possible. The homomorphic encryption step (with a key nearby) seems to function as a virtualization step, so I’m willing to accept that, though it introduces complications. I am pretty sure moving the key far away, then deleting it, doesn’t make a difference; denying either would open up too many non-locality paradoxes. So I do think a homomorphically encrypted computation, with no decryption key anywhere, is probably conscious, though ordinary philosophical uncertainty applies.
That leads to the fork in the road. Path A (mind exceeds reality) seems least intuitive; it implies actual minds can “know more” than reality, e.g. know mathematical facts not R-efficiently determinable from reality. It seems dogmatic to be confident in either path B or C; both paths imply substantial facts about the ultimate substrate. Path B seems to have the fewest conceptual problems: unlike path C, it doesn’t require positing the informational existence of “high-level” homomorphic levels above physics. However, attributing great computational power to the real substrate would have anthropic implications: why do we seem to be in a quantum-computing universe, if the real substrate can support more advanced computations?
Path C is fun to imagine. What if some of what we would conceive of as “high-level properties” really exist in the ultimate substrate of reality, and reductionism simply assumes away this information, with invalid computational consequences? This thought inspires ontological wonder.
In any case, the disjunction of path B or C implies that strict physicalism is false, which is theoretically notable. If B or C is correct, reality exceeds physics one way or another, computationally and/or informationally. Ordinary philosophical skepticism applies, but I accept the disjunction as the mainline model. (Note that Chalmers believes natural supervenience holds but that strict physicalism is false.)
As an end note, there is a general “trivialism” objection to functionalism, in that many physical systems, such as rocks, can be interpreted as running any of a great number of computations. Chalmers has discussed causal solutions; Jeff Buechner has discussed computational complexity solutions (in Gödel, Putnam, and Functionalism), restricting interpretations to computationally realistic ones, e.g. not interpreting a rock as solving the halting problem. Trivialism and solutions to it are of course relevant to attributing mental or computational properties to a computer running a homomorphically encrypted computation.
(thanks to @adrusi for an X discussion leading to many of these thoughts)
I actually think that A is the most intuitive option. I don’t see why it should be possible for something which knows the physical state of my brain to be able to efficiently compute the contents of it.
Then again, given functionalism, perhaps it’s the case that extracting information about the contents of the brain from the encrypted computation is not as hard as one might think. The encryption is just a reversible map from one state space to another. If an omniscient observer can extract the contents of a brain by assembling a causal model of it in un-encrypted phase space, why would it struggle to build the same causal model in encrypted phase space? If some high-level abstractions of the computation are what matter, then the difficult part is mostly in finding the right abstractions.
I think you meant “philosophically necessary” where you wrote “possible”? If so, agreed, that’s also my take.
I don’t understand this part. “Causal model” is easy—if the computer is a Turing machine, then you have a causal model in terms of the head and the tape etc. You want “understanding” not “causal model”, right?
If a superintelligence were to embark on the project of “understanding” a brain, it would be awfully helpful to see the stream of sensory inputs and the motor outputs. Without encryption, you can do that: the environment is observable. Under homomorphic encryption without the key, the environmental simulation, and the brain’s interactions with it, look like random bits just like everything else. Likewise, it would be awfully helpful to be able to notice that the brain is in a similar state at times t₁ versus t₂, and/or the ways that they’re different. But under homomorphic encryption without the key, you can’t do that, I think. See what I mean?
To be clear, in path A I’m imagining that the omniscient observer knows not just physics, but all of reality. By step 10 we already have that physical omniscience + physical (BQP) computation isn’t enough to derive mental states. (So it’s a question of whether the mental states / abstractions are “real”, encoded somewhere in reality even if not properly in physics.)
I think the extra difficulty with encrypted phase space is the homomorphic encryption presumably makes it computationally intractable? If it really is intractable then “search over the right abstractions” is going to be computationally hard.
It’s possible to alter a homomorphic computation in arbitrary ways without knowing the decryption key.
An omniscient observer can homomorphically encrypt a copy of themselves under the same key as the encrypted mind and run a computation of its own copy examining every aspect of the internal mental states of the subject, since they share the same key.
If there are N homomorphically encrypted minds in reality then the omniscient observer will have to create N layers of homomorphic computation in order for the innermost computation to yield the observation of all N minds’ internal states, each passed in turn to a sub-computation, and relying on the premise that homomorphically encrypted minds are conscious for the inner observer to be conscious.
The question is whether encoding all of reality and homomorphically encrypting it necessarily causes a loss of fidelity. If yes, the trilemma still holds. Otherwise there’s no trilemma and the innermost omniscient observer sees all of reality and all internal mental states. I’d argue that for a meaningful omniscient observer to exist it is the case that encoding of reality (into the mind of the observer) must not result in a loss of fidelity. There could be some edge-cases where a polynomial amount of fidelity is lost due to the homomorphic encryption that wouldn’t be lost to the “natural” omniscient observer’s encoding of reality, but I think it stretches the practical definition of omniscience for an observer.
I think the argument extends to physics but the polynomial loss of fidelity is more likely to cause problems in a very homomorphically-encrypted-mind-populated universe.
Hmm… I’m not sure if I’m imagining what you are, but wouldn’t the omniscient observer need to know the key already to encrypt themselves? (If reality somehow contains the key, then I suppose omniscience about reality is enough, but omniscience about physics isn’t.)
It is true that being more encrypted is more compatible with being omniscient. It’s strange because base physics is often thought of as the more omniscient layer. Like, I still think you get “mind exceeds physics” (hence the trilemma), since the omniscient observer you’re positing isn’t just working in base-level physics; they have somehow encrypted themselves with the same key (which is not tractably available). But it seems if they knew the key they wouldn’t even need to encrypt themselves to know anything additional.
To perform homomorphic operations you need the public key, and that also allows one to encrypt any new value and perform further hidden computations under that key. The private key allows decryption of the values.
I suppose you could argue that the homomorphically encrypted mind exists à la mathematical realism even if the public key is destroyed, but it would be something “outside reality” computing future states of the encrypted mind after the public key is no longer available.
Oh, maybe what you are imagining is that it is possible to perceive a homomorphic mind in progress, by encrypting yourself, and feeding intermediate states of that other mind to your own homomorphically encrypted mind. Interesting hypothetical.
I think with respect to “reality” I don’t want to be making a dogmatic assumption “physics = reality” so I’m open to the possibility (C) that the computation occurs “in reality” even if not “in physics”.
After doing some more research I am not sure that it’s always possible to derive a public key knowing only the evaluation key; it seems to depend on the actual FHE scheme.
So the trilemma may be unaffected by this hypothetical. There’s also the question of duplication vs. unification for an observer that has the option to stay at base level reality or enter a homomorphically encrypted computation and whether those should be considered equivalent (enough).
I agree with J Bostock. I see no problem with A. Why do you think that polynomial complexity is this important?
(Thanks for a very nice structuring, btw!)
Speed prior type reasons. Like, a basic intuition is “my experiences are being produced somehow, by some process”. Speed prior leads to “this process is at least somewhat efficient”.
Like, usually if you see a hard computation being done (e.g. mining bitcoin), you would assume it happened somewhere. If one’s experiences are produced by some process, and that process is computationally hard, it raises the question “is the computation happening somewhere?”
My first pass response to this is: Yes, there’s a unique mental state for each physical state, but the aspects of that mental state can be partitioned from each other in ways that are computationally intractable to un-partition. The mapping you use from raw physics or reality to whatever understanding you use it for[1] is a function not a primitive, and in this case that function could place you on either side of an informational partition[2] (depending on whether the mapping function does something like encrypts your viewing portal/perspective). Analogous to looking at an object from different perspectives, which under normal circumstances would be connectable efficiently, but here aren’t.
Normally you can just privilege the simpler mapping function and get everything you’d want, but your simple mapping function isn’t physics, it’s viewing physics from a direction that looks simpler to you. If this is right:
A misses that the homomorphic mind is ‘real’ from a different perspective, there’s just more than one direction to be omniscient into the universe from
The somewhat shaky assumptions B makes aren’t required
I don’t quite think C’s framing fits, it’s not that reality exceeds physics, it’s that physics isn’t informationally/computationally interconnected in a way that makes any single perspective capable of efficiently getting all the interesting information.
I think some of Wolfram’s work on the Ruliad gave me some of the intuitions I’m using here, if this feels worth digging into.
[1] Itself a computational/informational process, which it looks like you’re not staring right at in this post?
[2] The function could even give you access to both sides of the partition, with some complexity addition.
Right so, by step 4 I’m not trying to assume that h is computationally tractable; the homomorphic case goes to show that it’s probably not in general.
With respect to C, perhaps I’m not verbally expressing it that well, but the thing you are thinking of, where there is some omniscient perspective that includes “more than” just the low level of physics (where the “more than” could be certain informational/computational interconnections) would be an instance. Something like, “there is a way to construct an omniscient perspective, it just isn’t going to be straightforwardly derivable from the physical state”.
Thanks for the link to Wolfram’s work. I listened to an interview with him on Lex I think, and wasn’t inspired to investigate further. However what you have provided does seem worthwhile looking into.
There’s no need to drag consciousness and all its metaphysical baggage through all this. Consider instead a simulation of an environment, and a simulated robot in that environment which has sensors and has basic logical reasoning about what it senses, thereby allowing it to “know” various facts about its local environment.
I think then that step 4 is not strictly true. With the robot, M now just refers to its sensory states. I expect that there are many ways to come up with g/h such that the right sort of correspondence is satisfied. But taking into account the k-complexity of g/h allows such a grounding in-practice.
Similarly, it seems clear you could concoct a cursed g/h in this case such that 11.A is true. And the k-complexity is again what keeps you from needing to worry about these.
To be clear I am mainly talking about doxastic states, it’s just that much of the past discussion and accordingly intuitions and terminology is based on “consciousness”.
Step 4 is assuming that there are real f/g/h, which need not be known. I get that this might not be valid if there is fundamental indeterminacy. However even in that case the indeterminacy might decompose into a disjunction over some equivalence class of f/g/h triples?
For particular f/g it seems for natural supervenience to not hold would require extra-physical information, “antennae theory” or something. In the Chalmers sense I mean f/g to determine psycho-physical bridging laws which are sufficient for natural supervenience, so there is no extra “soul goo”. So that the possible indeterminacy of the computational interpretation is fixed by deciding f/g.
I think basically g/h are part of an agent’s anthropic priors. It builds a model of reality and of its state of mind, and has a distribution over ways to bridge these. I don’t know what it would mean for there to be canonical such functions even in principle.
g/h can be posited by an agent e.g. Solomonoff induction.
But also, if you’re talking about agents in the first place as meaningful things, then it seems something like “doxastic mental states” is already reified, in which case you can ask things like “do these supervene on the same reality physics does”… It doesn’t really work to explain doxastic states in terms of other doxastic states in an infinite regress.
Sure.
I reject that there is any such “base ground” from which to define things. An agent has to start with itself as it understands itself. My own talk of agents is grounded in my own subjective experience and sense of meaning ultimately. Even if there was some completely objective one I would still have to start from this place in order to evaluate and accept it.
In practice it all ends up pretty normal. Everyone agrees on what is real for basically the same reason that any bounded agent has to agree on the temperature, even though it’s technically subjective. The k-complexity priors are very constraining.
Well that seems like a good starting point. I guess then, some of the arguments could be subjectivized at the level of, among agents who believe they exist in reality, what possible hypotheses could they have about their mental states and reality and how they relate; is there something like a “disjunction over plausible alternatives” (which would include something like f/g), natural supervenience, etc. Then with k-complexity epistemology it’s possible to ask, what sort of reality theory will that tend to produce, e.g. what would a k-complexity epistemology think about homomorphic encryption, in the case of other agents or itself? One thing I am suggesting is that computation bounded k-complexity type reasoning (speed prior etc) will tend to believe reality contains more information than micro-scale physics, as such information would otherwise be intractable (would be penalized by speed prior). Or put another way, physicalist reductionism “works” for computation-unbounded agents (given supervenience, information about microstates exhausts information about macrostates), but not computation-bounded agents (the derivation of macrostates from microstates is sometimes computationally intractable; this is extra relevant when such macrostates are observable, e.g. in the case of the homomorphically encrypted agent observing its own beliefs).
Forgive me, I’m probably being stupid again 😬.
On efficient computability being necessary for reality: I’m not sure I understand the logic behind this. Would you not always get diagonalization problems if you want supervening “real” things to be blessed with R-efficient computability? For example, take R to be something like Solomonoff induction. R-efficiently computable there means Turing computable. For our M which supervenes on R, instead of minds, let’s let M be the probability p of a given state. The mapping function g : R → M, mapping states to the probability of states, cannot be R-efficiently computed (no matter what sort of Turing machine or speed prior you use for R) for diagonalization reasons. So the probabilities of states aren’t a “real” thing? It seems like a lot of natural emergent things wouldn’t be R-efficiently computable.
On homomorphic encryption being un-reversible: quantum computers are reversible, right? So if you say physics is as powerful as a quantum computer, and you want homomorphic encryption to be uncomputable in polynomial time, you have to make P’s physics “state” throw quantum information away over time (which it could, in e.g. Copenhagen or objective collapse interpretations, but does not in e.g. many worlds) or maybe restrict the size of the physical universe you’re giving as state to not include information we radiated away many years ago (less than 62.9 billion light years).
(Don’t feel obligated to reply)
Hmm… I think with Solomonoff induction I would say R is the UTM input, plus the entire execution trace/trajectory. Then M would be like the agent’s observations, which are a simple function of R.
I see that we can’t have all “real” things being R-efficiently computable. But the thing about doxastic states is, some agent has access to them, so it seems like from their perspective, they are “effective”, being “produced somewhere”… so I infer they are probably “computed in reality” in some sense (although that’s not entirely clear). They have access to their beliefs/observations in a more direct way than they have access to probabilities.
With respect to reversibility: The way I was thinking about it was that when the key is erased, it’s erased really far away. Then the heat from the key gets distributed somehow. Like the information could even enter a black hole. Then there would be no way to retrieve it. (Shouldn’t matter too much anyway if natural supervenience is local, then mental states couldn’t be affected by far away physical states anyway)
Here’s a pure quantum, information-theoretic, no-computability-assumptions version that might or might not be illustrative. I don’t actually know if the quantum computer I’m talking about could be built; I’m going off intuition. EDIT: I think this is 2-party quantum computation, and none of the methods I’ve found are quite as strong as what I list here (real methods require e.g. a number of entangled qubits on the order of the size of the computation).
You have two quantum computers, Alice and Bob, performing the same computation steps. Alice and Bob have entangled qubits. If you observe the qubits of either Alice or Bob in isolation, you’ll forever get provably random noise from both of them. But if you bring Alice and Bob together and line up their qubits and something something mumble, you get a pure state and can read off their joint computation.
Now we have all sorts of fun thought experiments. You run Alice and Bob, separating them very far from one another. Is Alice currently running a mind computation? Provably not, if someone looked at Bob last year. But Bob is many many light years away—how can we know if someone looked at Bob? What if we separate Alice and Bob past each other’s cosmic horizons, such that the acceleration of the expanding universe makes it impossible for them to ever reach each other again even if they run towards each other at the speed of light? Or send Bob to Alpha Centauri and back at close to the speed of light so he’s aged only 1 year where Alice has aged 8. Has Alice been doing the mind thing for the past 7 years? Depends on whether you look at Bob or not.
(but I’ll note that for me, this version, like the homomorphic version, is mostly saying that your description of a quantum physics state shouldn’t be purely local. A purely local description must discard information, something something mixed state Von Neumann entropy)
Yeah, that seems like a case where non-locality is essential to the computation itself. I’m not sure how the “provably random noise from both” would work, though. Like, it is possible to represent some string as the XOR of two different strings, each of which is itself uniformly random. But I don’t know how to generalize that to computation in general.
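For reference, the classical analogue mentioned here (splitting a secret into two shares, each uniformly random on its own, whose XOR recovers the secret) is easy to write down; the interesting part in the quantum and homomorphic cases is letting the hidden computation itself proceed. A minimal sketch:

```python
import secrets

# XOR secret sharing: each share alone is uniformly random noise;
# only the two together reconstruct the secret.
def split(secret: bytes):
    share_a = secrets.token_bytes(len(secret))
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

a, b = split(b"some string")
assert combine(a, b) == b"some string"
```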
I think some of the non locality is inherited from “no hidden variable theory”. Like it might be local in MWI? I’m not sure.
This is an extremely cool line of argument. The first thing that has concretely advanced my understanding of consciousness in quite a while.
Thanks, glad you found this helpful.
Brains are already, in effect, encrypted. That is, we don’t know how they work. We can’t trace the wiring in useful detail and see exactly how they produce some visible action, even things as simple as engaging in conversation or walking without falling over. (We don’t know nothing at all, but we don’t know enough by a long way.) The same applies to LLMs. We do know how LLMs are trained, but we do not know how the resulting LLM works. Their behaviours are encoded in their parameters and we have no decryption key.
Forgive me, I only scanned.
You’re talking about exponentially unlikely physical states, like the kind where you disintegrate from location 1 and, just by chance, an identical copy of you appears in location 2 for no reason, or the thermodynamic arrow of time runs backwards, or states that encode a mind you can’t decode without the right homomorphic key but then the homomorphic key appears in your alphabet soup just by chance, or your whole life was an elaborate prank for a reality TV show and most of the universe is actually made of cheese, or there’s a giant superintelligent pink elephant in every room but just by chance nobody notices them, or the Easter Bunny and Harry Potter both appear and their magic works just by chance each time they try to use it (in a way conforming to the standard model), or whatever. These states with ≈0 measure might be theoretically possible but personally I don’t put much stock in thought experiments about them? EDIT: still only scanned, but I think I misread the post. I (unconfidently) think the post is about if someone homomorphically encrypts a mind computation, then moves the information in the key past the cosmic event horizon of the expanding universe so the information in the key and the encrypted mind can never return together again. (Or are exponentially unlikely to). You can get an effect like this by e.g. burning the key and letting the infrared light of the fire escape to the blackness of the night sky.
That only comes in in step 10. I agree it’s somewhat suspect. The main reason to imagine these scenarios is temporal locality of natural supervenience. That is, I believe that an agent does not have mental access to the distant past except mediated by the recent past and the present. Any access implying mental states would have to make no behavioral difference, else physical causality would be contradicted. So the randomly generated key is a supporting intuition for temporal locality, and I agree it has problems, but I still think temporal locality is correct, otherwise there would be strange consequences about knowing about the distant past not mediated by the recent past.