Quotes and Notes on Scott Aaronson’s “The Ghost in the Quantum Turing Machine”

This highly speculative paper has been discussed here before, but I found the discussion’s quality rather disappointing. People generally took bits and pieces out of context and then mostly offered arguments already addressed in the paper. Truly the internet is the most powerful misinterpretation engine ever built. It’s nice to see that Scott, who is no stranger to online adversity, is taking it in stride.

So I went through the paper and took notes, which I posted on my blog, but I am also attaching them below in the hope that someone else here finds them useful. I initially intended to write this up as a comment in the other thread, but it grew too long for a comment, so I am making it a post. Feel free to downvote if you think it does not belong in Discussion (or for any other reason, of course).

TL;DR: The main idea of the paper is, as far as I can tell, that it is possible to construct a physical model, potentially relevant to the “free” part of the free-will debate, in which some events cannot be predicted at all, not even probabilistically the way quantum mechanics predicts them. Scott also proposes one possible mechanism for this “Knightian unpredictability”: the not-yet-decohered parts of the initial state of the universe, such as the Cosmic Microwave Background radiation. He does not take a position on whether the model is correct, only that it is potentially testable and thus shifts a small piece of the age-old philosophical debate on free will into the realm of physics.

For those here who say that the free-will question has been dissolved, let me note that the picture presented in the paper is one explicitly rejected by Eliezer, probably a bit hastily. Specifically in this diagram:

Eliezer says that the sequential picture on the left is the only correct one, whereas Scott offers a perfectly reasonable model which is better described by the picture on the right. To restate: there is a part of the past (Scott calls it “microfacts”) which evolves reversibly and unitarily until some time in the future. Because this part has not been measured yet, there is no way, not even probabilistically, to estimate its influence on some future event in which those microfacts interact with the rest of the world and decohere, thus affecting “macrofacts”, potentially including human choices. This last speculative idea could be tested if it is shown that small quantum fluctuations can be chaotically amplified to macroscopic levels. If this model is correct, it may have significant consequences for whether a human mind can be successfully cloned and for whether an AI can be called sentient, or even how it could be made sentient.

My personal impression is that Scott’s arguments are much better thought through than the speculations by Penrose in his books, but you may find otherwise. I also appreciate this paper for doing what mainstream philosophers are qualified and ought to do, but consistently fail to do: look at one of the Big Questions, chip away some small solvable piece of it, and offer this piece to qualified scientists.

Anyway, below are my notes and quotes. If you think you have found an obvious objection to some of the quotes, this is likely because I did not provide enough context, so please read the relevant section of the paper before pointing it out. It may also be useful to recite the Litany of a Bright Dilettante.


p.6. On QM’s potentially limiting “an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems” : “In this essay I’ll argue strongly [...] that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no. And we don’t yet know which kind we live in.”

p. 7. “The [...] idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong.”

“the situation seems different if we set aside the “will” part of free will, and consider only the “free” part.”

“I’ll use the term freedom, or Knightian freedom, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors. [...] we lack a reliable way even to quantify using probability distributions.”

p.8. “I tend to see Knightian unpredictability as a necessary condition for free will. In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe “free will” to the system. Why not admit that we now fully understand what makes this system tick?”

p.12. “from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.”—professional philosophers would do well to keep this in mind. Of course, once you break off such an answerable part, it tends to leave the realm of philosophy and become a natural science of one kind or another. Maybe something useful professional philosophers could do is to look for “answerable parts”, break them off, and pass them along to the experts in the subject matter. And maybe look for the answers in the natural sciences and see how they help sculpt the “unanswerable riddles”.

p.14. Weak compatibilism: “My perspective embraces the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly “compatibilist.” On the other hand, I care whether our choices can actually be mechanically predicted—not by hypothetical Laplace demons but by physical machines. I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions).”

p.19. Importance of copyability: “the problem with this response [that you are nothing but your code] is simply that it gives up on science as something agents can use to predict their future experiences. The agents wanted science to tell them, “given such-and-such physical conditions, here’s what you should expect to see, and why.” Instead they’re getting the worthless tautology, “if your internal code causes you to expect to see X, then you expect to see X, while if your internal code causes you to expect to see Y, then you expect to see Y.” But the same could be said about anything, with no scientific understanding needed! To paraphrase Democritus, it seems like the ultimate victory of the mechanistic worldview is also its defeat.”—If a mind cannot be copied perfectly, then there is no such thing as your “code”, i.e. an algorithm which can be run repeatedly.

p.20. Constrained determinism: “A form of “determinism” that applies not merely to our universe, but to any logically possible universe, is not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about.”

p.21. Bell’s theorem, quoting Conway and Kochen: “if there’s no faster-than-light communication, and Alice and Bob have the “free will” to choose how to measure their respective particles, then the particles must have their own “free will” to choose how to respond to the measurements.”—the particles’ “free will” is still constrained by the laws of Quantum Mechanics, however.

p.23. Multiple (micro-)past compatibilism: “multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past macrofacts are outside our ability to alter. [...] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts.”

p.26. Singulatarianism: “all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion. If the brain is a “meat computer,” then given the right technology, why shouldn’t we be able to copy its program from one physical substrate to another? [...] given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds can’t be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish. If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are true by repeating that their ideas are crazy and weird. If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that makes it a fantasy.”

p.27. Predictability of human mind: “I believe neuroscience might someday advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is “physically predictable by outside observers” in the same sense as a digital computer.”

p.28. Em-ethics: “I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner.”—E.g. it’s not immoral to stop a simulation which can be resumed or restored from a backup. (The cryonics implications are obvious.) “Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would.”—Again, this is a pretty transhumanist view; see the anti-deathist position of Eliezer Yudkowsky as expressed in HPMoR.

p.29. Probabilistic uncertainty vs Knightian uncertainty: “if we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and probabilistic predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics. [...] If we know a system’s quantum state, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system. But if we don’t know the state, then [the state] itself can be thought of as subject to Knightian uncertainty.”
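
As a gloss (my notation, not the paper’s): for a known quantum state the Born rule pins down a unique outcome distribution, whereas Knightian uncertainty about the state itself leaves only a set of candidate distributions, with no weights over them.

```latex
% Born rule for a known state \rho and a measurement with POVM elements E_i:
\Pr[\text{outcome } i] \;=\; \operatorname{Tr}(E_i\,\rho), \qquad \sum_i E_i = I.
% If \rho is only known to lie in some set S, with no probability distribution
% over S, the outcome probabilities are pinned down only to a set of values:
\Pr[\text{outcome } i] \;\in\; \{\operatorname{Tr}(E_i\,\sigma) : \sigma \in S\}.
```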

On the source of this unquantifiable “Knightian uncertainty”: “in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices. That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)”

p.30. “In economics, the “second type” of uncertainty—the type that can’t be objectively quantified using probabilities—is called Knightian uncertainty, after Frank Knight, who wrote about it extensively in the 1920s [49]. Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [87] under the name “black swans”).”

p.31. “I think that the free-will-is-incoherent camp would be right, if all uncertainty were probabilistic.” Bayesian fundamentalism: “Bayesian probability theory provides the only sensible way to represent uncertainty. On that view, “Knightian uncertainty” is just a fancy name for someone’s failure to carry a probability analysis far enough.” Against the Dutch-booking argument for Bayesian fundamentalism: “A central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious.”

p.32. Objective prior: “one can’t use Bayesianism to justify a belief in the existence of objective probabilities underlying all events, unless one is also prepared to defend the existence of an “objective prior.””

Universal prior: “a distribution that assigns a probability proportional to 2^(−n) to every possible universe describable by an n-bit computer program.” Why it may not be a useful “true” prior: “a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible. But that’s conceptually very different from an entity that already knows the probabilities.”
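
For reference, here is the standard definition of the Solomonoff universal prior the quote alludes to (textbook form, not quoted from the paper), where U is a fixed prefix universal Turing machine and ℓ(p) is the bit-length of a program p:

```latex
% Universal (Solomonoff) prior over outputs x of a prefix universal machine U:
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)},
% so any universe describable by some n-bit program receives weight at least 2^{-n}.
```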

p.34. Quantum no-cloning: “it’s possible to create a physical object that (a) interacts with the outside world in an interesting and nontrivial way, yet (b) effectively hides from the outside world the information needed to predict how the object will behave in future interactions.”
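
As a reminder, the textbook argument behind no-cloning (my sketch, not the paper’s) follows from linearity alone: a unitary that copies the basis states cannot also copy their superpositions.

```latex
% Suppose a cloner U satisfied U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle for every state |\psi\rangle. Then
U|0\rangle|0\rangle = |0\rangle|0\rangle, \qquad U|1\rangle|0\rangle = |1\rangle|1\rangle,
% but for |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}, linearity forces
U|+\rangle|0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|0\rangle + |1\rangle|1\rangle\bigr)
\;\neq\; |+\rangle|+\rangle = \tfrac{1}{2}\bigl(|00\rangle + |01\rangle + |10\rangle + |11\rangle\bigr),
% so no single unitary can clone an unknown quantum state.
```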

p.35. Quantum teleportation answers the problem of “what to do with the original after you fax a perfect copy of you to be reconstituted on Mars”: “in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself”
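
For context, the standard teleportation protocol (textbook version, not quoted from the paper) makes the “inevitable byproduct” point concrete: Alice’s Bell-basis measurement leaves her qubits in a state carrying no information about the teleported amplitudes.

```latex
% Alice holds an unknown qubit and half of a Bell pair shared with Bob:
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad
|\Phi^+\rangle_{A'B} = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr).
% Rewriting the joint state in the Bell basis \{|\beta_{ab}\rangle\} of Alice's two qubits:
|\psi\rangle_A \otimes |\Phi^+\rangle_{A'B}
  \;=\; \tfrac{1}{2} \sum_{a,b \in \{0,1\}} |\beta_{ab}\rangle_{AA'} \otimes X^b Z^a |\psi\rangle_B.
% Alice's Bell measurement yields two classical bits (a,b); Bob applies Z^a X^b to
% recover |\psi\rangle. Alice is left holding |\beta_{ab}\rangle, which is independent of
% \alpha and \beta: the original is consumed by the protocol rather than deleted afterwards.
```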

p.36. Freebit picture: “due to Knightian uncertainty about the universe’s initial quantum state, at least some of the qubits found in nature are regarded as freebits” making “predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically”. Freebits are qubits because otherwise they could be measured without violating no-cloning. Observer-independence requirement: “it must not be possible (even in principle) to trace [the freebit’s] causal history back to any physical process that generated [the freebit] according to a known probabilistic ensemble.”

p.37. On existence of freebits: “In the actual universe, are there any quantum states that can’t be grounded in PMDs?” A PMD, or “past macroscopic determinant”, is a classical observable that would have let one non-invasively predict the prospective freebit, probabilistically, to arbitrary accuracy. This is the main question of the paper: can freebits from the initial conditions of the universe survive till the present day and even affect human decisions?

p.38. CMB (cosmic microwave background) radiation is one potential example of freebits: the detected CMB radiation has not interacted with matter since the last scattering, roughly 380,000 years after the Big Bang. Objections: a) the last scattering is not the initial conditions by any means, b) one can easily shield from the CMB.

p.39. Freebit effects on decision-making: “what sorts of changes to [the quantum state of the entire universe] would or wouldn’t suffice to … change a particular decision made by a particular human being? … For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?” due to potential amplification of “microscopic fluctuations to macroscopic scale”. Sort of a quantum butterfly effect.

p.40. Freebit amplification issues: amplification time and locality. Locality: the freebit affects only the person’s actions, which then mediate all of its other influences on the rest of the world; i.e., there is no direct freebit effect on anything else. On why these questions are interesting: “I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today. And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections.”

p.41. Role of freebits: “freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore “free” in the sense that interests us.” A freebit could be just a noise source, one that “foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity.”

p.42. “Freedom from the inside out”: “isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?” “Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments.” E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.

p.44. Harmonization problem: backward causality leads to all kinds of problems and paradoxes. Not an issue for the freebit model, as backward causality can point only to “microfacts”, which do not affect any “macrofacts”. “the causality graph will be a directed acyclic graph (a dag), with all arrows pointing forward in time, except for some “dangling” arrows pointing backward in time that never lead anywhere else.” The latter is justified by “no-cloning”. In other words, “for all the events we actually observe, we must seek their causes only to their past, never to their future.” -- This backward causality moniker seems rather unfortunate and misleading, given that it seems to replace the usual idea of discovery of some (micro)fact about the past with “a microfact is directly caused by a macrofact F to its future”. “A simpler option is just to declare the entire concept of causality irrelevant to the microworld.”

p.45. Micro/Macro distinction: A potential solution: “a “macrofact” is simply any fact of which the news is already propagating outward at the speed of light”. I.e. an interaction turns a microfact into a macrofact. This matches Zurek’s einselection ideas.

p.47. Objections to freebits. 5.1: Humans are very predictable. “Perhaps, as Kane speculates, we truly exercise freedom only for a relatively small number of “self-forming actions” (SFAs)—that is, actions that help to define who we are—and the rest of the time are essentially “running on autopilot.”” Also note “the conspicuous failure of investors, pundits, intelligence analysts, and so on actually to predict, with any reliability, what individuals or even entire populations will do”.

p.48. 5.2: The weather objection: How are brains different from weather? “brains seem “balanced on a knife-edge” between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality. [...] a single freebit could plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically.”

p.49. 5.3: The gerbil objection: if a brain or an AI is isolated from freebits except through a gerbil in a box connected to it, then “the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a “capacity for freedom” it wouldn’t have had otherwise,” in essence becoming the soul of the machine. “Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious.” Potential reply: the brain is not like the AI in that “In the AI/gerbil system, the “intelligence” and “Knightian noise” components were cleanly separable from one another. [...] With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well.” Now this comes down to the issue of identity.

“Suppose the nanorobots do eventually complete their scan of all the “macroscopic, cognitively-relevant” information in your brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain. Would that simulation be you? If your “original” brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version? (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) [...] My conclusion is that either you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section 2.5, or else you can’t bear the same sort of “uninteresting” relationship to the “non-functional” degrees of freedom in your brain that the AI bore to the gerbil box.”

p.51. The Initial-State Objection: “the notion of “freebits” from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics” because “it follows from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any “interesting” information”. The reply is rather involved and discusses several new speculative ideas in physics. It boils down to “when discussing extreme situations like the Big Bang, it’s not okay to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them. And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that no one knows what sorts of correlations might have been present in the universe’s initial microstate.”

p.52. The Wigner’s-Friend Objection: a macroscopic object “in a superposition of two mental states” would need freebits to make a separate “free decision” in each branch, requiring 2^(number of states) freebits for independent decision-making in each one.

Moreover “if the freebit picture is correct, and the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—the subject no longer has the “capacity for Knightian freedom,” and is now a “mechanistic,” externally-characterized physical system similar to a large quantum computer.”

p.55. “what makes humans any different [from a computer]? According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we all exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a “practical” problem, caused by rapid decoherence? Here I reiterate the speculation put forward in Section 4.2: that the decoherence of a state should be considered “fundamental” and “irreversible,” precisely when [it] becomes entangled with degrees of freedom that are receding toward our de Sitter horizon at the speed of light, and that can no longer be collected together even in principle. That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above. But it plausibly can’t be avoided by any entity that we would currently recognize as “human.””

p.56. Difference from Penrose: “I make no attempt to “explain consciousness.” Indeed, that very goal seems misguided to me, at least if “consciousness” is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses.”

p.57. “instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never “really” enjoy fresh strawberries, but at most claim to enjoy them.”

“the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.”

“I’m profoundly skeptical that any of the existing objective reduction [by minds] models are close to the truth. The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity.”

“I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws”

p.61. Boltzmann brains: “By the time thermal equilibrium is reached, the universe will (by definition) have “forgotten” all details of its initial state, and any freebits will have long ago been “used up.” In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits. So, on this account, Boltzmann brains wouldn’t be “free,” even during their brief moments of existence.”

p.62. What Happens When We Run Out of Freebits? “the number of freebits accessible to any one observer must be finite—simply because the number of bits of any kind is then upper-bounded by the observable universe’s finite holographic entropy. [...] this should not be too alarming. After all, even without the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most ~10^122 “interesting events,” of any kind, before it settles into thermal equilibrium.”
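
A rough back-of-the-envelope for where a number like 10^122 comes from (my arithmetic, not the paper’s): it is approximately the Bekenstein-Hawking entropy of our de Sitter horizon, given the observed cosmological constant.

```latex
% de Sitter horizon radius and its Bekenstein-Hawking entropy, in Planck units:
r_\Lambda = \sqrt{3/\Lambda}, \qquad
S_{\mathrm{dS}} \;=\; \frac{A}{4\,\ell_P^2} \;=\; \frac{4\pi r_\Lambda^2}{4\,\ell_P^2}
  \;=\; \frac{3\pi}{\Lambda\,\ell_P^2} \;\approx\; 3 \times 10^{122},
% using the observed \Lambda\,\ell_P^2 \approx 3 \times 10^{-122}. This bounds the total
% number of bits of any kind (freebits included) accessible to a single observer.
```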

p.63. Indexicality: “indexical puzzle: a puzzle involving the “first-person facts” of who, what, where, and when you are, which seems to persist even after all the “third-person facts” about the physical world have been specified.” This is similar to Knightian uncertainty: “For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically. Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, “here, this chunk is you; its experiences are your experiences.””

Free will connection: Take two heretofore identical Earths, A and B, in an infinite universe, which are about to diverge based on your decision; it is not possible for a superintelligence to predict this decision, not even probabilistically, because it is based on a freebit:

“Maybe “youA” is the “real” you, and taking the new job is a defining property of who you are, much as Shakespeare “wouldn’t be Shakespeare” had he not written his plays. So maybe youB isn’t even part of your reference class: it’s just a faraway doppelgänger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but isn’t you. So maybe p = 1. Then again, maybe youB is the “real” you and p = 0. Ultimately, not even a superintelligence could calculate p without knowing something about what it means to be “you,” a topic about which the laws of physics are understandably silent.” “For me, the appeal of this view is that it “cancels two philosophical mysteries against each other”: free will and indexical uncertainty”.

p.65. Falsifiability: “If human beings could be predicted as accurately as comets, then the freebit picture would be falsified.” But this prediction has “an unsatisfying, “god-of-the-gaps” character”. Another: chaotic amplification of quantum uncertainty locally and on “reasonable” timescales. Another: “consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain. [...] imagine that the photons’ quantum states cannot be altered, maintaining a spacetime history consistent with the laws of physics, without also altering classical degrees of freedom in the photons’ causal past. In that case, the freebit picture would once again fail.”

p.68. Conclusions: “Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior—so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?”

“does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that (1) encode everything relevant to memory and cognition, (2) can be accurately modeled as performing a classical digital computation, and (3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random number sources, generating noise according to prescribed probability distributions? Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, neither answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!”

In a world where a cloning device is possible the indexical questions “would no longer be metaphysical conundrums, but in some sense, just straightforward empirical questions about what you should expect to observe!”

p.69. Reason and mysticism. “but what do I really think?” “in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail, [...] I don’t have any sort of special intuition [...]. The arguments exhaust my intuition.”