Followup to: Stuff that Makes Stuff Happen
Previous meditation: Does the idea that everything is made of causes and effects meaningfully constrain experience? Can you coherently say how reality might look, if our universe did not have the kind of structure that appears in a causal model?
I can describe to you at least one famous universe that didn’t look like it had causal structure, namely the universe of J. K. Rowling’s Harry Potter.
You might think that J. K. Rowling’s universe doesn’t have causal structure because it contains magic—that wizards wave their wands and cast spells, which doesn’t make any sense and goes against all science, so J. K. Rowling’s universe isn’t ‘causal’.
In this you would be completely mistaken. The domain of “causality” is just “stuff that makes stuff happen and happens because of other stuff”. If Dumbledore waves his wand and therefore a rock floats into the air, that’s causality. You don’t even have to use words like ‘therefore’, let alone big fancy phrases like ‘causal process’, to put something into the lofty-sounding domain of causality. There’s causality anywhere there’s a subject, a verb, and an object: ‘Dumbledore’s wand lifted the rock.’ So far as I could tell, there wasn’t anything in Lord of the Rings that violated causality.
You might worry that J. K. Rowling had made a continuity error, describing a spell working one way in one book, and a different way in a different book. But we could just suppose that the spell had changed over time. If we actually found ourselves in that apparent universe, and saw a spell have two different effects on two different occasions, we would not conclude that our universe was uncomputable, or that it couldn’t be made of causes and effects.
No, the only part of J. K. Rowling’s universe that violates ‘cause and effect’ is...
...the Time-Turners, of course.
A Time-Turner, in Rowling’s universe, is a small hourglass necklace that sends you back in time 1 hour each time you spin it. In Rowling’s universe, this time-travel doesn’t allow for changing history; whatever you do after you go back, it’s already happened. The universe containing the time-travel is a stable, self-consistent object.
If a time machine does allow for changing history, it’s easy to imagine how to compute it; you could easily write a computer program which would simulate that universe and its time travel, given sufficient computing power. You would store the state of the universe in RAM and simulate it under the programmed ‘laws of physics’. Every nanosecond, say, you’d save a copy of the universe’s state to disk. When the Time-Changer was activated at 9pm, you’d retrieve the saved state of the universe from one hour ago at 8pm, load it into RAM, and then insert the Time-Changer and its user in the appropriate place. This would, of course, dump the rest of the universe from 9pm into oblivion—no processing would continue onward from that point, which is the same as ending that world and killing everyone in it.
Still, if we don’t worry about the ethics or the disk space requirements, then a Time-Changer which can restore and then change the past is easy to compute. There’s a perfectly clear order of causality in metatime, in the linear time of the simulating computer, even if there are apparent cycles as seen from within the universe. The person who suddenly appears with a Time-Changer is the causal descendant of the older universe that just got dumped from RAM.
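The snapshot-and-restore scheme is easy to sketch in code. Everything here is a toy stand-in: the “universe” is just a short list of numbers, and `laws_of_physics` is an arbitrary deterministic update rule.

```python
# A toy sketch of change-the-past time travel as snapshot-and-restore.

def laws_of_physics(state):
    """Toy physics: every variable drifts upward by 1 per tick."""
    return [x + 1 for x in state]

def run_with_time_changer(initial_state, activate_at, rewind_to, payload, total_ticks):
    """Simulate forward, snapshotting every tick. When the Time-Changer fires,
    reload the old snapshot and splice the traveler (`payload`) into it; the
    states computed after the rewind point are simply discarded from memory."""
    snapshots = {0: initial_state}
    state = initial_state
    for t in range(1, total_ticks + 1):
        state = laws_of_physics(state)
        snapshots[t] = state
        if t == activate_at:
            # Restore the past and insert the time-traveler into it.
            state = snapshots[rewind_to] + [payload]
            # Everything after the rewind point is dumped into oblivion.
            snapshots = {k: v for k, v in snapshots.items() if k < rewind_to}
            snapshots[rewind_to] = state
    return state

# The traveler (99) appears at tick 2 and then evolves forward with everything else.
assert run_with_time_changer([0], activate_at=3, rewind_to=2, payload=99, total_ticks=5) == [4, 101]
```

The loop variable `t` plays the role of metatime: the simulating computer’s own perfectly linear causal order, even though the simulated world appears to contain a jump backward.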
But what if instead, reality is always—somehow—perfectly self-consistent, so that there’s apparently only one universe with a future and a past that never changes, so that the person who appears at 8PM has always seemingly descended from the very same universe that then develops by 9PM...?
How would you compute that in one sweep-through, without any higher-order metatime?
What would a causal graph for that look like, when the past descends from its very own future?
And the answer is that there isn’t any such causal graph. Causal models are sometimes referred to as DAGs, short for Directed Acyclic Graphs. If instead there’s a directed cycle, there’s no obvious order in which to compute the joint probability table. Even if you somehow knew that at 8PM somebody was going to appear with a Time-Turner used at 9PM, you still couldn’t compute the exact state of the time-traveller without already knowing the future at 9PM, and you couldn’t compute the future without knowing the state at 8PM, and you couldn’t compute the state at 8PM without knowing the state of the time-traveller who just arrived.
In a causal model, you can compute p(9pm|8pm) and p(8pm|7pm) and it all starts with your unconditional knowledge of p(7pm) or perhaps the Big Bang, but with a Time-Turner we have p(9pm|8pm) and p(8pm|9pm) and we can’t untangle them—multiplying those two conditional matrices together would just yield nonsense.
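Whether any valid order of computation exists is exactly the question of whether the causal graph topologically sorts. A minimal sketch using Kahn’s algorithm (the node names are purely illustrative):

```python
# Kahn's algorithm: return a causal (topological) ordering of a directed
# graph, or None when a directed cycle makes every ordering impossible.

def causal_order(nodes, edges):
    indegree = {n: 0 for n in nodes}
    for parent, child in edges:
        indegree[child] += 1
    frontier = [n for n in nodes if indegree[n] == 0]
    order = []
    while frontier:
        n = frontier.pop()
        order.append(n)
        for parent, child in edges:
            if parent == n:
                indegree[child] -= 1
                if indegree[child] == 0:
                    frontier.append(child)
    return order if len(order) == len(nodes) else None  # None => cycle

# Ordinary physics: 7pm -> 8pm -> 9pm can be computed in order.
assert causal_order(["7pm", "8pm", "9pm"],
                    [("7pm", "8pm"), ("8pm", "9pm")]) == ["7pm", "8pm", "9pm"]
# Add a Time-Turner edge from 9pm back to 8pm: no ordering exists.
assert causal_order(["7pm", "8pm", "9pm"],
                    [("7pm", "8pm"), ("8pm", "9pm"), ("9pm", "8pm")]) is None
```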
Does this mean that the Time-Turner is beyond all logic and reason?
Complete philosophical panic is basically never justified. We should even be reluctant to say anything like, “The so-called Time-Turner is beyond coherent description; we only think we can imagine it, but really we’re just talking nonsense; so we can conclude a priori that no such Time-Turner can exist; in fact, there isn’t even a meaningful thing that we’ve just proven can’t exist.” This is also panic—it’s just been made to sound more dignified. The first rule of science is to accept your experimental results, and generalize based on what you see. What if we actually did find a Time-Turner that seemed to work like that? We’d just have to accept that Causality As We Previously Knew It had gone out the window, and try to make the best of that.
In fact, despite the somewhat-justified conceptual panic which the protagonist of Harry Potter and the Methods of Rationality undergoes upon seeing a Time-Turner, a universe like that can have a straightforward logical description even if it has no causal description.
Conway’s Game of Life is a very simple specification of a causal universe; what we would today call a cellular automaton. The Game of Life takes place on a two-dimensional square grid, so that each cell is surrounded by eight others, and the Laws of Physics are as follows:
A cell with 2 living neighbors during the last tick, retains its state from the last tick.
A cell with 3 living neighbors during the last tick, will be alive during the next tick.
A cell with fewer than 2 or more than 3 living neighbors during the last tick, will be dead during the next tick.
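The three rules above translate directly into code. A minimal sketch, representing the (unbounded) grid as a set of live-cell coordinates:

```python
# One tick of Conway's Game of Life, implementing exactly the three rules:
# 2 neighbors -> keep state, 3 neighbors -> alive, otherwise -> dead.

def life_step(live_cells):
    """`live_cells` is a set of (x, y) pairs; returns the next tick's set."""
    neighbor_counts = {}
    for (x, y) in live_cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbor_counts[cell] = neighbor_counts.get(cell, 0) + 1
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "blinker" oscillates with period 2: lawful development under exceptionless rules.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```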
It is my considered opinion that everyone should play around with Conway’s Game of Life at some point in their lives, in order to comprehend the notion of ‘laws of physics’. Playing around with Life as a kid (on a Mac Plus) helped me gut-level-understand the concept of a ‘lawful universe’ developing under exceptionless rules.
Now suppose we modify the Game of Life universe by adding some prespecified cases of time travel—places where a cell will descend from neighbors in the future, instead of the past.
In particular we shall take a 4x4 Life grid, and arbitrarily hack Conway’s rules to say:
On the 2nd tick, the cell at (2,2) will have its state determined by that cell’s state on the 3rd tick, instead of its neighbors on the 1st tick.
It’s no longer possible to compute the state of each cell at each time in a causal order, starting from known cells and computing their not-yet-known causal descendants. The state of the cells on the 3rd tick depends on the state of the cells on the 2nd tick, which depends on the state on the 3rd tick.
In fact, the time-travel rule, on the same initial conditions, also permits a live cell to travel back in time, not just a dead cell—this just gives us the “normal” grid! Since you can’t compute things in order of cause and effect, even though each local rule is deterministic, the global outcome is not determined.
However, you could simulate Life with time travel merely by brute-force searching through all possible Life-histories, discarding all histories which disobeyed the laws of Life + time travel. If the entire universe were a 4-by-4 grid, it would take 16 bits to specify a single slice through Time—the universe’s state during a single clock tick. If the whole of Time were only 3 ticks long, a complete candidate ‘history of the universe’ would be just 48 bits. 2^48 is 281,474,976,710,656, so with a cluster of 2GHz CPUs it would be quite practical to find, for this rather tiny universe, the set of all possible histories that obey the logical relations of time travel.
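Here is a brute-force sketch of that search, shrunk to a 2-by-2 grid over 3 ticks (12 bits, 4,096 candidate histories) so it finishes instantly; the hacked time-traveling cell is placed at (0,0) rather than (2,2):

```python
from itertools import product

def consistent_histories(width=2, height=2, ticks=3, hacked=(0, 0)):
    """Enumerate every candidate history of a tiny Life universe and keep only
    those obeying Life plus the time-travel hack: the hacked cell's state on
    the 2nd tick must equal its own state on the 3rd tick, instead of
    following from its neighbors on the 1st tick."""
    cells = [(x, y) for x in range(width) for y in range(height)]

    def life(prev, cell):
        x, y = cell
        n = sum(prev.get((x + dx, y + dy), 0)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0))
        return 1 if n == 3 else (prev[cell] if n == 2 else 0)

    def lawful(history):
        # history[0] is the 1st tick, history[1] the 2nd, history[2] the 3rd.
        for t in range(1, ticks):
            for c in cells:
                if t == 1 and c == hacked:
                    if history[1][c] != history[2][c]:  # the time-travel relation
                        return False
                elif history[t][c] != life(history[t - 1], c):
                    return False
        return True

    histories = []
    # Every assignment of 0/1 to (cells x ticks) is a candidate history.
    for bits in product((0, 1), repeat=len(cells) * ticks):
        history = [dict(zip(cells, bits[t * len(cells):(t + 1) * len(cells)]))
                   for t in range(ticks)]
        if lawful(history):
            histories.append(history)
    return histories
```

Nothing is computed “in order” here; the program just checks each whole candidate History of Time against the logical relations and keeps the consistent ones.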
It would no longer be possible to point to a particular cell in a particular history and say, “This is why it has the ‘alive’ state on tick 3”. There’s no “reason”—in the framework of causal reasons—why the time-traveling cell is ‘dead’ rather than ‘alive’, in the history we showed. (Well, except that Alex, in the real universe, happened to pick it out when I asked him to generate an example.) But you could, in principle, find out what the set of permitted histories is for a large digital universe, given lots and lots of computing power.
Here’s an interesting question I do not know how to answer: Suppose we had a more complicated set of cellular automaton rules, on a vastly larger grid, such that the cellular automaton was large enough, and supported enough complexity, to permit people to exist inside it and be computed. Presumably, if we computed out cell states in the ordinary way, each future following from its immediate past, the people inside it would be as real as we humans computed under our own universe’s causal physics.
Now suppose that instead of computing the cellular automaton causally, we hack the rules of the automaton to add large time-travel loops—change their physics to allow Time-Turners—and with an unreasonably large computer, the size of two to the power of the number of bits comprising an entire history of the cellular automaton, we enumerate all possible candidates for a universe-history.
So far, we’ve just generated all 2^N possible bitstrings of size N, for some large N; nothing more. You wouldn’t expect this procedure to generate any people or make any experiences real, unless enumerating all finite strings of size N causes all lawless universes encoded in them to be real. There’s no causality there, no computation, no law relating one time-slice of a universe to the next...
Now we set the computer to look over this entire set of candidates, and mark with a 1 those that obey the modified relations of the time-traveling cellular automaton, and mark with a 0 those that don’t.
If N is large enough—if the size of the possible universe and its duration is large enough—there would be descriptions of universes which experienced natural selection, evolution, perhaps the evolution of intelligence, and of course, time travel with self-consistent Time-Turners, obeying the modified relations of the cellular automaton. And the checker would mark those descriptions with a 1, and all others with a 0.
Suppose we pick out one of the histories marked with a 1 and look at it. It seems to contain a description of people who remember experiencing time travel.
Now, were their experiences real? Did we make them real by marking them with a 1 - by applying the logical filter using a causal computer? Even though there was no way of computing future events from past events; even though their universe isn’t a causal universe; even though they will have had experiences that literally were not ‘caused’, that did not have any causal graph behind them, within the framework of their own universe and its rules?
I don’t know. But...
Our own universe does not appear to have Time-Turners, and does appear to have strictly local causality in which each variable can be computed strictly forward-in-time.
And I don’t know why that’s the case; but it’s a likely-looking hint for anyone wondering what sort of universes can be real in the first place.
The collection of hypothetical mathematical thingies that can be described logically (in terms of relational rules with consistent solutions) looks vastly larger than the collection of causal universes with locally determined, acyclically ordered events. Most mathematical objects aren’t like that. When you say, “We live in a causal universe”, a universe that can be computed in-order using local and directional rules of determination, you’re vastly narrowing down the possibilities relative to all of Math-space.
So it’s rather suggestive that we find ourselves in a causal universe rather than a logical universe—it suggests that not all mathematical objects can be real, and that the sort of thingies that can be real and have people in them are constrained to somewhere in the vicinity of ‘causal universes’. Maybe you can’t have consciousness without computing an agent made of causes and effects, or maybe nothing can be real at all unless it’s a fabric of cause and effect. It suggests that if there is a Tegmark Level IV multiverse, it isn’t “all logical universes” but “all causal universes”.
Of course you also have to be a bit careful when you start assuming things like “Only causal things can be real” because it’s so easy for Reality to come back at you and shout “WRONG!” Suppose you thought reality had to be a discrete causal graph, with a finite number of nodes and discrete descendants, exactly like Pearl-standard causal models. There would be no hypothesis in your hypothesis-space to describe the standard model of physics, where space is continuous, indefinitely divisible, and has complex amplitude assignments over uncountable cardinalities of points.
Reality is primary, saith the wise old masters of science. The first rule of science is just to go with what you see, and try to understand it; rather than standing on your assumptions, and trying to argue with reality.
But even so, it’s interesting that the pure, ideal structure of causal models, invented by statisticians to reify the idea of ‘causality’ as simply as possible, looks much more like the modern view of physics than does the old Newtonian ideal.
If you believed in Newtonian billiard balls bouncing around, and somebody asked you what sort of things can be real, you’d probably start talking about ‘objects’, like the billiard balls, and ‘properties’ of the objects, like their location and velocity, and how the location ‘changes’ between one ‘time’ and another, and so on.
But suppose you’d never heard of atoms or velocities or this ‘time’ stuff—just the causal diagrams and causal models invented by statisticians to represent the simplest possible cases of cause and effect. Like this:
And then someone says to you, “Invent a continuous analogue of this.”
You wouldn’t invent billiard balls. There’s no billiard balls in a causal diagram.
You wouldn’t invent a single time sweeping through the universe. There’s no sweeping time in a causal diagram.
You’d stare a bit at B, C, and D which are the sole nodes determining A, screening off the rest of the graph, and say to yourself:
“Okay, how can I invent a continuous analogue of there being three nodes that screen off the rest of the graph? How do I do that with a continuous neighborhood of points, instead of three nodes?”
You’d stare at E determining D determining A, and ask yourself:
“How can I invent a continuous analogue of ‘determination’, so that instead of E determining D determining A, there’s a continuum of determined points between E and A?”
If you generalized in a certain simple and obvious fashion...
The continuum of relatedness from B to C to D would be what we call space.
The continuum of determination from E to D to A would be what we call time.
There would be a rule stating that for epsilon time before A, there’s a neighborhood of spatial points delta which screens off the rest of the universe from being relevant to A (so long as no descendants of A are observed); and that epsilon and delta can both get arbitrarily close to zero.
There might be—if you were just picking the simplest rules you could manage—a physical constant which related the metric of relatedness (space) to the metric of determination (time) and so enforced a simple continuous analogue of local causality...
...in our universe, we call it c, the speed of light.
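The way locality enforces a speed limit can be seen in any rule where each point is determined only by its immediate neighborhood one tick back. A toy 1-D sketch (the specific rule is arbitrary, chosen only for simplicity):

```python
# Toy 1-D local rule: a cell lights up if it or either immediate neighbor
# was lit on the previous tick. Only the rule's locality matters here.

def step(field):
    return [1 if any(field[j] for j in (i - 1, i, i + 1)
                     if 0 <= j < len(field)) else 0
            for i in range(len(field))]

field = [0] * 21
field[10] = 1                              # one disturbance at the center
for t in range(1, 6):
    field = step(field)
    lit = [i for i, v in enumerate(field) if v]
    # After t ticks the disturbance has reached at most t cells away:
    # an exceptionless "speed limit" enforced by locality alone.
    assert all(abs(i - 10) <= t for i in lit)
```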
And it’s worth remembering that Isaac Newton did not expect that rule to be there.
If we just stuck with Special Relativity, and didn’t get any more modern than that, there would still be little billiard balls like electrons, occupying some particular point in that neighborhood of space.
But if your little neighborhoods of space have billiard balls with velocities, many of which are slower than lightspeed… well, that doesn’t look like the simplest continuous analogues of a causal diagram, does it?
When we make the first quantum leap and describe particles as waves, we find that the billiard balls have been eliminated. There’s no ‘particles’ with a single point position and a velocity slower than light. There’s an electron field, and waves propagate through the electron field through points interacting only with locally neighboring points. If a particular electron seems to be moving slower than light, that’s just because—even though causality always propagates at exactly c between points within the electron field—the crest of the electron wave can appear to move slower than that. A billiard ball moving through space over time, has been replaced by a set of points with values determined by their immediate historical neighborhood.
And when we make the second quantum leap into configuration space, we find a timeless universal wavefunction with complex amplitudes assigned over the points in that configuration space, and the amplitude of every point causally determined by its immediate neighborhood in the configuration space.
So, yes, Reality can poke you in the nose if you decide that only discrete causal graphs can be real, or something silly like that.
But on the other hand, taking advice from the math of causality wouldn’t always lead you astray. Modern physics looks a heck of a lot more similar to “Let’s build a continuous analogue of the simplest diagrams statisticians invented to describe theoretical causality”, than like anything Newton or Aristotle imagined by looking at the apparent world of boulders and planets.
I don’t know what it means… but perhaps we shouldn’t ignore the hint we received by virtue of finding ourselves inside the narrow space of “causal universes”—rather than the much wider space “all logical universes”—when it comes to guessing what sort of thingies can be real. To the extent we allow non-causal universes in our hypothesis space, there’s a strong chance that we are broadening our imagination beyond what can really be real under the Actual Rules—whatever they are! (It is possible to broaden your metaphysics too much, as well as too little. For example, you could allow logical contradictions into your hypothesis space—collections of axioms with no models—and ask whether we lived in one of those.)
If we trusted absolutely that only causal universes could be real, then it would be safe to allow only causal universes into our hypothesis space, and assign probability literally zero to everything else.
But if you were scared of being wrong, then assigning probability literally zero means you can’t change your mind, ever, even if Professor McGonagall shows up with a Time-Turner tomorrow.
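That is just Bayes’ theorem at work: the posterior is proportional to prior times likelihood, so a prior of exactly zero is a fixed point that no amount of evidence can move. A minimal sketch for a binary hypothesis (the likelihood numbers are made up purely for illustration):

```python
# Bayes' theorem for a binary hypothesis H given one observation E:
#   P(H|E) = P(H)*P(E|H) / [P(H)*P(E|H) + P(~H)*P(E|~H)]

def posterior(prior, likelihood_if_true, likelihood_if_false):
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# A tiny but nonzero prior can be driven arbitrarily high by strong evidence...
p = 1e-9
for _ in range(5):                 # five independent Time-Turner demonstrations
    p = posterior(p, 0.99, 0.0001)
assert p > 0.99

# ...but a prior of literally zero stays zero forever, no matter the evidence.
q = 0.0
for _ in range(5):
    q = posterior(q, 0.99, 0.0001)
assert q == 0.0
```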
Meditation: Suppose you needed to assign non-zero probability to any way things could conceivably turn out to be, given humanity’s rather young and confused state—enumerate all the hypotheses a superintelligent AI should ever be able to arrive at, based on any sort of strange world it might find by observation of Time-Turners or stranger things. How would you enumerate the hypothesis space of all the worlds we could remotely maybe possibly be living in, including worlds with hypercomputers and Stable Time Loops and even stranger features?
Sometimes I still marvel about how in most time-travel stories nobody thinks of this. I guess it really is true that only people who are sensitized to ‘thinking about existential risk’ even notice when a world ends, or when billions of people are extinguished and replaced by slightly different versions of themselves. But then almost nobody will notice that sort of thing inside their fiction if the characters all act like it’s okay.
Unless you believe in ‘collapse’ interpretations of quantum mechanics, in which case Bell’s Theorem mathematically requires that either your causal models don’t obey the Markov condition or they have faster-than-light nonlocal influences. (Despite a large literature of obscurantist verbiage intended to obscure this fact, generated and consumed by physicists who don’t know about formal definitions of causality or the Markov condition.) If you believe in a collapse postulate, this whole post goes out the window. But frankly, if you believe that, you are bad and you should feel bad.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: “Mixed Reference: The Great Reductionist Project”
Previous post: “Logical Pinpointing”
This happens in the ordinary passage of time anyway. (Stephen King’s story “The Langoliers” plays this for horror—the reason the past no longer exists is because monsters are eating it.)
If your theory of time is 4-dimensionalist, then you might think the past people are ‘still there,’ in some timeless sense, rather than wholly annihilated. Interestingly, you might (especially if you reject determinism) think that moving through time involves killing (possible) futures, rather than (or in addition to) killing the past.
Hard to see why you can’t make a version of this same argument, at an additional remove, in the time travel case. For example, if you are a “determinist” and / or “n-dimensionalist” about the “meta-time” concept in Eliezer’s story, the future people who are lopped off the timeline still exist in the meta-timeless eternity of the “meta-timeline,” just as in your comment the dead still exist in the eternity of the past.
In the (seemingly degenerate) hypothetical where you go back in time and change the future, I’m not sure why we should prefer to say that we “destroy” the “old” future, rather than simply that we disconnect it from our local universe. That might be a horrible thing to do, but then again it might not be. There’s lots of at-least-conceivable stuff that is disconnected from our local universe.
(RobbBB seems to refer to what philosophers call the B-theory of time, whereas CronoDAS seems to refer to the A-theory of time.)
Yes, that seems more consistent with the rest of the sequences (and indeed advocacy of cryonics/timeless identity). “You” are a pattern, not a specific collection of atoms. So if the pattern persists (as per successive moments of time, or destroying and re-creating the pattern), so do “you”.
Sure. At the same time, it’s important to note that this is a ‘you’ by stipulation. The question of how to define self-identity for linguistic purposes (e.g., the scope of pronouns) is independent of the psychological question ‘When do I feel as though something is ‘part of me’?‘, and both of these are independent of the normative question ‘What entities should I act to preserve in the same way that I act to preserve my immediate person?’ It may be that there is no unique principled way to define the self, in which case we should be open to shifting conceptions based on which way of thinking is most useful in a given situation.
This is one of the reasons the idea of my death does not terrify me. The idea of death in general is horrific, but the future I who will die will only be somewhat similar to my present self, differing only in degree from my similarity to other persons. I fear death, not just ‘my’ death.
Sure, but my point is that most of the commentary on this site, or that is predicated on the Sequences, assumes the equivalence of all of those.
“Death” is the absence of a future self that is continuous with your present self. I don’t know exactly what constitutes “continuous” but it clearly is not the identity of individual particles. It may require continuity of causal derivation, for example.
Upload yourself to a computer. You’ve got a copy on the computer, you’ve got a physical body. Kill the physical body a few milliseconds after upload.
Repeat, except now kill the physical body a few milliseconds before the upload.
Do you mean to define the former situation as involving a “Death” because a few milliseconds’ worth of computations were lost, but the latter situation as simply a transfer?
I don’t think the word “death” really applies anymore when we are talking at the level of physical systems, any more than “table” or “chair” would. Those constructs don’t cross over well into (real or imaginary) physics.
Since Eliezer is a temporal reductionist, I think he might not mean “temporally continuous”, but rather “logical/causal continuity” or something similar.
Discrete time travel would also violate temporal continuity, by the way.
(Even the billiard ball model of “classical” chemistry is enough to eliminate “individual particles” as the source of personal identity; you aren’t made of the same atoms you were a year ago, because of eating, respiration, and other biological processes.)
There could be special “mind particles” in your brain and I can’t believe I just said that.
(::shrug:: Well, that does seem logically possible, but it doesn’t seem to be the way our biology works.)
Curses. My poor biology knowledge has betrayed me once again.
I don’t understand why it’s morally wrong to kill people if they’re all simultaneously replaced with marginally different versions of themselves. Sure, they’ve ceased to exist. But without time traveling, you make it so that none of the marginally different versions exist. It seems like some kind of act omission distinction is creeping into your thought processes about time travel.
Moreso, marginally different versions of people are replacing the originals all the time, by the natural physical processes of the universe. If continuity of body is unnecessary for personal identity, why is continuity of their temporal substrate?
Identity is a process, not a physical state. There is a difference between continuity of body, which is physical, and continuity of identity, which is a process. If I replace a hard drive from a running computer, it may still run all of the same processes. The same could be true of processors, or memory. But if I terminate the process, the physical substrate being the same is irrelevant.
I’m not even certain that identity is a process. The process of consciousness shuts down every time we go to sleep, and gets reconstituted from our memories the next time we wake up (with intermittent consciousness-like processes that occur in-between, while we dream).
It seems like the closest thing to “identity” that we have, these days, is a sort of nebulous locus of indistinguishably similar dynamic data structures, regardless of the substrate that is encoding or processing those structures. It seems a rather flimsy thing to hang an “I” on, though.
I’m unclear on your logic; whatever the mechanism, the “cogito” exists (demonstrably to myself, and presumably to yourself). Given this, why is it too flimsy? Why does it matter if there is a complex “nebulous locus” that instantiates it—it’s there, and it works, and conveys, to me, the impression that I am.
The ‘cogito’, as you put it, exists in the sense that dynamic processes certainly have effect on the world, and those processes also tend to generate a sense of identity.
Just because it exists and has effect, though, is no reason to take its suggestions about the nature of that identity seriously.
Example: you probably tend to feel that you make choices from somewhere inside your head, as a response to your environment, rather than that your environment comes together in such a way that you react predictably to it, and coincidentally generate a sense of ‘choice’ as part of that feeling. Most people do this; it causes them to tend to attempt to apply willpower directly to “forcing” themselves to make the “choices” they think will produce the correct outcome, rather than crafting their environment so that they naturally react in such a way to produce that outcome, and coincidentally generate a sense that they “chose” to produce that outcome.
Wu wei wu, and all that.
And this is why we (barely) have checkpointing. If you close your web browser, and launch a saved copy from five minutes ago, is the session a different one?
Because our morality is based on our experiential process. We see ourselves as the same person. Because of this, we want to be protected from violence in the future, even if the future person is not “really” the same as the present me.
Why protect one type of “you” over another type? Your response gives a reason that future people are valuable, but not that those future people are more valuable than other future people.
I’m not protecting anyone over anyone else, I’m protecting someone over not-someone. Someone (ie. non-murdered person) is protected, and the outcome that leads to dead person is avoided.
Experientially, we view “me in 10 seconds” as the same as “me now.” Because of this, the traditional arguments hold, at least to the extent that we believe that our impression of continuous living is not just a neat trick of our mind unconnected to reality. And if we don’t believe this, we fail the rationality test in many more severe ways than not understanding morality. (Why would I not jump off buildings, just because future me will die?)
This ignores that insofar as going back in time kills currently existing people it also revives previously existing ones. You’re ignoring the lives created by time travel.
If you’re defending some form of egoism, maybe time travel is wrong. From a utilitarian standpoint, preferring certain people just because of their causal origins makes no sense.
Where did time travel come from? That’s not part of my argument, or the context of the discussion about why murder is wrong; the time travel argument is just point out what non-causality might take the form of. The fact that murder is wrong is a moral judgement, which means it belongs to the realm of human experience.
If the question is whether changing the time stream is morally wrong because it kills people, the supposition is that we live in a non-causal world, which makes all of the arguments useless, since I’m not interested in defining morality in a universe that I have no reason to believe exists.
If you’re not interested in discussing the ethics of time travel, why did you respond to my comment which said
It seems pretty clear that I was talking about time travel, and your comment could also be interpreted that way.
I think we need to limit the set of morally relevant future versions to versions that would be created without interference, because otherwise we split ourselves too thinly among speculative futures that almost never happen. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one.
“I think we need to arbitrarily limit something. Given that, this specific limit is not arbitrary.”
How is that not equivalent to your argument?
Additionally, please explain more. I don’t understand what you mean by saying that we “split ourselves too thinly”. What is this splitting and why does it invalidate moral systems that do it? Also, overall, isn’t your argument just a reason that considering alternatives to the status quo isn’t moral?
Well, the phrase “split ourselves too thinly among speculative futures that almost never happen” would seem to refer to the fact that we have limited time and processing capacity to think with.
I think it summarizes to “time travel is too improbable and unpredictable to worry about [preserving the interests of yous affected by it]”.
Your argument makes no sense.
Those two sentences do not connect. They actually contradict.
Also, you’re doing moral epistemology backwards, in my view. You’re basically saying, “it would be really convenient if the content of morality were such that we could easily compute it using limited cognitive resources”. That’s an argumentum ad consequentiam, which is a logical fallacy.
You’re probably right about it contradicting. Though, about the moral-epistemology bit, I think there may be a sort of anthropic-bias type argument that creatures can only implement a morality that they can practically compute to begin with.
Your argument is that it is hard and impractical, not that it is impossible, and I think that only the latter is a reasonable constraint on moral considerations; although even then I have some qualms about whether nihilism would be more justified than arbitrary moral limits. I also don’t understand how anthropic arguments might come into play.
It depends. As for universes, so too for individual human beings: Is it moral (in a vacuum — we’re assuming there aren’t indirect harmful consequences) to kill a single individual, provided you replace him a second later with a near-perfect copy? That depends. Could you have made the clone without killing the original? If an individual’s life is good, and you can create a copy of him that will also have a good life, without interfering with the original, then that act of copying may be ethically warranted, and killing either copy may be immoral.
Similarly, if you can make a copy of the whole universe without destroying the original, then, plausibly, it’s just as wicked to destroy the old universe as it would be to destroy it without making a copy. You’re subtracting the same amount of net utility. Of course, this is all assuming that the universe as a whole has positive value.
Regarding universes, there’s a discussion of this in Orson Scott Card’s Pastwatch novel, where future people debate traveling back in time to change the past, realizing that this means essentially eliminating every person presently existing.
Regarding individuals, I once wrote a short story about a scientist who placed his mind into the body of a clone of himself via a destructive process (he scanned his original brain synapse by synapse after slicing it up, then recreated that structure in the clone via electrical stimulation). He was tried for murder of the clone. I hadn’t seen the connection between the two stories until now, though.
We are talking about time travel and so this doesn’t apply. Your comment is nitpicky for no good reason. I obviously recognize that consequentialists believe that more lives are better; I don’t know why you felt an urge to tell me that. Your wording is also unnecessarily pedantic and inefficient.
Not all of them.
Sure. Again, this isn’t relevant and isn’t new information to me. People like Schopenhauer and Benatar might exist, but surely my overall point still stands. The focus on nitpicking is excessive and frustrating. I don’t want to have to invest much time and effort in my comments on this site just to keep people from getting distracted by side issues; I want to get my points across as efficiently as possible and without interruption.
I was thinking more of average utilitarians than antinatalists. (I provisionally agree with average utilitarianism, and think more lives are better instrumentally but not terminally. I’m not confident that I wouldn’t change my mind if I thought this stuff through, though.)
The property you talk about the universe having is an interesting one, but I don’t think causality is the right word for it. You’ve smuggled an extra component into the definition: each node having small fan-in (for some definition of “small”). Call this “locality”. Lack of locality makes causal reasoning harder (sometimes astronomically harder) in some cases, but it does not break causal inference algorithms; it only makes them slower.
The Time-Turner implementation where you enumerate all possible universes and select one that passes the self-consistency test can be represented by a DAG; it’s causal. It’s just that the moment where the time-traveler lands depends on the whole space of later universes. That doesn’t make the graph cyclic; it just has a large fan-in. If the underlying physics is discrete and the range of Time-Turners is limited to six hours, the fan-in isn’t even infinite. And if you blur out irrelevant details, as we usually do when reasoning about physical processes, you can even construct manageable causal graphs of events involving Time-Turner usage, and use them to predict experimental outcomes!
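The enumerate-and-filter construction can be made concrete with a toy model. This is a minimal sketch, where the dynamics, the state space, and the rule that the traveler carries back the final state are all illustrative assumptions of mine: a history counts as self-consistent exactly when the message it sends back equals the message it was seeded with.

```python
# A toy universe: T timesteps, each state a small integer. The next state
# depends on the previous state AND on the "message" a time-traveler
# carries back from the end of history to t=0. (The dynamics and state
# space here are arbitrary illustrative assumptions.)

T = 6
STATES = range(4)

def step(state, msg):
    return (state + msg) % 4          # arbitrary deterministic rule

def run_history(msg):
    state = 0
    history = [state]
    for _ in range(T):
        state = step(state, msg)
        history.append(state)
    return history

def msg_sent_back(history):
    return history[-1]                # the traveler reports the final state

# Enumerate candidate messages; keep the self-consistent ones: histories
# that send back exactly the message they received.
consistent = [m for m in STATES if msg_sent_back(run_history(m)) == m]
print(consistent)                     # → [0]
```

With these particular dynamics only one message survives the consistency filter; other rules can admit several consistent histories, or none at all (a paradox), which is why the enumeration step matters.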
You can imagine universes which violate the small-fan-in criterion in other ways. For example, imagine a Conway’s-Life-like game on an infinite plane, with a special tile type that, in each timestep, copies a randomly selected other cell, with each cell’s probability of being selected falling off with distance. Such cells would also have infinite fan-in, but there would still be a DAG representing the causal structure of that universe. It used to be believed that gravity behaved this way.
I haven’t yet particularly seen anyone else point out that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it. (In fact I hadn’t yet thought of how to do it at the time I wrote Harry’s panic attack in Ch. 14 of HPMOR, though a primary literary goal of that scene was to promise my readers that Harry would not turn out to be living in a computer simulation. I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it, but I’m not sure.)
The requisite behavior of the Time Turner is known as Stable Time Loops on the wiki that will ruin your life, and known as the Novikov self-consistency principle to physicists discussing “closed timelike curve” solutions to General Relativity. Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.
I haven’t yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality. This strikes me as important, so any precedent for it or pointer to related work would be much appreciated.
The relationship between continuous causal diagrams and the modern laws of physics that you described was fascinating. What’s the mainstream status of that?
Showed up in Penrose’s “The Fabric of Reality.” Curvature of spacetime is determined by infinitesimal light cones at each point. You can get a uniquely determined surface from a connection as well as a connection from a surface.
Obviously physicists totally know about causality being restricted to the light cone! And “curvature of space = light cones at each point” isn’t Penrose, it’s standard General Relativity.
Not claiming it’s his own idea, just that it showed up in the book, I assume it’s standard.
David Deutsch, not Roger Penrose. Or wrong title.
I think probably Penrose’s “The Road to Reality” was intended. I don’t think there’s anything in the Deutsch book like “curvature of spacetime is determined by infinitesimal light cones”; I don’t think I’ve read the relevant bits of the Penrose but it seems like exactly the sort of thing that would be in it.
Odd, the last paragraph of the above seems to have gotten chopped. Restored. No, I haven’t particularly heard anyone else point that out but wouldn’t be surprised to find someone had. It’s an important point and I would also like to know if anyone has developed it further.
I found that idea so intriguing I made an account.
Have you considered that such a causal graph can be rearranged while preserving the arrows? I’m inclined to say, for example, that by moving your node E to be on the same level—simultaneous with—B and C, and squishing D into the middle, you’ve done something akin to taking a Lorentz transform?
I would go further and say that the act of choosing a “cut” of a discrete causal graph—and we assume that B, C, and D share some common ancestor, to prevent rearranging things completely—corresponds to the act of choosing a reference frame in Minkowski space. Which makes me wonder if max-flow algorithms have a continuous generalization.
edit: in fact, max-flows might be related to Lagrangians. See this.
Mind officially blown once again. I feel something analogous to how I imagine someone who had been a heroin addict in the OB-bookblogging time period and in methadone treatment during the subsequent non-EY-non-Yvain-LW time period would feel upon shooting up today. Hey Mr. Tambourine Man, play a song for me / In the jingle-jangle morning I’ll come following you.
In computational physics, the notion of self-consistent solutions is ubiquitous. For example, the behaviour of charged particles depends on the electromagnetic fields, and the electromagnetic fields depend on the behaviour of charged particles, and there is no “preferred direction” in this interaction. Not surprisingly, much research has been done on methods of obtaining (approximations of) such self-consistent solutions, notably in plasma physics and quantum chemistry, to give just a couple of examples.
It is true that these examples do not involve time travel, but I expect the mathematics to be quite similar, with the exception that these physics-based examples tend to have (should have) uniquely defined solutions.
Er, I was not claiming to have invented the notion of an equilibrium but thank you for pointing this out.
I didn’t think you were claiming that, I was merely pointing out that the fact that self-consistent solutions can be calculated may not be that surprising.
The Novikov self-consistency principle has already been invented; the question was whether there was precedent for “You can actually compute consistent histories for discrete universes.” Discrete, not continuous.
Yes, hence “In computational physics”, a branch of physics which necessarily deals with discrete approximations of “true” continuous physics. It seems really quite similar; I can even give actual examples of (somewhat exotic) algorithms where information from the future state is used to calculate that same future state, very analogous to your description of a time-travelling game of life.
There are precedents and parallels in Causal Sets and Causal Dynamical Triangulation
CDT is particularly interesting for its ability to predict the correct macroscopic dimensionality of spacetime:
“At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time.”
I was going to reply with something similar. Kevin Knuth in particular has an interesting paper deriving special relativity from causal sets: http://arxiv.org/abs/1005.4172
It replaces the exponential time requirement with an exactly analogous exponential MTBF reliability requirement. I’m surprised by how infrequently this is pointed out in such discussions, since it seems to me rather important.
It’s true that it requires an exponentially small error rate, but that’s cheap, so why emphasize it?
I am not aware of any process, ever, with a demonstrated error rate significantly below that implied by a large, fast computer operating error-free for an extended period of time. If you can’t improve on that, you aren’t getting interesting speed improvements from the time machine, merely moderately useful ones. (In other words, you’re making solvable expensive problems cheap, but you’re not making previously unsolvable problems solvable.)
In cases where building high-reliability hardware is more difficult than normal (for example: high-radiation environments subject to drastic temperature changes and such), the existing experience base is that you can’t cheaply add huge amounts of reliability, because the error detection and correction logic starts to limit the error performance.
Right now, a high performance supercomputer working for a couple weeks can perform ~ 10^21 operations, or about 2^70. If we assume that such a computer has a reliability a billion times better than it has actually demonstrated (which seems like a rather generous assumption to me), that still only leaves you solving 100-bit NP / PSPACE problems. Adding error correction and detection logic might plausibly get you another factor of a billion, maybe two factors of a billion. In other words: it might improve things, but it’s not the indistinguishable-from-magic NP-solving machine some people seem to think it is.
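A quick arithmetic check of the figures above, taking the 10^21 figure and the factor-of-a-billion reliability assumption as given:

```python
import math

# ~1e21 operations is about 2^70, and each extra factor of a billion in
# reliability buys roughly 30 more bits of solvable problem size.
bits = math.log2(1e21)
extra = math.log2(1e9)
print(round(bits), round(bits + extra))   # → 70 100
```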
And fuel requirements too, for similar reasons.
Why do the fuel requirements go up? Where did they come from in the first place?
A time loop amounts to a pocket eternity. How will you power the computer? Drop a sun in there, pick out a brown dwarf. That gives you maybe ten billion years of compute time, which isn’t much.
I was assuming a wormhole-like device with a timelike separation between the entrance and exit. The computer takes a problem statement and an ordering over the solution space, then receives a proposed solution from the time machine. It checks the solution for validity, and if valid sends the same solution into the time machine. If not valid, it sends the lexically following solution back. The computer experiences no additional time in relation to the operator and the rest of the universe, and the only thing that goes through the time machine is a bit string equal to the answer (plus whatever photons or other physical representation is required to store that information).
In other words, exactly the protocol Harry uses in HPMoR.
Is there some reason this protocol is invalid? If so, I don’t believe I’ve seen it discussed in the literature.
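Viewed from outside, the protocol described above reduces to a fixed-point search: the only self-consistent histories are those where the string the machine delivers is one the validator accepts. Here is a minimal sketch; the function names and the toy validity check are mine, for illustration only.

```python
# Sketch of the check-and-increment protocol. The time machine hands the
# computer a candidate; the computer sends back the same candidate if it
# validates, else the lexically next one, wrapping around the finite
# answer space. A self-consistent history is a fixed point of that rule,
# which we can find here by ordinary search instead of actual time travel.

def next_candidate(bits, width):
    return (bits + 1) % (2 ** width)

def protocol_step(bits, valid, width):
    return bits if valid(bits) else next_candidate(bits, width)

def consistent_answers(valid, width):
    return [b for b in range(2 ** width)
            if protocol_step(b, valid, width) == b]

# Toy problem: "find a nontrivial 4-bit factor of 35".
valid = lambda b: 1 < b < 16 and 35 % b == 0
print(consistent_answers(valid, 4))   # → [5, 7]
```

The fixed points are exactly the valid answers; if no candidate validates, the rule has no fixed point at all, which is the code-level version of the malfunction worry: a self-consistent universe simply cannot contain that run.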
Now here’s the panic situation: What happens if the computer experiences a malfunction or bug, such that the validation subroutine fails and always outputs not-valid? If the answer is sent back further in time, can the entire problem be simplified to “We will ask any question we want, get a true answer, and then sometime in the future send those answers back to ourselves?”
If so, all we need do in the present is figure out how to build the receiver for messages from the future: those messages will themselves explain how to build the transmitter.
The wormhole-like approach cannot send a message to a time before both ends of the wormhole are created. I strongly suspect this will be true of any logically consistent time travel device.
And yes, you can get answers to arbitrarily complex questions that way, but as they get difficult, you need to check them with high reliability.
Is it possible to create a wormhole exit without knowing how to do so? If so, how likely is it that there is a wormhole somewhere within listening range?
As for checking the answers, I use the gold standard of reliability: did it work? If it does work, the answer is sent back to the initiating point. If it doesn’t work, send the next answer in the countable answer space back.
If the answer can’t be shown to be in a countable answer space (the countable answer space includes every finite sequence of bits, and therefore is larger than the space of the possible outputs of every Turing Machine), then don’t ask the question. I’m not sure what question you could ask that can’t be answered in a series of bits.
Of course, that means that the first (and probably last) message sent back through time will be some variant of “Do not mess with time.” It would take a ballsy engineer indeed to decide that the proper response to trying the solution “Do not mess with time” is to conclude that it failed and send the message “Do not mess with timf”.
My very limited understanding is that wormholes only make logical sense with two endpoints. They are, quite literally, a topological feature of space that is a hole in the same sense as a donut has a hole. Except that the donut only has a two dimensional surface, unlike spacetime.
My mostly unfounded assumption is that other time traveling schemes are likely to be similar.
How do you plan to answer the question “did it work?” with an error rate lower than, say, 2^-100? What happens if you accidentally hit the wrong button? No one has ever tested a machine of any sort to that standard of reliability, or even terribly close. And even if you did, you still haven’t done well enough to send a 126 bit message, such as “Do not mess with time” with any reliability.
I ask the future how they will did it.
I was going to say “bootstraps don’t work that way”, but since the validation happens on the future end, this might actually work.
Because thermodynamics and Shannon entropy are equivalent, all computationally reversible processes are thermodynamically reversible as well, at least in principle. Thus, you only need to “consume” power when doing a destructive update (i.e., overwriting memory locations), and the minimum amount of energy necessary to do this per bit is known, just like the maximum efficiency of a heat engine is known.
Of course, for a closed timelike loop, the entire process has to return to its start state, which means there is theoretically zero net energy loss (otherwise the loop wouldn’t be stable).
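The per-bit minimum alluded to above is the Landauer limit, kT·ln 2 per bit erased; at room temperature that works out to roughly 3×10⁻²¹ J:

```python
import math

# Landauer limit: erasing one bit costs at least k*T*ln(2) of energy.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300            # room temperature, K
print(f"{k * T * math.log(2):.2e} J per bit erased")   # → 2.87e-21 J per bit erased
```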
Can’t you just receive a packet of data from the future, verify it, then send it back into the past? Wouldn’t that avoid having an eternal computer?
It’s also interesting how few people seem to realize that Scott Aaronson’s time loop logic is basically a form of branching timelines rather than HP’s one consistent universe.
Yeah, this is one of the most profound things I’ve ever read. This is a RIDICULOUSLY good post.
The ‘c is the generalization of locality’ bit looked rather trivial to me. Maybe that’s just EY rubbing off on me, but...
It’s obvious that in Conway’s Game of Life it takes at least 5 iterations for one cell to affect a cell 5 units away, and c has for some time seemed to me like our world’s version of that law.
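That “speed of light” can be demonstrated directly (grid size, seed, and cell positions below are arbitrary choices of mine): since each Life cell’s next state depends only on its 3×3 neighborhood, a single-cell perturbation spreads at most one cell per step, so a cell 5 units away is provably unaffected for the first 4 steps.

```python
import random

N = 16   # toroidal grid, wide enough that wrap-around doesn't shorten the distance

def life_step(grid):
    new = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            n = sum(grid[(y + dy) % N][(x + dx) % N]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return new

random.seed(0)
a = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
b = [row[:] for row in a]
b[2][2] ^= 1                       # flip a single cell

for _ in range(4):                 # through step 4 ...
    a, b = life_step(a), life_step(b)
    assert a[2][7] == b[2][7]      # ... a cell 5 units away is identical
print("cell 5 away unaffected for 4 steps")
```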
It is rarely appreciated that the Novikov self-consistency principle is a trivial consequence of the uniqueness of the metric tensor (up to diffeomorphisms) in GR.
Indeed, given that (a neighborhood of) each spacetime point, even in a spacetime with CTCs, has a unique metric, it also has a unique stress-energy tensor derived from this metric (you need neighborhoods to take derivatives). So there is a unique matter content at each spacetime point. In other words, your grandfather cannot be alternately alive (the first time through the loop) and dead (when you kill him the second time through) at a given moment in space and time.
The unfortunate fact that we can even imagine the grandfather paradox to begin with is due to our intuitive sense that spacetime is only a background for “real events”, a picture as incompatible with GR as perfectly rigid bodies are with SR.
How does the mass-energy of a dead grandfather differ from the mass-energy of a live one?
Pretty drastically. One is decaying in the ground, the other is moving about in search of a mate. Most people have no trouble telling the difference.
The total four-momentum may well be the same in both cases, but the stress-energy-momentum tensor is different (the blood is moving in the live grandfather but not the dead one, etc., etc.)
I’ve seen academic physicists use postselection to simulate closed timelike curves; see for instance this arXiv paper, which compares a postselection procedure to a mathematical formalism for CTCs.
I tend to believe that most fictional characters are living in malicious computer simulations, to satisfy my own pathological desire for consistency. I now believe that Harry is living in an extremely expensive computer simulation.
Also known as Eliezer Yudkowsky’s brain.
I know that the idea of “different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it” shows up in Wolfram’s “A New Kind of Science”, for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.
I’d thought about that a long time previously (not about Time-Turners; this was before I’d heard of Harry Potter). I remember noting that it only really works if multiple transitions are allowed from some states, because otherwise there’s a much higher chance that the consistency constraints would not leave any histories permitted. (“Histories”, because I didn’t know model theory at the time. I was using cellular automata as the example system, though.) (I later concluded that Markov graphical models with weights other than 1 and 0 were a less brittle way to formulate that sort of intuition, although once you start thinking about configuration weights, you notice that you have problems about how to update if different weight schemes would lead to different partition-function values.)
I know we argued briefly at one point about whether Harry could take the existence of his subjective experience as valid anthropic evidence about whether or not he was in a simulation. I think I was trying to make the argument specifically about whether or not Harry could be sure he wasn’t in a simulation of a trial timeline that was going to be ruled inconsistent. (Or, implicitly, a timeline that he might be able to control whether or not it would be ruled inconsistent. Or maybe it was about whether or not he could be sure that there hadn’t been such simulations.) But I don’t remember you agreeing that my position was plausible, and it’s possible that that means I didn’t convey the information about which scenario I was trying to argue about. In that case, you wouldn’t have heard of the idea from me. Or I might have only had enough time to figure out how to halfway defensibly express a lesser idea: that of “trial simulated timelines being iterated until a fixed point”.
You can do some sort of lazy evaluation. I took the example you gave with the 4x4 grid (by the way you have a typo: “we shall take a 3x3 Life grid”), and ran it forwards, and it converges to all empty squares in 4 steps. See this doc for calculations.
Even if it doesn’t converge, you can add another symbol to the system and continue playing the game with it. You can think of the symbol as a function. In my document x = compute_cell(x=2,y=2,t=2)
I don’t totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality (“In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model.”). Fredkin and Wolfram probably also have similar discussions.
I certainly made a remark on LW, very early in HPMoR, along the following lines: If magic, or anything else that seems to operate fundamentally at the level of human-like concepts, turns out to be real, then we should see that as substantial evidence for some kind of simulation/creation hypothesis. So if you find yourself in the role of Harry Potter, you should expect that you’re in a simulation, or in a universe created by gods, or in someone’s dream … or the subject of a book :-).
I don’t think you made any comment on that, so I’ve no idea whether you read it. I expect other people made similar points.
It’s more immediately plausible to hypothesize that certain phenomena and regularities in Harry’s experience are intelligently designed, rather than that the entire universe Harry occupies is. We can make much stronger inferences about intelligences within our universe being similar to us, than about intelligences who created our universe being similar to us, since, being outside our universe/simulation, they would not necessarily exist even in the same kind of logical structure that we do.
I’m not sure how to respond to this; the ability to compute it in a finite fashion for discrete universes seemed trivially obvious to me when I first pondered the problem. It would never have occurred to me to actually write it down as an insight because it seemed like something you’d figure out within five minutes regardless.
“Well, we know there are things that can’t happen because there are paradoxes, so just compute all the ones that can and pick one. It might even be possible to jig things such that the outcome is always well determined, but I’d have to think harder about that.”
That said, this may just be a difference in background. When I was young, I did a lot of thinking about Conway’s Life, and in particular “Garden of Eden” states which have no precursor. Once you consider the possibility of Garden of Eden states and realize that some Life universes have a strict ‘start time’, you automatically start thinking about what other kinds of universes would be restricted. Adding a rule with time travel is just one step farther.
On the other hand, the space/time causal graph generalization is definitely something I didn’t think about and isn’t even something I’d heard vaguely mentioned. That one I’ll have to put some thought into.
1) If we ask whether the entities embedded in strings watched over by the self-consistent universe detector really have experiences, aren’t we violating the anti-zombie principle?
2) If Tegmark possible worlds have measure inverse their algorithmic complexity, and causal universes are much more easily computable than logical ones, should we not then find it not surprising that we are in an (apparently) causal universe even if the UE includes logical ones?
I think that a correct metaphor for computer-simulating another universe is not that we create it, but that we look at it. It already exists somewhere in the multiverse, but previously it was separated from our universe.
If simulating things doesn’t add measure to them, why do you believe you’re not a Boltzmann brain just because lawful versions of you are much more commonly simulated by your universe’s physics?
This is not a full answer (I don’t have one), just a sidenote: Believing to most likely not be a Boltzmann brain does not necessarily mean that Boltzmann brains are less likely. It could also be some kind of a survivor bias.
Imagine that every night when you sleep, someone makes a hundred copies of you. One copy, randomly selected, remains in your bed. The other 99 copies are taken away and killed horribly. This has been happening all your life; you just didn’t know it. What do you expect about tomorrow?
From the outside view, tomorrow the 99 copies of you will be killed, and 1 copy will continue to live. Therefore you should expect to be killed.
But from inside, today’s you is the lucky copy of the lucky copy, because all the unlucky copies are dead. Your whole experience is about surviving, because the unlucky ones don’t have experiences now. So based on your past, you expect to survive the next day. And the next day, 99 copies of you will die, but the remaining 1 will say: “I told you so!”.
So even if the Boltzmann brains are more simulated, and 99.99% of my copies are dying horribly in vacuum within the next seconds, they don’t have a story. The remaining copy does. And the story says: “I am not a Boltzmann brain”.
If you can’t tell the difference, what’s the use of considering that you might be a Boltzmann brain, regardless of how likely it is?
By the way, how precise must be a simulation to add measure? Did I commit genocide by watching Star Wars, or is particle-level simulation necessary?
A possible answer could be that an imprecise simulation adds way less, but still nonzero measure, so my pleasure from watching Star Wars exceeds the suffering of all the people dying in the movie, multiplied by the epsilon increase of their measure. (A variant of a torture vs dust specks argument.) Running a particle-level Star Wars simulation would be a real crime.
This would mean there is no clear boundary between simulating and not simulating, so the ethical concerns about simulation must be solved by weighting how detailed is the simulation versus what benefits do we get by running it.
Sort of discussed here and here.
First, knowing you’re a Boltzmann brain doesn’t give you anything useful. Even if I believed that 90% of my measure were Boltzmann brains, that wouldn’t let me make any useful predictions about the future (because Boltzmann brains have no future). Our past narrative is the only thing we can even try and extract any useful predictions from.
Second, it might be possible to recover “traditional” predictability from vanity. If some observer looks at a creature that implements my behavior, I want that someone to find that creature to make correct predictions about the future. Assuming any finite distribution of probabilities over observers, I expect observers finding me via a causal, coherent, simple simulation to vastly outweigh observers finding me as a Boltzmann brain (since Boltzmann brains are scattered [because there’s no prior reason to anticipate any brain over another] but causal simulations recur in any form of “iterate all possible universes” search, and in a causal simulation, I am much more likely to implement this reasoning). Call it vanity logic—I want to be found to have been correct. I think (intuitively), but am not sure, that given any finite distribution of expectation over observers, I should expect to be observed via a simple simulation with near-certainty. I mean—how would you find a Boltzmann brain? I’m fairly sure any universe that can find me in simulation space is either looking for me specifically—in which case, they’re effectively hostile and should not be surprised at finding that my reasoning failed—or are iterating universes looking for brains, in which case they’ll find vastly more this-reasoning-implementers through causal processes than random ones.
This is a side point, but I’m curious if there is a strong argument for claiming lawful brains are more common (had an argument with some theists on this issue, they used BB to argue against multiverse theories)
I would say: because it seems that (in our universe and those sufficiently similar to count, anyway) the total number of observer-moments experienced by evolved brains should vastly exceed the total number of observer-moments experienced by Boltzmann brains. Evolved brains necessarily exist in large groups, and stick around for absolutely aeons as compared to the near-instantaneous conscious moment of a BB.
The problem is that the count of “similar” universes does not matter; the total count of brains does. It seems a serious enough issue for prominent multiverse theorists to reason backwards and adjust things to avoid the undesirable conclusion: http://www.researchgate.net/publication/1772034_Boltzmann_brains_and_the_scale-factor_cutoff_measure_of_the_multiverse
If they can host brains, they’re “similar” enough for my original intention—I was just excluding “alien worlds”.
I don’t see why the total count of brains matters as such; you are not actually sampling your brain (a complex 4-dimensional object); you are sampling an observer-moment of consciousness. A Boltzmann brain has one such moment; an evolved human brain has (rough back-of-an-envelope calculation, based on a ballpark figure of 25ms for the “quantum” of human conscious experience and a 70-year lifespan) 88.3 x 10^9. Add in the aforementioned requirement for evolved brains to exist in multiplicity wherever they do occur, and the ratio of human moments to Boltzmann moments in a sufficiently large defined volume of (large-scale homogeneous) multiverse gets higher still.
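The back-of-envelope number can be checked in a few lines; the 25 ms moment and 70-year lifespan are just the ballpark assumptions from the comment, not established figures.

```python
# Rough check of the observer-moment ratio claimed above. The 25 ms
# "quantum" of conscious experience and the 70-year lifespan are the
# comment's ballpark assumptions, not established figures.
SECONDS_PER_YEAR = 365 * 24 * 3600

lifespan_seconds = 70 * SECONDS_PER_YEAR
moment_seconds = 0.025                      # one conscious moment, 25 ms

human_moments = lifespan_seconds / moment_seconds  # per evolved brain
boltzmann_moments = 1                              # per Boltzmann brain

print(f"human moments per evolved brain: {human_moments:.3g}")  # ~8.83e10
```

That reproduces the 88.3 x 10^9 figure, before even multiplying by the number of evolved brains per population.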
This is all assuming that a Boltzmann brain actually experiences consciousness at all. Most descriptions of them seem to be along the lines of “matter spontaneously organises such that for an instant it mimics the structure of a conscious brain”. It’s not clear to me, though, that an instantaneous time-slice through a consciousness is itself conscious (for much the same reason that an instant extracted from a physical trajectory lacks the property of movement). If you overcome that by requiring them to exist for a certain minimum amount of time, they obviously become astronomically rarer than they already are.
Seems to me that combining those factors gives a reasonably low expectation for being a Boltzmann brain.
… but I’m only an amateur, this is probably nonsense ;-)
it does add measure, but probably a tiny fraction of its total measure, making it more a matter of “making it slightly more real” than of “creating” it. But that’s semantics.
Edit: and it may very well be the case that other types of “looking at” also add measure, such as accessing a highly optimized/cryptographically obfuscated simulation through a straightforward analog interface.
“Correct” is too strong. It might be a useful metaphor in showing which way the information is flowing, but it doesn’t address the question about the moral worth of the action of running a simulation. Certain computations must have moral worth, for example consider running an uploaded person in a similar setup (so that they can’t observe the outside world, and only use whatever was pre-packaged with them, but can be observed by the simulators). The fact of running this computation appears to be morally relevant, and it’s either better to run the computation or to avoid running it. So similarly with simulating a world, it’s either better to run it or not.
Whether it’s better to simulate a world appears to be dependent on what’s going on inside of it. Any decision that takes place within a world has an impact on the value of each particular simulation of the world, and if there are more simulations, the decision has a greater impact, because it influences the moral value of more simulations. Thus, by deciding to run a simulation, you are amplifying the moral value of the world that you are simulating and of decisions that take place in it, which can be interpreted as being equivalent to increasing its probability mass.
Just how much additional probability mass a simulation provides is unclear, for example a second simulation probably adds less than the first, and the first might matter very little already. It probably depends on how a world is defined in some way.
Why? Seems like the simulated universe gets at least as much additional reality juice as the simulating universe has.
It’s starting to seem like the concept of “probability mass” is violating the “anti-zombie principle”.
Edit: this is why I don’t believe in the “anti-zombie principle”.
We’re not asking if they have experiences; obviously if they exist, they have experiences. Rather we’re asking if their entire universe gains any magical reality-fluid from our universe simulating it (e.g., that mysterious stuff which, in our universe, manifests in proportion to the integrated squared modulus in the Born probabilities) which will then flow into any conscious agents embedded within.
Sadly, my usual toolbox for dissolving questions about consciousness does not seem to yield results on reality-fluid as yet—all thought experiments about “What if I simulate / what if I see...” either don’t vary with the amount of reality-fluid, or presume that the simulating universe exists in the first place.
There are people who claim to be less confused about this than I am. They appear to me to be jumping the gun on what constitutes lack of confusion, and ought to be able to answer questions like e.g. “Would straightforwardly simulating the quantum wavefunction in sufficient detail automatically give rise to sentients experiencing outcomes in proportion to the Born probabilities, i.e., reproduce our current experience?” by something other than e.g. “But people in branches like ours will have utility functions that go by squared modulus” which I consider to be blatantly silly for reasons I may need to talk about further at some point.
I suspect I’m misunderstanding the question, because I notice that I’m not confused, and that’s usually a bad sign when dealing with a question which is supposed to be complicated.
Is this not equivalent to asking “If one were to simulate our entire universe, would it be exactly like ours? Could we use it to predict the future (or at least the possible space of futures) in our own universe with complete accuracy?”
If so, the immediate answer that comes to mind is “yes...why not?”
I’m not convinced “reality fluid” is an improvement over “qualia”.
“Magical reality fluid” highlights the fact that it’s still mysterious, and so seems to be a fairly honest phrasing.
So what would you think of “magical qualia”?
It captures my feelings on the matter pretty well, although it also seems like an unnecessarily rude way of summarizing the opinions of any qualiaphiles I might be debating. Like if a Christian self-deprecatingly said that yes, he believes the reason for akrasia is a magic snake, that seems (a) reasonable (description), whereas if an atheist described a Christian’s positions in those terms she’s just an asshole.
Solipsists should be able to dissolve the whole thing easily.
I don’t feel confused about this at all, and your entire concept of reality fluid looks confused. Keywords here are “look” and “feel”, I don’t have any actual justification and thus despite feeling lots of confidence-the-emotion I probably (hopefully) wouldn’t bet on it.
It sure looks a lot like “reality fluid” is just what extrapolated priors over universes feel like from the inside when they have been excluded from feeling like probabilities for one reason or another.
In response to the actual test, though: it seems that depends on what exactly you mean by “straightforwardly”, as well as on actual physics. There are basically 3 main classes of possibilities: Either something akin to Mangled Worlds automatically falls out of the equations, in which case they do with most types of simulation method. Or that doesn’t happen and you simulate in the forwards direction with a number attached to each point in configuration space (aka, what’d happen automatically if you did it in C++), in which case they don’t. Or you “simulate” it functional-programming style, where history is traced backwards in a more particle-like way from the point you are trying to look at (aka, what would happen automatically if you did it in Haskell), in which case they sort of do, but probably with some bias. In all cases, the “reason” for the simulation’s “realness” turning out like it did is in some sense the same one as for ours. This information probably does not make sense, since it’s a 5-second intuition haphazardly translated from visual metaphor, as well as some other noise source I forgot about.
Oh, and I don’t really know anything about quantum mechanics and there’s probably some catch specific to them that precludes one or more of these alternatives, possibly all of them. I’m fully aware most of what I’m saying is probably nonsense; I’m just hoping it’s surprising nonsense, and ironmanning it might yield something useful.
I downvoted because this seems to be a case of “I don’t know, but I don’t happen to feel confused.” It does not, at least, seem to be “I don’t know, but I don’t feel confused, therefore I know,” which can occasionally happen :D
It’s more of a case of not knowing if I know or not, nor even if I’m confused or not. I do know that thus I’m meta-confused, but that does not necessarily imply object level confusion. It’s a black boxes and lack of introspective access thing.
A way has occurred to me.
Take the basic program described in the beginning of this post, in which the universe is deterministically computed with a cached series of states of the universe. The change is to make this computation parallel on a staggering scale, because of how Time-Turners work. I’m going to explain this like there’s only one wizard with a Time-Turner that works for one hour, but I’m pretty sure it holds up for the complex case of many Turners that can go back a varying amount of time.
A wizard (Marty McFloat, let’s say) carrying a Time-Turner constantly generates a huge number of new copies of the universe that differ slightly from the ‘real’ universe in a way that has gone unobserved because of the anthropic principle. In the ‘main track’ of the universe, nothing interesting happens; a new state is computed from the previous state. Every other copy of the universe is the same except that a rough copy of the wizard has appeared.
Maybe this copy of Marty has a new life vest, or has a bruise, or just has his head turned slightly to the left and one leg tensed. There are a finite but huge number of these variations, including copies where only a single quark (or whatever the smallest computed unit of matter is) is out of place. But in every variation, this copy of Marty’s brain has an extra hour of memories in its head. (More variations!)
Let’s follow one of these, the Marty with a new vest. This is a potential future Marty. You can think of this as the appearance of a Marty who went clothes shopping and used his Time-Turner an hour from now, but it’s not: it’s a variation of the Marty wearing the Time-Turner, not a copy of part of a future computed state. Every possible Marty-variant is generated in a different parallel universe state.
The state of the universe is computed onwards. If, one hour later, Marty does not activate his Time-Turner, the universe fails its consistency check and is deleted. If Marty does activate it, the universe looks back at the Marty that was added an hour ago. If the two Martys are not bit-for-bit (at whatever the lowest scale of the computation is), the universe fails its consistency check and is deleted. If Marty is identical, the ‘younger’ one that activated the Time-Turner is deleted and the universe rolls on in a consistent, causal way.
This system has no metatime. Universe computation never has to go back and restart from a previous state, modified or not. It just requires generating a near-infinite number of parallel branches for every state of the universe and destroying nearly all of them. (Which I guess is quite a lot of world-killing, to understate it.)
The universe is causal and can be modeled with a directed acyclic graph. Just imagine that each state of the universe is a DAG, which may include a new node with no parents (the ‘arriving’ variant wizard), and it’s not one DAG but an incredibly thick stack of DAGs, most of which are discarded. The universe never needs to compute P(8pm|9pm).
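The branch-and-prune scheme can be sketched in a few lines. This is a toy: the “universe” is a single integer advanced by a made-up physics rule, a variant branch is tagged with one guessed “arrived traveler” value instead of a full wizard-copy, and all names are invented for illustration.

```python
HOUR = 3  # ticks between a variant's arrival and its consistency check

def step(state):
    """Toy deterministic physics: the next state from the current one."""
    return state + 1

def simulate(initial, ticks, guesses):
    """Branch-and-prune: every tick, fork one variant branch per guessed
    'arrived traveler'; prune any branch whose guess fails the
    bit-for-bit check when its hour comes due."""
    # Each branch is (state, guess, check_tick); guess is None on the
    # main track and on branches that have passed their check.
    branches = [(initial, None, None)]
    for t in range(1, ticks + 1):
        nxt = []
        for state, guess, check in branches:
            state = step(state)
            if guess is not None and t == check:
                if guess != state:
                    continue  # consistency check failed: delete branch
                guess, check = None, None  # younger copy deleted; roll on
            nxt.append((state, guess, check))
            if guess is None:
                # Spawn variant branches with candidate future travelers.
                nxt += [(state, g, t + HOUR) for g in guesses]
        branches = nxt
    return branches
```

With this toy physics, a variant arriving at tick 1 survives only if its guessed identity equals the real state three ticks later; every mismatched guess gets its whole branch deleted, exactly the “quite a lot of world-killing” described above.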
If I correctly understood the prompt (does having multiple copies of the timeline count as “higher order metatime”, or does that just mean “no rolling the clock back” as in the example?), I think this perverse solution satisfies the constraints of the question I quoted, but I’d love to hear correction.
As a variation, nothing really requires that universes be computed in parallel; as long as the computer has lots of disk space, it could write out the generated parallel universes to disk and continue simulating the current universe until it fails the consistency check or ends, then pick any remaining state from disk and pick up computation from wherever that state left off. This is an in-fiction way of restating that you can trade off space for parallelism in computation, but I’m not entirely certain what “higher-order” precludes, so I wanted to toss it out there as a variation.
Actually, this whole post is an example of the general principle that you can trade off space for time in programs. It just ends up looking really weird when you push this optimization technique to a ridiculous scale. As for who would simulate a universe this way, I would guess some poor sap who was overruled in the design meetings and told to “just make it work”.
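The disk-based variation amounts to a depth-first search: park generated branches on a stack (the “disk”) and simulate one branch at a time until it dies or reaches the end of the run. Again, the one-integer physics, the hard-coded pair of candidate travelers (one consistent, one not), and all names are illustrative only.

```python
HOUR = 3  # ticks between a variant's arrival and its consistency check

def run_one(state, guess, check, t, end, stack):
    """Simulate a single branch; park spawned variants on the stack.
    Returns the final state, or None if the consistency check fails."""
    while t < end:
        t += 1
        state += 1  # toy deterministic physics
        if guess is None:
            # Park one consistent and one inconsistent candidate
            # "arrived traveler" branch for later simulation.
            for g in (state + HOUR, state + 99):
                stack.append((state, g, t + HOUR, t))
        elif t == check:
            if guess != state:
                return None  # check failed: branch deleted
            guess = check = None  # branch becomes an ordinary track
    # Branches whose check falls beyond the horizon simply finish.
    return state

def run_all(initial, end):
    """Drain the stack, collecting final states of surviving branches."""
    stack = [(initial, None, None, 0)]
    finished = []
    while stack:
        state, guess, check, t = stack.pop()
        result = run_one(state, guess, check, t, end, stack)
        if result is not None:
            finished.append(result)
    return finished
```

Same pruning rule as the parallel version, but only one universe is ever “running” at a time; everything else sits serialized on the stack.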
(On a meta note, I’ve been wondering for a few years if anything would prompt me to create a LessWrong account and participate. I read this post from my feed reader this morning, went on to have a day that’s been very busy and meaningful for my personal life. I didn’t think about this post at any point, went to bed, and woke up three hours later with this answer fully-formed except for the one-simulation-at-a-time variation I thought of while typing it up so I can get it out of my head and go back to bed. I guess waking me up with a ridiculous answer to a puzzle I didn’t know I was chewing on will prompt it.)
By whom? The DM? Jokes aside, how does that happen, exactly?
(As a matter of fact, that could be an amusing mechanic to add to games that allow for time travel, though the players would be stuck in a Groundhog Day Loop until they pass the check).
There is a game which does exactly this. It’s called Chronotron, and you play a time-traveling robot who has to create multiple versions of himself via time travel in order to complete each stage. The loop must be closed for all iterations, though, which means that if a future version of himself interacts with a past version of himself in such a way that the past version is prevented from going back in time, you lose the stage and have to start over.
Pretty fun game, although since it’s a Flash game it’s relatively short.
I was continuing on the post’s opening thought experiment of a computed universe; I was thinking whatever program is computing the new states of the universe would do this check. Sorry for the confusion.
It really seems you need to taboo “real” here, and instead ask some related questions such as:
Which types of universes could observe which other types of universe (a universe which can observe you, you can also, obviously, “travel” to)? Which universes could trade, in the broadest senses of the word, with which other universes? What types of creatures in which types of universes are capable of consistently caring about things in what types of universes?
Specifically it seems likely that your usage of “real” in this case refers to “things that humans could possibly, directly or indirectly, in principle care about at all.”, which is the class of universes we must make sure to include in our priors for where we are.
Yep. Gary Drescher in Good and Real makes the point that there’s no inherent difference between the real universe and other mathematically possible universes (essentially Tegmark’s MUH, but put in a more comprehensible (for me at least) way), and “real” is just a deictic, meaning ‘contained in the universe the speaker is in’. (But if we found that the Kolmogorov complexity of this universe is much larger than what would suffice for sentient beings to arise in it, that might mean that there’s something else that makes this universe real other than the fact that we are in it.)
I’m having trouble with this usage of the world “universe”. Can’t you call it “timeline” or “plane” or something?
No, because that’s not at all what I mean. What I mean is much closer to “mathematical structure”, but potentially much wider, up to and including logical impossibilities and the like.
I think my brain just quit on me. I’m asking it to come back and not be scared, but I’ll need some help here. Could you present this in more palatable terms, and in more detail? Examples are good; my brain is much better at inference than deduction.
Being scared shitless is a perfectly normal and justified reaction. Quick examples don’t really work, but I do recommend reading Permutation City, which handles some of these issues, but not the more exotic ones.
Hm. The only thing I know about Perm City is that there’s this guy who keeps building chair legs. Over and over. EY seems to think it’s a sad fate.
But none of those things is what he means by ‘real.’ Else it would be nonsense to ask, ‘Can completely boring, completely unobservable things be real?‘; the answer would be ‘Trivially, they cannot be real.’
What we mean by ‘real’ is similar to what we mean by ‘exists’ and ‘actual’ and ‘territory’ (in the map-territory sense). You could argue that the existential quantifier is too general to be meaningful, that we should make it more anthropocentric, make it mean ‘something we could observe’ or ‘something we could observe or imagine observing’ or similar. But this would simply be a semantic non-starter. As a linguistic fact, that is not what we mean by ‘reality’ or ‘existence.’ We mean the most general category to which instantiated things can belong—a highly disjunctive property, perhaps, but not on that account a meaningless one.
If you pay attention, you’ll notice Eliezer basically has spent a significant portion of this sequence saying precisely that completely boring unobservable things, in a broad enough sense of those words, can’t be real.
You list a bunch of synonyms, of which the “instantiated” at the end is one; by standard rationalist taboo these are not allowed either.
I am not saying they are meaningless; however, like most words, if you want to examine their referent closely rather than conveniently referring to it with a shorthand, you need to be able to taboo them. This does not mean they are not meaningful; like most words they are both useful and meaningful, but only as a compression of a much longer and more awkward expression, and not uniquely indispensable.
Can you give me some examples? They seem absent from this particular post, where Eliezer explicitly allows that boring unobservable things can be real. (If he didn’t allow this, then he couldn’t substantively argue against the likelihood that they actually are real. It makes no sense to give evidence in support of the proposition that P is P.)
I allow that rationality taboo is very useful. But it is also limited. If you go deep enough, eventually all attempts to define or explain our terms will end up being circular, arbitrary, or grounded in an ostensive act. The basic verificationist fallacy is to think that all definitions are ultimately reducible to ostension; or, in its even more hyperbolic form, that they are all immediately reducible to ostension. But obviously our language does not work this way. We learn new terms by their theoretical roles and associations as much as by linking them to specific perceptions. The idea of ‘territory’ (i.e., of ‘reality,’ of ‘the world,’ of ‘what exists’ at its most general) is one of those terms that is best understood in terms of its theoretical role, not in terms of a disjunction of all the observation-statements.
If any of our words are fundamental, ‘reality’ and ‘existence’ almost certainly are. Suppose I asked you to taboo negation. Can you explain to me what negativity is, without appeal to any terms that are themselves in any way negative? Is this necessary for rational discourse involving negations?
examples: pretty much this entire post: http://lesswrong.com/lw/ezu/stuff_that_makes_stuff_happen/
And I disagree about taboos bottoming out: eventually you should reduce words to things that are not words, such as images and equations. If you think “real” can’t be reduced to other words or math, feel free to point at a heap of real things, a heap of non-real things, or even better machine that outputs “1″ when you show it a real thing and “0” when you show it a non-existent thing.
edit: missed last paragraph: “a negative number is one for which there exists a number which is not itself negative that, when added to it, yields zero”. Ugly and probably has a few bugs, but works as a proof of concept thought up in 5 seconds. Or maybe you meant logical negation, in which case I say “1 → 0, 0 → 1”.
Quote a specific line where Eliezer’s words suggest that ‘real’ for him means simply ‘important’ or ‘interesting’ or ‘observable.’
Why? If you understand my words in terms of their relationship to other words, what added value is gained in reducing to an act of mere gesturing? (Also, images and equations are still symbols, so they’re clearly not where ostension should bottom out; the meanings of images and equations require bottoming out, on your view, in something that is not itself meaningful. A better example might be a sense-datum. See Russell’s The Relation of Sense-data to Physics.)
Sorry, I’m not talking about negative numbers like −5. I’m talking about negated propositions, like “2 plus 2 is not equal to five,” or “fire is not cold.” I don’t think negative numbers are conceptually basic, but I think that negation is.
Looks like I linked the wrong post; I meant to link the previous one ( http://lesswrong.com/lw/eva/the_fabric_of_real_things/ ). Quote anyway:
yea, not a very good one, I’m really tired and can’t find a better one at the moment, I remember there were some in there somewhere...
Rationalist taboo is supposed to get around the problems associated with fuzzy human words. There might still be problems with more direct forms of reference in theory, but in practice the word specific ones are usually enough.
“Not” is one of those things that reduce to math. Specifically, the formal system of boolean algebra.
Whether the many-worlds hypothesis is true, false, or meaningless (and I believe it’s meaningless precisely because all branches you’re not on are forever inaccessible/unobservable), the concept of a universe being observable has more potential states than true and false.
Consider our own universe as it’s most widely understood to be. Each person can only observe (past) or affect (future) events within his light cone. All others are forever out of reach. (I know, it may turn out that QM makes this not true, but I’m not going there right now.) Thus you might say that no two people inhabit exactly the same universe, but each his own, though with a lot of overlap.
Time travel, depending on how it works (if it does), may or may not alter this picture much. Robert Forward’s Timemaster gives an example of one possible way that does not require a many-worlds model, but in which time “loops” have the effect of changing the laws of statistics. I especially like this because it provides a way to determine by experiment whether or not the universe does work that way, even though in some uses of the words it abolishes cause and effect.
Before you feel too proud for postdicting the successors of Newtonian dynamics, I’d like to point out that as soon as Newton proposed his theory of gravitation, it was criticised for proposing instantaneous action at a distance.
The only story I’ve seen directly address this issue at all is Homestuck, in which any timeline that splits off from the ‘alpha’ timeline is ‘doomed’ and ceases to exist once it diverges too far from the alpha. The three characters with time traveling capabilities are someone who is extremely careful to avoid creating doomed timelines, one who is nihilistically apathetic about death and creates doomed timelines willy-nilly, and one who is a psychopathic monster bent on using his powers for destruction. Several times, main characters are shown experiencing existential despair over the idea that their own timeline might be a doomed one, and at one point a character with time-traveling capabilities realizes that the only way to prevent the destruction of the universe is to travel back in time, leaving his current timeline doomed. His realization of the implications of dooming that timeline and his efforts to somehow save his timeline’s version of his only surviving friend were particularly poignant (to me, at least).
Isn’t that “hint” just an observer selection effect?
Is it surprising that the correlation between “universes that are absolutely/highly causal” and “universes in which things as complex as conscious observers can be assembled by evolution and come to contemplate the causal nature of their universe” is very high? (The fitness value of intelligence must be at least somewhat proportional to how predictable reality is...)
I worry about this “what sort of thingies can be real” expression. It might be more useful to ask “what sort of thingies can we observe”. The word “real”, except as an indexical, seems vacuous.
It’s true that intelligence wouldn’t do very well in a completely unpredictable universe; but I see no reason why it doesn’t work in something like HPMoR, and there are plenty of such “almost-sane” possibilities.
Wouldn’t HPMoR count as “highly, but not completely, causal”?
Further, somewhat more speculative thought:
A totally causal universe has the potential to have an initial state (including the rules of its time-evolution) that is extremely simple (low Shannon entropy), as compared to a causal-but-with-some-exceptions universe. As Eliezer points out, it also requires vastly less computing power to ‘run’.
It therefore seems perfectly reasonable that universe-simulators working with non-infinite resources would have a strong preference for simulating absolutely causal universes—and that we should therefore not be terribly surprised to find ourselves in one.
It’s true that out of the conceivable indeterministic universes, most do not allow for evolvable high-level intelligence anything like ours. But it’s also true that out of the conceivable universes that do allow for evolvable high-level intelligence like ours, most are not perfectly deterministic. So although the existence of intelligence may be explicable anthropically, I’m not sure the non-existence of Time Turners (and other causality-breaking mechanisms) is. Perfect determinism and complete chaos are not the only two options.
This is a rather confused use of some linguistic terminology. I think “a subject, a verb, and an object” is probably what was intended. (It’s worth noting that in academic syntax these terms are somewhat deprecated and don’t necessarily have useful meanings. I think the casual meanings are still clear enough in informal contexts like this though.)
Beyond the terminology issue, I’m unconvinced by the actual claim here. Arguments from linguistic usage often turn out to be very bad on scrutiny, and I’m not sure this one holds up too well. What about ‘Quirrell secretly followed Harry.’? Seems like a much weaker assertion that Quirrell is causally affecting Harry in some way here. I expect there are more obvious examples—that one took me 10 seconds to come up with.
Quirrell is not causally affecting Harry, but Harry is causally affecting Quirrell.
I’m not saying that your point is necessarily wrong, just that your counterexample isn’t really counter.
What about ‘Quirrell resembles Harry.’?
Resemblance is evaluated in someone’s brain, and causality is very much involved in that evaluation process.
There are plenty of sentences that have a noun, a verb, and a subject without having an agent—anything in passive voice or any unaccusative will do the trick. I suspect the argument would be even better worded using semantic roles rather than syntactic categories, e.g.: “Causality exists when there is an event with an agent”. This isn’t a very interesting thing to say though, because “agent” is a causal semantic role and so relies on causality existing by definition. You literally cannot have an event with an agent unless there is causality.
Yes, agreed. Semantic roles make the claim much more valid (but also less interesting, it seems to me).
Yep. With lots of transitive verbs, the (syntactic) direct object is that which undergoes a change (the patient) and the subject is that which causes it (the agent) -- but not with all of them.
And that’s before you even stray outside the Anglo-centric perspective and consider ergative-absolutive oppositions...
BTW, I wonder whether (all other things being equal) speakers of ergative-absolutive languages tend to exhibit more consequentialist-like thinking and speakers of nominative-accusative more deontological-like thinking… Has anybody tested that?
I wonder if testing bilinguals would be the way to go on this, to mitigate a few confounds at least. You could present moral statements for evaluation in each of the languages and see if you got any kind of effect according to which language the statement was presented in.
As a bilingual person myself (English/Afrikaans, though my Afrikaans is comparatively poor), I have to say that I’d probably treat moral statements in the different languages by mentally translating the Afrikaans to English and then deciding on the basis of the translation. However, here phrasing becomes important.
Consider, for example, the following two statements:
It is wrong to kill
It is wrong to commit murder
Are these two equally true? In the first case, legal execution of a convicted criminal is included, in the second case it is excluded. Such subtle differences in phrasing could very easily turn up between the two languages, as often a word in one language merely has a close approximation in the other (and not a direct translation).
Yes, they are—inasmuch as two false things are each zero true. What they aren’t is equivalent. If you hadn’t included the absolute modifier “always”, then it could perhaps make sense to evaluate “degree of truth”.
You are correct; I have edited the grandparent to remove the word “always” from both statements.
Yeah, it’s entirely possible that some effect like that would confound everything too much. Bilinguals with close to equal proficiency in both languages might be less inclined to do some sort of mental translation, though. (Still, the whole idea comes perilously close to wanting people to “think in” a particular one of their languages, which in my opinion doesn’t necessarily make sense at all.)
That’s a really interesting question. I’ve never heard of any research on it.
How about “Harry suspected Quirrel”?
That’s “Quirrel caused suspicion in Harry’s mind”, or perhaps “Harry’s model of Quirrel caused suspicion to be generated in Harry”.
The causality isn’t what you would expect from the syntax, going from subject to object, and it isn’t implied by the syntax at all; it’s in the semantics. Consider “Harry winked out of existence for no reason”.
That implies the existence of some X and Y in the sentence “Harry suspected Quirrel of X because Y”, e.g., “Harry suspected Quirrel of secretly being some variety of uplifted rodent because Harry had suffered organic brain damage that impaired his ability to think rationally.” As long as Harry is (being modeled as) subject to causal influences, such a sentence can’t escape implying causes.
The alternate way of computing this is to not actually discard the future, but to split it off to a separate timeline so that you now have two simulations: one that proceeds normally aside for the time-traveler having disappeared from the world, and one that’s been restarted from an earlier date with the addition of the time traveler. Of course, this has its own moral dilemmas as well—such as the fact that you’re as good as dead for your loved ones in the timeline that you just left—but generally smaller than erasing a universe entirely.
You could get around this by forking the time traveler with the universe: in the source universe it would simply appear that the attempted time travel didn’t work.
That would create a new problem, though: you’d never see anyone leave a timeline, but every attempt would result in the creation of a new one with a copy of the traveler added at the destination time. A persistent traveler could generate any number of timelines differing only by the number of failed time travel attempts made before the successful one.
Short jumps (like the 1-hour one in the example) look more like erasing a bit of everyone’s memories, anyway. At least if you buy Egan’s model.
Or maybe also another one, somewhat related to the main post—let the universe compute, in its own meta-time, a fixed point of reality (that is, the whole of time between the start and the destination of the time travel gets recomputed into a form that allows it to be internally consistent) and continue from there. You could imagine the universe-computer causally simulating the same period of time again and again until a fixed point is reached, just like the iterative algorithms used to find fixed points of functions.
 - http://en.wikipedia.org/wiki/Fixed_point_(mathematics)
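The “re-simulate until nothing changes” idea can be sketched in a few lines (a toy illustration of fixed-point iteration, not a claim about how such a universe-computer would actually work; the function names and the example update rule are made up):

```python
def find_fixed_point(step, state, max_iters=1000):
    """Repeatedly apply `step` until the state stops changing.

    Mirrors the idea above: re-run the same stretch of history until
    a pass leaves it unchanged -- i.e. step(state) == state, a fixed
    point, which is exactly the self-consistency condition.
    """
    for _ in range(max_iters):
        next_state = step(state)
        if next_state == state:
            return state  # self-consistent history found
        state = next_state
    raise RuntimeError("no fixed point reached within max_iters")

# Toy update rule: each pass nudges the 'history' toward consistency,
# and 10 is the self-consistent value (step(10) == 10).
clamp_at_ten = lambda x: min(x + 1, 10)
print(find_fixed_point(clamp_at_ten, 0))  # -> 10
```

Nothing guarantees convergence in general, of course—some update rules cycle forever, which is the computational analogue of a genuine paradox.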
As good as dead, until you jump back, right? As long as you jump back to a point after you originally jumped from.
That would imply that any changes you made in the past never had any effect on the time you had come from. Which is certainly logically consistent, but not the way most time travel stories work.
(Apologies for length...)
I doubt this is as relevant as it seems to me, but there is a time-travel strategy game called Temporal: http://www.kaldobsky.com/audiogames/ (it’s toward the bottom of the page, and the main audience is visually impaired, hence the limited visual design).
The idea is that it is supposed to work similar to time turners, and the easiest way to lose the game is not by getting shot or crushed in security doors, but by losing track of previous instances of yourself and bumping into them to ruin the consistency of the timeline.
Of course, the developer didn’t get to the end of the game he had in mind, mostly because the final stage was supposed to be a conflict with an opponent who could also travel through time. I wound up trying to recreate it with a different engine (with the original developer’s permission), and got stuck at about the same point.
I also was able to create a paradox that didn’t trigger game over (in the original, not my reconstruction, though it works in mine as well). There is a part where you need to get an armed guard to shoot another guard, but nothing is stopping you from then going back in time and killing the armed guard before he could shoot the other… and this does not interfere with anything else you did that relied on the other guard being dead. It seems patchable, but still...
The developer’s strategy for the time-traveling boss AI, inasmuch as he told me, was to calculate where it could be within so many ticks, predict where it would move to, and have “future” instances spawn there. This doesn’t sound like it could take into account your actions (only how far you could travel spatially within x ticks), and doesn’t account for the fact that the only limits on your abilities are that you can’t travel back to before you last woke up, or later than has already occurred naturally. Oh, and it does prevent the sort of past/future interactions we see in HPMoR, or with the Patronus in Prisoner of Azkaban. So you strictly avoid observing your future selves, while future you can observe all previous instances of you, provided the universe remains consistent.
So I suppose the difference here is that the time-traveler from the future is the one who experiences the results of the time travel. Past you has to rescue future you before future you needs rescuing, but future you can do nothing for past you. So it’s what Time-Turners would look like if the guidelines from the Ministry of Magic were strictly followed.
I might try to compute PoA-type events by considering all timetravel-capable individuals, or individuals likely to become capable of time travel within the limits of the ability, then calculating how they are most likely to react to a situation given foreknowledge… at which point this would be the outcome, and that individual would be required to have that outcome happen, or break temporal consistency. So if I knew of a time traveler in the vicinity of a life-threatening situation, and knew that said entity would try to prevent it if given the chance, I would calculate what they would be most likely to do, and make it happen. So in the case of Temporal, if I were, say, trapped in the presence of several armed guards that I did not believe I could escape, I might have the game try to calculate ways that a future instance could come to the rescue, and have it generate an instance to do just that, but then throw game over if you fail to make it happen.
This doesn’t strike me as complete, but I kinda want to try it.
This reminds me of a game/mod (called “Prometheus” IIRC) where you had to complete objectives within a fixed amount of time in a manner impossible to do alone, with guards to kill and multiple switches to press at the same time to open a door… but all you had for help was yourself, in five copies.
The game would basically let you play the first copy, end, play the second copy while the first played out what you had done previously, then the third while the first two kept repeating what you recorded, and so on, and then you could go back and re-do earlier copies to account for new actions taken by the later copies, all culminating in one big five-minute match between You^5 VS (Causally) Impossible Level. Think Portal 2 coop but with time-traveling copies of yourself.
Perhaps some of the inspiration came from those?
This in turn reminds me of a wonderful platformer, Company of Myself.
Ambitious. Keep working on it.
Characters in the novel Pastwatch by Orson Scott Card wrestle with this issue.
Re: I totally noticed that “Flight of the Navigator” is a story about a kidnapped, returned boy who forges a new relationship with his older parents and formerly younger, now older brother, and a cute nurse at the government facility, and then kills them all.
To say understanding this spoiled the story for me is an understatement. That movie has more dead people than Star Wars. It’s a fricken’ tragedy.
It’s okay. In the new timeline, the nurse went on to be a sex columnist.
Um. Doesn’t Star Wars (I take it we’re talking about the movie otherwise known as “Episode IV” rather than the whole series) more or less begin with the destruction of an entire planet? And … is it actually clear that the only way to implement time travel is the one Eliezer describes, and that it’s best described as killing everyone involved? It doesn’t look that way to me.
But I haven’t seen Flight of the Navigator so maybe there are details that nail things down more.
The Star Wars series is about the tragic destruction of one planet and two Death Stars, and the childish bickering that caused it.
Flight of the Navigator ends the timeline. It destroys every planet, every star, every wandering spaceship billions of light years into the dark, total universal omnicide. And a reboot into a new timeline from a previously existing history.
Upvoted for “omnicide”.
Why would the old timeline deserve to exist more than the new one?
Tentative answer: It was presumably more of a collaborative process than a timeline chosen by one person.
Any opinions about Asimov’s The End of Eternity? An organization attempts to optimize the timeline. The book might be a good starter for discussion of CEV, terminal values, and such.
Has anyone read Garfinkle’s *All of an Instant*? I’ve never finished it, but the premise is that time travel is a biological capacity, and when many people discover it, chaos ensues.
Suppose I destroy the timeline, and create an identical one. Have I committed a moral evil? No, because nothing has been lost.
Suppose I destroy the timeline, and restart from an earlier point. Have I committed a moral evil? Very much yes. What was lost? To give only one person’s example from Flight of the Navigator out of a planet of billions, out of a whole universe, the younger brother who was left behind had spent years—of personal growth, of creating value and memories—helping his parents with their quixotic search. And then bonding with the new younger “older” brother, rejoicing with his parents, marvelling at the space ship. And then he was erased.
These experiences aren’t undone. They are stopped. There is a difference. Something happy that happens, and then is over, still counts as a happy thing.
You destroy valuable lives. You also create valuable lives. If creating things has as much value as maintaining them does, then the act of creative destruction is morally neutral. Since the only reasons that I can think of why maintaining lives might matter are also reasons that the existence of life is a good thing, I think that maintenance and creation are morally equal.
Suppose I have the opportunity to end literally all suffering in the universe, and choose not to.
I don’t disagree that things are lost. But on the other hand, there are things that the new timeline has that the old timeline didn’t as well. In the new timeline, the younger brother also has experiences that his counterpart in the old timeline did not.
By choosing not to destroy the timeline to create a new one, you deny the new-timeline younger brother his experiences, as well as everyone else in the new universe. Either way, something is lost. It seems that the only reason to treat the original universe as special is status-quo bias.
And it does it on a routine basis. After all, most of the critters are returned to the moment they were taken.
My amateur reading of QED: The Strange Theory of Light and Matter left me with the impression that the universe we live in has self-consistent time travel. Summing over histories involves summing over histories in which particles go back in time.
For example, on page 97, the caption to Figure 63 says
Over the page
I vaguely assumed that the reason we don’t observe macroscopic time travel drops out of the principle of stationary phase. All the lumps of high amplitude arise from paths such that minor deviations don’t really change the phase, allowing a bunch of similar paths to add coherently. But try to travel back in time and you create a loop. Pull the loop a little tighter and the phase changes a lot. Loops never have stationary phase and the amplitudes of similar paths fail to add coherently, averaging out to pretty well zero.
Several mathematicians I know (and, I would guess, a sizable population of physicists as well) regard Feynman sums-over-histories as mathematical abstractions only. From this perspective they don’t describe processes that are actually happening out-there-in-the-world, they’re just mathematically convenient and maybe also intuitively useful. (I haven’t thought about whether or how this position can be reconciled with what I think is the standard LW position on many-worlds.)
My limited impression of physics is that there is a tendency for mathematically convenient but “not real” descriptions to turn out to be either subtly inaccurate, or to actually correspond to something real. For example, negative frequency photons seem to have some element of reality to them, along with the quantum wave function and virtual particles. I assign some non-trivial probability weight to “either sums over histories are inaccurate descriptions of what happens, or they correspond to something that acts a lot like a real thing”, even when knowledgeable physicists say they aren’t a real thing.
Me too, but almost all of it would be concentrated at “sums over histories are inaccurate descriptions of what happens.” Sums-over-histories are conceptually unsatisfying to me in that they use the classical concept of a history in order to describe quantum phenomena. My vague intuition is that a truer theory of physics would be more “inherently quantum.”
Wait, why not? If people can be encoded as bit strings—which is the prerequisite for any kind of a Singularity—then what’s the difference between a bit string that I obtained by scanning a person, and a completely identical bit string that I just happened to randomly generate?
You make a surprisingly convincing argument for people not being real.
Depends what you mean by “people”, and what you mean by “real”, really.
I could apply the same argument to rocks, or stars, or any other physical object. They can be encoded as bit strings, too—well, at least hypothetically speaking.
I suppose the difference is knowing to put the number into your bit string interpreter. Whether that be a computer program or the physical universe.
It’s kind of like the arguments for “you can’t copyright a number”. Well sure, but when you stick .mp3 on the end it isn’t just a number any more—it now tells you that you should interpret it.
Agreed, but then, I still disagree with Eliezer when he says that when you generate 2^N possible bitstrings of size N, “you wouldn’t expect this procedure to generate any people or make any experiences real”. If I can generate all these strings in the first place, I could just as easily feed each one to my person-emulator, to see which of them are valid person-strings. Then I could emulate these people just as I emulate meat-based people whose brains I’d scanned.
It’s not too hard to write Eliezer’s 2^48 (possibly invalid) games of non-causal-Life to disk; but does that make any of them real? As real as the one in the article?
I am having trouble figuring out what the word “real” means when applied to the game of Life. I do know, however, that if my Life game client had a “load game” function, then it would accept any valid string of bits, regardless of where they came from—a previously saved game, or a random number generator.
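For concreteness, here is a minimal Life step function (a sketch; the set-of-live-cells representation is just one common way to write it). The point is that the update rule consults only the current state, so it is indifferent to whether that state came from a saved game or a random number generator:

```python
from collections import Counter

def life_step(alive):
    """Advance Conway's Game of Life one generation.

    `alive` is a set of (x, y) live cells. The rule looks only at
    this state -- its provenance (saved game vs. random bits) is
    invisible to the update.
    """
    # Count, for every cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A 'blinker' oscillates with period 2, whatever produced the grid:
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(life_step(blinker)) == blinker)  # -> True
```

Feeding it a randomly generated set of cells works exactly as well as feeding it a loaded save; the function has no way to tell the difference.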
Obligatory JRPG references ho!
In Chrono Cross, n fvtavsvpnag cybg gjvfg vagebqhprf guvf pbaprcg naq ersenzrf gur tnzr nf orvat nobhg svkvat gur qvfnfgre pnhfrq ol Puebab Gevttre’f cebgntbavfgf’ abg guvaxvat bs vg.
In Star Ocean: The Last Hope (in one of the only good bits of a largely terrible game), gur cebgntbavfg nppvqragnyyl qrfgeblf na nygreangr cnfg-Rnegu va n fvzvyne jnl, naq npghnyyl ernpgf gb guvf nccebcevngryl ol orvat pehfurq jvgu qrcerffvba naq thvyg. Tnzref, bs pbhefr, ungrq guvf naq pnyyrq uvz “rzb”.
I’m uncertain as to how to translate your cipher; any help would be appreciated.
This is incredibly late, because I have technical issues with forums I’m not familiar with, but thank you for the information all the same.
I’d like to report that images are currently broken. Their URLs have an unnecessary “mediawiki” section in the paths like so:
Current URL: http://wiki.lesswrong.com/mediawiki/images/4/41/WaveCause.jpg
Correct URL: https://wiki.lesswrong.com/images/4/41/WaveCause.jpg
You just made the universe that much less fun for me. ;)
Indeed! Why would we ever read mainstream philosophy papers, if not to marvel at all the panic?
I’m… really shocked to hear this from you, so maybe I’m missing something:
Yes, you’re destroying Universe A, but also creating Universe B. Given that “B” will not-exist if we don’t travel, and “A” will not-exist if we DO travel, it seems morally neutral to make such an exchange—either way there is an equal set of people-who-won’t-exist. It’s only a bad thing if you have some reason to favor the status-quo of “A exists”, or if you’re concerned about the consent of the billions of people whose lives you alter (in which case you should be equally concerned about getting their consent before killing evil villains, fixing the environment, or creating FAI, neh?)
Once you’re viewing it as an otherwise-equal exchange, it’s just a matter of the specifics of those universes. It’s generally given in time travel stories that, at least from the protagonist’s view, “B” has a higher expected utility than “A”, so it would seem that time travel is the right choice.
If we use phrases like “extinguished the world”, then people will get bothered, because most people view that as a “bad thing”, and then people would choose “A” instead, so it seems like a useful policy (in a world with time travel) to not really draw attention to this.
Values are not up for grabs. If they turn out to be asymmetrical and inelegant (like, for example, really caring more about people not getting killed than people getting born) then, well, they are asymmetrical and inelegant. Maybe the distinction between not-killing and creating is incoherent but I haven’t yet seen an argument trying to demonstrate that without appeals to philosophical parsimony.
If you time travel, “Universe A” doesn’t exist. If you don’t, then “Universe B” doesn’t exist.
They’re BOTH universes which fail to exist if you chose the other one. No one dies—there’s just a universe that doesn’t exist because you didn’t choose it.
If you time-travel, Universe A still existed once, and, contrary to the preferences of the people there, was then extinguished. The preferences of the people in not-yet-existent meta-future Universe B don’t matter to me yet, because they may never exist.
Once Universe B is created, and if there was some way to restore Universe A, it’d be then that the preferences of the residents of the two universes (past and present) would weigh equally to my mind, having been equally real.
Universe A still used-to-exist, it just doesn’t-exist-in-the-future. Universe B did NOT used-to-exist, and it will continue to not-exist-in-the-future unless you choose it.
In other words, both universes don’t-exist-in-the-future if you don’t choose them.
I suppose I’m lost on why one would equate “Universe A ceases to exist going forward” with “Universe A is destroyed”. It feels like a really weird variant of the sunk cost fallacy, since Universe B failing to exist going forward isn’t a big deal.
I can see arguments about time travel being complex, it’s hard to predict the results, etc., but all else being equal it seems baffling to insist on A over B just because A happened to exist in the past.
My morality has a significant “status quo bias” in this sense. I don’t feel bad about not bringing into being people who don’t currently exist, which is why I’m not on a long-term crusade to increase the population as much as possible. Meanwhile I do feel bad about ending the existence of people who do exist, even if it’s quick and painless.
More generally, I care about the process by which we get to some world-state, not just the desirability of the world-state. Even if B is better than A, getting from A to B requires a lot of deaths.
If you could push a button and avert nuclear war, saving billions, would you?
Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?
Either way, you’re choosing between two alternate timelines. I’m failing to grasp how the “cause” of the choice being time travel changes one’s valuations of the outcomes.
Because if time travel works by destroying universes, it causes many more deaths than it averts. To be explicit about assumptions, if our universe is being simulated on someone’s computer I think it’s immoral for the simulator to discard the current state of the simulation and restart it from a modified version of a past saved state, because this is tantamount to killing everyone in the current state.
[A qualification: erasing, say, the last 150 years is at least as bad as killing billions of humans, since there’s essentially zero chance that the people alive today will still exist in the new timeline. But the badness of reverting and overwriting the last N seconds of the universe probably tends to zero as N tends to zero.]
But the cost of destroying this universe has to be weighed against the benefit of creating the new universe. Choosing not to create a universe is, in utilitarian terms, no more morally justifiable than choosing to destroy one.
That seems to be exactly the principle that is under dispute.
So is the argument that we should give up utilitarianism? (If so, what should replace it?) Or is there some argument someone has in mind for why annihilation has a special disutility of its own, even when it is a necessary precondition for a slight resultant increase in utility (accompanying a mass creation)?
I compute utility as a function of the entire future history of the universe and not just its state at a given time. I don’t see why this can’t fall under the umbrella of “utilitarianism.” Anyway, if your utility function doesn’t do this, how do you decide at what time to compute utility? Are you optimizing the expected value of the state of the universe 10 years from now? 10,000? 10^100? Just optimize all of it.
I’m not disputing that we should factor in the lost utility from the future-that-would-have-been. I’m merely pointing out that we have to weigh that lost utility against the gained utility from the future-created-by-retrocausation. Choosing to go back in time means destroying one future, and creating another. But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree? If we weigh the future just as strongly as the present, why should we not also weigh a different timeline’s future just as strongly as our own timeline’s future, given that we can pick which timeline will obtain?
The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn’t want to die, anywhere in the future history of the universe. [To be clear, by “future history of the universe” I mean everything that ever gets simulated by the simulator’s computer, if our universe is a simulation.]
That’s the negative utility I’m weighing against whatever utility we gain by time traveling. My moral calculus is balancing
[Future in which 1 billion die by nuclear war, plus 10^20 years (say) of human history afterwards] vs. [Future in which 6 billion die by being erased from disk, plus 10^20 years (say) of human history afterwards].
I could be persuaded to favor the second option only if the expected value of the 10^20 years of future human history are significantly better on the right side. But the expected value of that difference would have to outweigh 5 billion deaths.
Yes, I disagree. Have you dedicated your life to having as many children as possible? I haven’t, because I feel zero moral obligation toward children who don’t exist, and feel zero guilt about “destroying” their nonexistent future.
I would feel obliged to have as many children as possible, if I thought that having more children would increase everyone’s total well-being. Obviously, it’s not that simple; the quality of life of each child has to be considered, including the effects of being in a large family on each child. But I stick by my utilitarian guns. My felt moral obligation is to make the world a better place, including factoring in possible, potential, future, etc. welfares; my felt obligation is not just to make the things that already exist better off in their future occurrences.
Both of our basic ways of thinking about ethics have counter-intuitive consequences. A counter-intuitive consequence of my view is that it’s no worse to annihilate a universe on a whim than it is to choose not to create a universe on a whim. I am in a strong sense a consequentialist, in that I consider utility to be about what outcomes end up obtaining and not to care a whole lot about active vs. passive harm.
Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn’t be obvious that this has much moral relevance.
Your view also requires a third metaphysically tenuous assumption: that the future of my timeline has some sort of timeless metaphysical reality, and specifically a timeless metaphysical reality that other possible timelines lack. My view requires no such assumptions, since the relevant calculation can be performed in the same way even if all that ever exists is a succession of present moments, with no reification of the future or past or of any alternate timeline. Finally, my view also doesn’t require assuming that there is some sort of essence each person possesses that allows his/her identity to persist over time; as far as I’m concerned, the universe might consist of the total annihilation of everything, followed by its near-identical re-creation from scratch at the next Planck time. Reality may be a perpetual replacement, rather than a continuous temporal ‘flow;’ the world would look the same either way. Learning that we live in a replacement-world would be a metaphysically interesting footnote, but I find it hard to accept that it would change the ethical calculus in any way.
Suppose we occupy Timeline A, and we’re deciding whether to replace it with Timeline B. My calculation is:
What is the net experiential well-being of Timeline A’s future?
What is the net experiential well-being of Timeline B’s future?
If 1 is greater than 2, time travel is unwarranted. But it’s not unwarranted in that case because the denizens of Timeline B don’t matter. It’s unwarranted because choosing not to create Timeline B prevents less net well-being than does choosing to destroy Timeline A.
I think that we can generate an ethical system that fits The_Duck’s intuitions quite well without having to use any of those concepts. All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.
Note that this is not “average utilitarianism.” Average utilitarianism is an example of one extraordinarily bad attempt to mathematize this basic moral principle that fails due to the various unpleasant ways one can manipulate the average. Having a high average isn’t valuable in itself, it’s only valuable if it reflects that there is a smaller population of people with high individual utility.
This does not need any concept of agency. If someone dies and is replaced by a new person with equal or slightly higher levels of utility that is worse than if they had not died, regardless of whether the person died of natural causes or was killed by a thinking being. In the time travel scenario it does not matter whether one future is destroyed and replaced by a time traveler, or some kind of naturally occurring time storm, both are equally bad.
It does not need a clear-cut category of life vs. death. We can simply establish a continuum of undesirable changes that can happen to a mind, with the-thing-people-commonly-call-death being one of the most undesirable of all.
This continuum eliminates the need for this essence of identity you think is required as well. A person’s identity is simply the part of their utility function that ranks the desirability of changes to their mind. (As a general rule, nearly any concept that humans care about a lot that seems incoherent in a reductionist framework can easily be steelmanned into something coherent).
As for the metaphysical assumption about time, I thought that was built into the way time travel was described in the thought experiment. We are supposed to think of time travel as restoring a simulation of the universe from a save point. That means that there is one “real” timeline that was actually simulated and others that were not, and won’t be unless the save state is loaded.
Personally, I find this moral principle persuasive. The idea that all that matters is the total amount of utility is based on Parfit’s analysis of the Non-Identity Problem, and in my view that analysis is deeply flawed. It is trivially easy to construct variations of the Non-Identity Problem where it is morally better to have the child with lower utility. I think that “All that matters is total utility” was the wrong conclusion to draw from the problem.
This is underspecified. It has an obvious conclusion if every single person in the small population has more utility than every single person in the large population, but you don’t really specify what to conclude when the small population has people with varying levels of utility, some of which are smaller than those of the people in the large population. And there’s really nothing you can specify for that which won’t give you the same problem as average utilitarianism or some other well-studied form of utilitarianism.
The most obvious solution would be to regard the addition of more low-utility people as a negative, and then compare whether the badness of adding those people (and the badness of decreasing the utility of the best-off) outweighs the goodness of increasing the utility of the lowest-utility people. Of course, in order for a small population consisting of a mix of high- and very-low-utility people to be better than a large population of somewhat-low-utility people, the percentage of the population who have low utility would have to be very small.
The really bad conclusion of average utilitarianism (what Michael Huemer called the “Hell Conclusion”) that this approach avoids is the idea that if the average utility is very negative, it would be good to add someone with negative total lifetime utility to the population, as long as their utility level was not as low as the average. This is, in my view, a decisive argument against the classic form of average utilitarianism.
This approach does not avoid another common criticism of average utilitarianism, Arrhenius’ Sadistic Conclusion (the idea that it might be less bad to add one person of negative welfare than a huge amount of people with very low but positive welfare), but I do not consider this a problem. Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
When you phrase the Sadistic Conclusion in this fashion it is obviously correct. Everyone accepts it. People harm themselves in order to avoid creating new life every time they spend money on contraceptives instead of on candy, or practice abstinence instead of having sex. The reason that the Sadistic Conclusion seems persuasive in its original form is that it concentrates all the disutilities into one person, which invokes the same sort of scope insensitivity as Torture vs. Dust Specks.
I don’t even think that maximizing utility is the main reason we create people anyway. If I had a choice between having two children, one a sociopath with a utility level slightly above the current average, and the other a normal human with a utility level slightly below the current average, I’d pick the normal human. I’d do so even after controlling for the disutilities the sociopath would inflict on others. I think a more plausible reason is to “perpetuate the values of the human race,” or something like that.
I don’t think that’s a correct form of the Sadistic Conclusion.
The problem is that the original version phrases it using a comparison of two population states. You can’t rephrase a comparison of two states in terms of creating new life, because people generally have different beliefs about creating lives and comparing states that include those same lives.
(Sometimes we say “if you add lives with this level of utility to this state, then...” but that’s really just a shorthand for comparing the state without those lives to the state with those lives—it’s not really about creating the lives.)
I’m not sure I understand you. In a consequentialist framework, if something makes the world better you should do it, and if it makes it worse you shouldn’t do it. Are you suggesting that the act of creating people can add or subtract value in some way, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”? What the heck would be the point of comparing the goodness of differently sized populations if you couldn’t use those comparisons to inform future reproductive decisions?
The original phrasing of the SC, quoted from the Stanford Encyclopedia of Philosophy, is: “For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.” So it is implicitly discussing adding people.
I can rephrase my iteration of the SC to avoid mentioning the act of creation if you want. “A world with a small high-utility population can sometimes be better than a world where there are some additional low-utility people, and a few of the high-utility people are slightly better off.” I would argue that the fact that people harm themselves by use of various forms of birth control proves that they implicitly accept this form of the SC.
A modified version that includes creating life is probably acceptable to someone without scope insensitivity. Simply add up all the disutility billions of people suffer from using birth control, then imagine a different world where all that disutility is compensated for in some fashion, plus there exists one additional person with a utility of −0.1. It seems to me that such a world is better than a world where those people don’t use birth control and have tons of unwanted children.
There are many people who are horribly crippled, but do not commit suicide and would not, if asked, prefer suicide. Yet intentionally creating a person who is so crippled would be wrong.
Not when phrased that way. But you can say “A world containing more people is better than one which doesn’t, but it’s still morally wrong to create those extra people.” This is because you are not comparing the same things each time.
A) A world containing extra people (with a history of those people having been created)
B) A world not containing those extra people (with a history of those people having been created)
C) A world not containing those extra people (without a history of those people having been created)
“A world containing more people is better than one which doesn’t” compares A to B.
“but it’s still morally wrong to create those extra people” is a comparison of A to C.
Okay, I think I get where our source of disagreement is. I usually think about population in a timeless sense when considering problems like this. So once someone is created they always count as part of the population, even after they die.
Thinking in this timeless framework allows me to avoid a major pitfall of average utilitarianism, namely the idea that you can raise the moral value of a population by killing its unhappiest members.
So in my moral framework (B) is not coherent. If those people were created at any point the world can be said to contain them, even if they’re dead now.
Considering timelessly, should it not also disprove helping the least happy, because they will always have been sad?
That raises another question—do we count average utility by people, or by duration? Is utility averaged over persons, or person-hours? In such a case, how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because the experience is a relatively small slice of their average utility, or treat both the long-lived and short-lived equally, as if both of their hours were of equal value?
We should count by people. We should add up all the utility we predict each person will experience over their whole lifetime, and then divide by the number of people there are.
If we don’t do this we get weird suggestions like (as you said) we should be more willing to harm the long-lived.
Also, we need to add another patch: If the average utility is highly negative (say −50) it is not good to add a miserable person with a horrible life that is slightly above the average (say a person with a utility of −45). That will technically raise the average, but is still obviously bad. Only adding people with positive lifetime utility is good (and not always even then), adding someone with negative utility is always bad.
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I’ve done things that make me moderately sad because they will later make me extremely happy.
In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with −10 utilons. The average utility is then 20 utilons.
Suppose I help the sad person. I endure −5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
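The arithmetic above can be put in a toy sketch (the function name is illustrative, not anything from the thread; this assumes the timeless, per-person averaging described earlier):

```python
def average_utility(utilities):
    """Average lifetime utility across all persons ever created
    (the timeless, count-by-people framework discussed above)."""
    return sum(utilities) / len(utilities)

# Before helping: I have 50 utilons, the sad person has -10.
before = average_utility([50, -10])

# I endure -5 utilons of sadness to give the sad person +20.
after = average_utility([50 - 5, -10 + 20])

print(before, after)  # 20.0 27.5
```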
But then you kill sad people to get “neutral happiness” …
If someone’s entire future will contain nothing but negative utility they aren’t just “sad.” They’re living a life so tortured and horrible that they would literally wish they were dead.
Your mental picture of that situation is wrong, you shouldn’t be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do.
Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much much better than killing them.
Possibly I was placing the zero point between positive and negative higher than you. I don’t see sadness as merely a low positive but a negative. But then I’m not using averages anyway, so I guess that may cover the difference between us.
I definitely consider the experience of sadness a negative. But just because someone is having something negative happen to them at the moment does not mean their entire utility at the moment is negative.
To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful, it is an experience I consider negative and I want it to stop. But I don’t leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.
This is especially relevant if you consider that humans value many other things than emotional states. To name a fairly mundane instance, I’ve sometimes watched bad movies I did not enjoy, and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, I knew I would not enjoy them ahead of time, but I watched them anyway because that is what I wanted to do.
To be honest, I’m not even sure if it’s meaningful to try to measure someone’s exact utility at the moment, out of relation to their whole life. It seems like there are lots of instances where the exact time of a utility and disutility are hard to place.
For instance, imagine a museum employee who spends the last years of their life restoring paintings, so that people can enjoy them in the future. Shortly after they die, vandals destroy the paintings. This has certainly made the deceased museum employee’s life worse, it retroactively made their efforts futile. But was the disutility inflicted after their death? Was the act of restoring the paintings a disutility that they mistakenly believed was a utility?
It’s meaningful to say “this is good for someone” or “this is bad for someone,” but I don’t think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.
I agree. That does seem to be a key point in the disagreement.
There doesn’t seem to be an obvious way to compute the relevant utility function segments of the participants involved.
OTOH “destroy the universe” is not a maxim one would wish to become universal law. Nor is it virtuous. It’s clearly against the rights of those involved. Etc. Utilitarianism seems to be performing particularly badly here. The more I read about it, the worse it gets.
I probably would, but the choice is very different. I happen to know what did happen, including all the things that didn’t happen. By changing that I am abandoning the guarantee that something at least as good as the status quo occurs. Most critically, I risk things like delaying a nuclear war such that a war occurs a decade later with superior technology and so leads to an extinction outcome.
Why do you think that death is bad? Perhaps that would clarify this conversation. I personally can’t think of a reason that death is bad except that it precludes having good experiences in life. Nonexistence does the exact same thing. So I think that they’re rationally morally identical.
Of course, if you’re using a naturalist based intuitionist approach to morality, then you can recognize that it’s illogical that you value existing persons more than potential ones and yet still accept that those existing people really do have greater moral weight, simply because of the way you’re built. This is roughly what I believe, and why I don’t push very hard for large population increases.
I think perhaps that ‘Killing is bad’ might be a better phrasing.
I would be more specific, and say that ‘killing someone without their consent is always immoral’ as well as ‘bringing a person capable of consenting into existence without their consent is always immoral’. I haven’t figured out how someone who doesn’t exist could grant consent, but it’s there for completeness.
Of course, if you want to play that time travel is killing people, I’ll point out that normal time naturally results in omnicide every Planck time, and the creation of a new set of people that exist. You’re not killing people, but simply selecting a different set of people that will exist next Planck time.
That’s a hell of a thing to take as axiomatic. Taken one way, it seems to define birth as immoral; taken another, it allows the creation of potentially sapient self-organizing systems with arbitrary properties as long as they start out subsapient, which I doubt is what you’re looking for.
Neither of those people are capable of consenting or refusing consent to being brought into being.
The axiom, by the way, is “Interactions between sentient beings should be mutually consensual.”
I guess we’re looking at interpretation 2, then. The main problem I see with that is that for most sapient systems, it’s possible to imagine a subsapient system capable of organizing itself into a similar class of being, and it doesn’t seem especially consistent for a set of morals to prohibit creating the former outright and remain silent on the latter.
Imagine for example a sapient missile guidance system. Your moral framework seems to prohibit creating such a thing outright, which I can see reasoning for—but it doesn’t seem to prohibit creating a slightly nerfed version of the same software that predictably becomes sapient once certain criteria are met. If you’d say that’s tantamount to creating a sapient being, then fine—but I don’t see any obvious difference in kind between that and creating a human child, aside from predicted use.
What’s wrong with creating a sapient missile guidance system? What’s the advantage of a sapient guidance system over a mere computer?
Given the existence of a sapient missile, it becomes impermissible to launch that missile without the consent of the missile. Just like it is impermissible to launch a spaceship without the permission of a human pilot...
Consider instead of time traveling from time T’ to T, that you were given a choice at time T which of the universes you would prefer: A or B. If B was better you would clearly pick it. Now consider someone gave you the choice instead between B and “B plus A until time T’ when it gets destroyed”. If A is by itself a better universe than nothing, surely having A around for a short while is better than not having A around at all. So “B plus A until time T’ when it gets destroyed” is better than B which in turn is better than A. So if you want your preferences to be transitive you should prefer the scenario where you destroy A at time T’ by time traveling to B.
There are two weaknesses in the above: perhaps A is better than oblivion, but A between the times T and T’ is really horrible (i.e., it is better in the long term but has negative value in the short term). Then you wouldn’t prefer having A around for a while over not having it at all. But this is a very exceptional scenario, not the “world goes on as usual but you go back and change something for the better” case that we seem to be discussing.
Another way this can fail is if you don’t think that saying you have both universes B and A (for a while) around is meaningful. I agree that it is not obvious what this would actually mean, since existence of universes is not something that’s measurable inside said universes. You would need to invent some kind of meta-time and meta-universe, kind of like the simulation scenario EY was describing in the main article. But if you are uncomfortable with this you should be equally uncomfortable with saying that A used to exist but now doesn’t, since this is also a statement about universes which only makes sense if we posit some kind of meta-time outside of the universes.
Suppose you needed to assign non-zero probability to any way things could conceivably turn out to be, given humanity’s rather young and confused state—enumerate all the hypotheses a superintelligent AI should ever be able to arrive at, based on any sort of strange world it might find by observation of Time-Turners or stranger things. How would you enumerate the hypothesis space of all the coherently-thinkable worlds we could remotely maybe possibly be living in, including worlds with Stable Time Loops and even stranger features?
Hmmm. Causal universes are a bit like integers; there’s an infinite number of them, but they pale compared to the number of numbers as a whole.
Mostly-causal universes with some time-travel elements are more like rational numbers; there’s more than we’re ever going to use, and it looks at first like it covers all possibilities except for a few strange outliers, like pi or the square root of two.
But there’s vastly, vastly more irrational numbers than rational numbers; to the point where, if you had to pick a truly random number, it would almost certainly be irrational. Yet, aside from a few special cases (such as pi), irrational numbers are hardly even considered, never mind used; we try to approximate the universe in terms of rational numbers only. (Though a rational number can be arbitrarily close to any given number).
Irrational numbers are also uncountable, and I imagine that I’ll end up in similar trouble trying to enumerate all the universes that could exist, given “Stable Time Loops and even stranger features”.
Given that, there’s only one reasonable way to handle the situation; I need to assign some probability to “stranger things” without being able to describe, or to know, what those stranger things are.
The possibilities that I can consider include:
- Physics as we know it is entirely and absolutely correct (v. low probability)
- Physics as we know it is an extremely good approximation to reality (reasonable probability)
- The real laws of the universe are understandable by human minds (surprisingly high probability)
- Stranger Things (added to the three options above, adds up to 100%)

- The universe is entirely causal (fairly low probability)
- The universe is almost entirely causal, with one or more rare and esoteric acausal features (substantially higher probability, maybe four or five times as high as the above option)
- The local causality observed is merely a statistical fluke in a mostly acausal universe (extremely low probability)
- Stranger Things (whatever probability remains)
The reason why the second is higher than the first is simply that there are so many more possible universes in which the second would be true (but not the first) and in which the observations made to date would nonetheless hold. The problem with these categorisations is that, in every case, the highest probability seems to be reserved for Stranger Things...
Rationals and integers are both countable! This is one of my favorite not-often-taught-in-elementary-schools but easily-explainable-to-elementary-school-students math facts. And they, the rationals, make a pretty tree: http://mathlesstraveled.com/2008/01/07/recounting-the-rationals-part-ii-fractions-grow-on-trees/
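A minimal sketch of that enumeration, assuming the tree in the linked article is the Calkin–Wilf tree: each fraction a/b has children a/(a+b) and (a+b)/b, and a breadth-first walk from 1/1 visits every positive rational exactly once.

```python
from collections import deque
from fractions import Fraction

def calkin_wilf(n):
    """Return the first n positive rationals from a breadth-first
    walk of the Calkin-Wilf tree rooted at 1/1.  Each a/b has
    children a/(a+b) and (a+b)/b; every positive rational appears
    exactly once, which is one way to see the rationals are countable."""
    queue = deque([(1, 1)])
    out = []
    while len(out) < n:
        a, b = queue.popleft()
        out.append(Fraction(a, b))
        queue.append((a, a + b))
        queue.append((a + b, b))
    return out

print(calkin_wilf(7))  # [1, 1/2, 2, 1/3, 3/2, 2/3, 3]
```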
That’s one of my favorite mathematical constructions! Also see Ford circles.
If this universe contains agents who engage in acausal trade, does that make it partially acausal?
Nope. It’s just a terrible name.
I almost went with that answer, and didn’t ask. But then I thought about trade with future agents who have different resources and values than we do—resources and values which will be heavily influenced by what we do today. The structure seems to be at least as similar as self-consistent solutions in plasma physics.
Agents can make choices that enforce global logical constraints, using computational devices that run on local causality.
Thanks, I feel like I grok this answer: There may be higher order acausal structures in the universe, but they run on a causal substrate.
By ‘acausal trade’, do you mean:
Trading based on a present expectation of the future (such as trading in pork futures)
Trading based on data from the actual future
The first is causal (but does not preclude the possibility of the universe containing other acausal effects), the second is acausal.
A start is to choose some language for writing down axiom lists for formal systems, and a measure on strings in that language.
Löwenheim-Skolem is going to give you trouble, unless “coherently-thinkable” is meant as a substantive restriction. You might be able to enumerate finitely-axiomatisable models, up to isomorphism, up to aleph-w, if you limit yourself to k-categorical theories, for k < aleph-w, though. Then you could use Will’s strategy and enumerate axioms.
Edit: I realised I’m being pointlessly obscure.
The upward Löwenheim-Skolem theorem means that, for every set of axioms in your list, you’ll have multiple (non-isomorphic) models.
You might avoid this if “coherently thinkable” was taken to mean “of small cardinality”.
If you didn’t enjoy this restriction, you could, for any given set of axioms, enumerate the k-categorical models of that set of axioms—or at least enumerate the models whose cardinality can be expressed as 2^2^...2^w, for some finite number of 2′s. This is because k-categoricity means you’ll only have one model of each cardinality, up to isomorphism.
So then you just enumerate all the possible countable combinations of axioms, and you have an enumeration of all countably axiomatisable, k-categorical, models.
I don’t think it’s unfair to put some restrictions on the universes you want to describe. Sure, reality could be arbitrarily weird—but if the universe cannot even be approximated within a number of bits much larger than the number of neurons (or even atoms, quarks, whatever), “rationality” has lost anyway.
(The obvious counterexample is that previous generations would have considered different classes of universes unthinkable in this fashion.)
Why? If the universe has features that our current computers can’t approximate, maybe we could use those features to build better computers.
Enumerate mathematical objects by representing them in a description language and enumerating all strings. Look for structures that are in some sense indistinguishable from “you” (taboo “you”, and solve a few philosophical problems along the way). There’s your set of possible universes. Distribute probability in some way.
Bayesian inference falls out by aggregating sets of possible worlds, and talking about total probability.
In the same stroke with which you solve the “you”-identification problem, solve the value-identification problem so that you can distribute utility over possible worlds, too. Exercising the logical power to actually observe the worlds that involve you on a close enough level will involve some funky shit where you end up determining/observing your entire future utility-maximizing policy/plan. This will involve crazy recursion and turning this whole thing inside-out, and novel work in math on programs deducing their own output. (see TDT, UDT, and whatever solves their problems)
Approximating this thing will be next to impossible, but we have an existence proof by example (humans), so get to it. (We don’t have proof that lawful recursion is possible, though, if I understand correctly.)
Our current half-assed version of the inference thing (Solomonoff Induction) uses Turing Machines (ick) as the description language, and P’ = 2^(-L), where L is the length of the strings describing the universes (that’s an improper prior, but renorm handles that quick).
We have proofs that P’ = 1 does not work (no free lunch (or is that not the right one here...)), and we can pack all of our degrees of freedom into the design of the description language if we choose the length prior. (Or is that almost all? Proof, anyone?)
This leaves just the design of the description language. Computable programming languages seem OK, but all have unjustified inductive bias. Basically we have to figure out which one is a close approximation for our prior. Turing machines don’t seem particularly privileged in this respect.
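A toy version of that 2^(-L) length prior, assuming a binary description language: truncate at some maximum length and renormalize, since untruncated each length class sums to 1 and the total diverges (the "improper prior" noted above).

```python
from itertools import product

def truncated_length_prior(max_len):
    """Toy 2^(-L) length prior over binary description strings,
    truncated at max_len and renormalized.  Untruncated, the 2^L
    strings of length L each get weight 2^(-L), so every length
    class sums to 1 and the prior is improper; renormalization
    over the truncated set fixes that."""
    weights = {}
    for L in range(1, max_len + 1):
        for bits in product("01", repeat=L):
            weights["".join(bits)] = 2.0 ** (-L)
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

prior = truncated_length_prior(3)
# Shorter descriptions get more probability per string:
print(prior["0"], prior["00"], prior["000"])
```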
EDIT: Downvotes? WTF? Can we please have a norm that people can speculate freely in meditation threads without being downvoted? At least point out flaws… If it’s not about logical flaws, I don’t know what it is, and the downvote carries very nearly no information.
“Non-zero probability” doesn’t seem like quite the right word. If a parameter describing the way things could conceivably turn out to be can take, say, arbitrary real values, then we really want “non-zero probability density.” (It’s mathematically impossible to assign non-zero probability to each of uncountably many disjoint hypotheses because they can’t add to 1.)
The first answer that occurred to me was “enumerate all Turing machines” but I’m worried because it seems pretty straightforward to coherently think up a universe that can’t be described by a Turing machine (either because Turing machines aren’t capable of doing computations with infinite-precision real numbers or because they can’t solve the halting problem). More generally I’m worried that “coherently-thinkable” implies “not necessarily describable using math,” and that would make me sad.
I think you can get around that by defining “describe” to mean “for some tolerance t greater than zero, simulate with accuracy within t”. Since computable numbers are dense in the reals, for any t > 0 there will always be a Turing machine that can do the job.
The halting problem is insuperable, though. Universes with initial conditions or dynamics that depend on, e.g., Chaitin’s constant are coherently thinkable but not computable.
What about a universe with really mean laws of physics, like gravity that acts in reverse on particles whose masses aren’t computable numbers?
How is that different than “within accuracy t, these particles have those computable masses, but gravity acts backwards on them”?
The intention of my example was that you couldn’t tell for a given particle which direction gravity went.
Wouldn’t you just need one additional bit of information for each particle as an initial condition to make this computable again?
I don’t think your first point solves the problem. If the universe is exponentially sensitive to initial conditions, then even arbitrarily small inaccuracies in initial conditions make any simulation exponentially worse with time.
The function exp(x - K) grows exponentially in x, but is nevertheless really, really small for any x << K. Unbounded resources for computing means that the analogue of K may be made as large as necessary to satisfy any fixed tolerance t.
For a fixed amount of time. What if you wanted to simulate a universe that runs forever?
Yes, for a fixed amount of time. I should have made that explicit in my definition of “describe”: for some tolerance t greater than zero, simulate results at time T with accuracy within t. Then for any t > 0 and any T there will always be a Turing machine that can do the job.
this is my first time approaching a meditation, and I’ve actually only now decided to de-lurk and interact with the website.
One way to enumerate them would be, as CCC has just pointed out, with real numbers, where irrationality denotes acausal worlds and rationality denotes causal worlds.
This however doesn’t leave space for Stranger Things; I suppose we could use the alphabet for that. If, however, and like I think, you mean enumerate as “order in which the simulation for universes can be run” then all universes would have a natural number assigned to them, and they could be arranged in order of complexity; this would mean our own universe would be fairly early in the numbering, if causal universes are indeed simpler than acausal ones, if I’ve understood things correctly.
This would mean we’d have a big gradient of “universes which I can run with a program” followed by a gradient of “universes which I can find by sifting through all possible states with an algorithm” and weirder stuff elsewhere (it’s weird; thus it’s magic, and I don’t know how it works; thus it can be simpler or more complex because I don’t know how it works).
In the end, the difference between causal and acausal universes is that one asks you only the starting state, while the other discriminates between all states and binds them together.
AAAAANNNNNNNND I’ve lost sight of the original question. Dammit.
It would be nice if there was some topology where the causal worlds were dense in the acausal ones.
Why would that be nice?
Unfortunately, this strikes me as unlikely.
Yes, and I forgot to put it in.
Wait, causal worlds are dense IN acausal ones?
Is that a typo, and you meant “causal worlds were denser than acausal ones” or did I just lose a whole swath of conversation?
I mean that the class of causal worlds would be dense in the class of all worlds, where “worlds” consists of both causal and acausal worlds. The same way we understand a lot of things in functional analysis: prove the result for the countable case, prove that taking compactifications/completions preserves the property, and then you have it for all separable spaces.
Well, I admit that I had originally considered that Stranger Things would most likely be either causal or acausal; I can’t really imagine anything that’s neither, given that the words are direct opposites.
In the case of a Stranger Thing that’s strange enough that I can’t imagine it, we could always fall back on the non-real complex numbers, which are neither rational nor irrational (though, despite intuition, there are no more of them than there are reals).
Well, I appear to be somewhat confused. Here is the logic that I’m using so far:
1: A hypothesis space can contain mathematical constants,
2: Those mathematical constants can be irrational numbers,
3: The hypothesis space allows those mathematical constants to set to any irrational number,
4: And the set of irrational numbers cannot be enumerated.
5: A list of hypothesis spaces is impossible to enumerate.
So if I assume 5 is incorrect (and that it is possible to enumerate the list) I seem to either have put together something logically invalid or one of my premises is wrong. I would suspect it is premise 3 because it seems to be a bit less justifiable than the others.
On the other hand, it’s possible premise 3 is correct, my logic is valid, and this is a rhetorical question where the answer is intended to be “That’s impossible to enumerate.”
I think the reason that I am confused is likely because I’m having a hard time figuring out where to proceed from here.
If you ever plan on talking about your hypothesis, you need to be able to describe it in a language with a finite alphabet (such as English or a programming language). There are only countably many things you can say in a language with a finite alphabet, so there are only countably many hypotheses you can even talk about (unambiguously).
This means that if there are constants floating around which can have arbitrary real values, then you can’t talk about all but countably many of those values. (What you can do instead is, for example, specify them to arbitrary but finite precision.)
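A sketch of why the describable hypotheses are countable: list every finite string over a finite alphabet in order of length, and each possible description gets a natural-number index.

```python
from itertools import count, product

def enumerate_strings(alphabet):
    """Generate every finite string over a finite alphabet, shortest
    first.  Each string appears exactly once, so the set of all
    possible descriptions (hence of unambiguously statable
    hypotheses) is countable."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = enumerate_strings("ab")
first = [next(gen) for _ in range(7)]
print(first)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```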
Only if you live in a universe where you’re limited to writing finitely many symbols in finite space and time.
If I lived in a universe without that limitation, then it seems like I could potentially entertain uncountably many disjoint hypotheses about something, all of which I could potentially write down and potentially distinguish from one another. But I wouldn’t be able to assign more than countably many of them nonzero probability (because otherwise they couldn’t add to 1) as long as I stuck to real numbers. So it seems like I would have to revisit that particular hypothesis in Cox’s theorem…
It looks like you’re right, but let’s not give up there. How could we parametrize the hypothesis space, given that the parameters may be real numbers (or maybe even higher precision than that).
Well, I suppose, starting with the assumption that my superintelligent AI is merely Turing-complete, I think that we can only say our AI has “hypotheses about the world” if it has a computable model of the world. Even if the world weren’t computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation. Stable time loops seem computable through enumeration, as you show in the post.
Now, if you claim that my assumption that the AI is computable is flawed, well then I give up. I truly have no idea how to program an AI more powerful than turing complete.
Suppose the AI lives in a universe with Turing oracles. Give it one.
Again, what distinguishes a “Turing oracle” from a finite oracle with a bound well above the realizable size of a computer in the universe? They are indistinguishable hypotheses. Giving a Turing-complete AI a Turing oracle doesn’t make it capable of understanding anything more than Turing-complete models. The Turing-transcendent part must be an integral part of the AI for it to have non-Turing-complete hypotheses about the universe, and I have no idea what a Turing-transcendent language looks like and even less of an idea of how to program in it.
Suppose the AI lives in a universe where infinitely many computations can be performed in finite time...
(I’m being mildly facetious here, but in the interest of casting the “coherently-thinkable” net widely.)
I don’t see how this changes the possible sense-data our AI could expect. Again, what’s the difference between infinitely many computations being performed in finite time and only the computations numbered up to a point too large for the AI to query being calculated?
If you can give me an example of a universe for which the closest turing machine model will not give indistinguishable sense-data to the AI, then perhaps this conversation can progress.
Well, for starters, an AI living in a universe where infinitely many computations can be performed in finite time can verify the responses a Turing oracle gives it. So it can determine that it lives in a universe with Turing oracles (in fact it can itself be a Turing oracle), which is not what an AI living in this universe would determine (as far as I know).
As mentioned below, you’d need to make infinitely many queries to the Turing oracle. But even if you could, that wouldn’t make a difference.
Again, even if there were a module to do infinitely many computations, the code I wrote still couldn’t tell the difference between that being the case and the module being a really good computable approximation of one. It all comes back to the fact that I am programming my AI on a Turing-complete computer. Unless I somehow (personally) develop the skills to program trans-Turing-complete computers, whatever I program can only comprehend things that are Turing computable. I am sitting down to write the AI right now, so regardless of what I discover in the future, I can’t program my Turing-complete AI to understand anything beyond that. I’d have to program a trans-Turing-complete computer now if I ever hoped for it to understand anything beyond Turing completeness in the future.
Ah, I see. I think we were answering different questions. (I had this feeling earlier but couldn’t pin down why.) I read the original question as being something like “what kind of hypotheses should a hypothetical AI hypothetically entertain” whereas I think you read the original question as being more like “what kind of hypotheses can you currently program an AI to entertain.” Does this sound right?
Yes, I agree. I can imagine some reasoning being conceiving of things that are trans-Turing-complete, but I don’t see how I could make an AI do so.
I was reading a LessWrong post and found this paragraph, which lines up with what I was trying to say.
I don’t think that’s different, unless it can also make infinitely many queries of the Turing oracle in finite time. Or make one query of a program of infinite length. In any case, I think it needs to perform infinite communication with the oracle.
I’ll grant that it seems likely that a universe with infinite computation capability will also have infinite communication capability using the same primitives, but I don’t think it’s a logical requirement.
Yes, let’s replace “computations” with “actions,” I guess.
The hypotheses that should interest an AI are not necessarily limited to those it can compute, but to those it could test. A hypothesis is useless if it does not tell us something about how the world looks when it’s true as opposed to when it’s false. So if there is a way for the AI to interact with the world such that it expects different probabilities of outcomes depending on whether the (possibly uncomputable) hypothesis holds, then it is something worth having a symbol for, even if the exact dynamics of this universe cannot be computed.
Let’s consider the case of our AI encountering a Turing Oracle. Two possible hypotheses of the AI could be: A = this is in fact a Turing Oracle, and for every program P it will output either the time until halting, or 0 if it never halts; and B = this is not a Turing Oracle but some computable machine Q. The AI could feed the supposed oracle a number of programs, and if it was told any of them would halt, it could run them for the specified number of steps to see whether they did indeed halt. After each program had halted as predicted, it would increase its probability that this was in fact a Turing Oracle, using Bayes’ theorem and estimates (or computationally derived values) of the probability of guessing this right. If it did this for long enough, and this was in fact a Turing Oracle, it would gain higher and higher certainty of that fact.
What is it that the AI is doing? We can view the whole above process as a program which given one of a limited set of experimental outcomes outputs the probability that this experimental outcome would be the real one if H held. In the case of the Turing Oracle above the set of outcomes is the set of pairs (P,n) where P is a program and n a positive integer, and the program will output 1 if P halts after n steps and 0 otherwise. I think this captures in full generality all possibilities a computable agent would be able to recognise.
What if the AI later on gains some extra computational capacity which makes it non-computable? Say, for example, that it finds a Turing Oracle as in the above example and integrates it into its main processor. But this is essentially the only way it could happen: for the AI to become uncomputable, it would have to integrate an uncomputable physical process into its own processing. And for the AI to know it was actually uncomputable, and not merely incorporating the results of some computational process it didn’t recognise, it would have to perform the above test. So when it now performs some uncomputable test on a new process, we can see this simply as the composite of the tests of the original and the new process, viewing all the message-passing between the uncomputable processes as part of the experimental setup rather than internal computation.
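The update loop described above can be sketched numerically. This is a toy model, and the per-trial Bayes factor of 2 is an illustrative assumption (the likelihood of a computable guesser getting one halting prediction right), not a derived quantity:

```python
from fractions import Fraction

def oracle_posterior(prior, verified_predictions, bayes_factor=Fraction(2)):
    """Posterior P(this is a true Turing Oracle) after some number of
    verified halting predictions, using the odds form of Bayes' theorem.
    The bayes_factor per trial is a made-up illustrative parameter."""
    odds = Fraction(prior) / (1 - Fraction(prior))
    # Each verified prediction multiplies the odds by the Bayes factor.
    odds *= bayes_factor ** verified_predictions
    return odds / (1 + odds)

# Starting from a 1% prior, ten verified predictions in a row:
print(float(oracle_posterior(Fraction(1, 100), 10)))
```

As the comment says, no finite number of trials gets the posterior all the way to 1, but it climbs steadily toward certainty.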
Hrmm… Well, if the AI is computable, it can only ever arrive at computable hypotheses, so we can enumerate them with any complete program specification language. I feel like I want to say that anything that isn’t computable, doesn’t matter. What I mean is, if the AI encounters something that is truly outside of its computable hypothesis space, then there’s nothing it can do about it. For concreteness:
TL;DR for paragraph below: our FAI encounters an Orb, which seems to randomly display red or green, and which our FAI really really wants to model accurately.
However, try as it might, our poor computable AI cannot do even an epsilon better than random in predicting the Orb. This is because the Orb is not computable. My point is that this is not distinguishable from a problem that the AI just can’t solve given its current resources. If the prediction problem is really hard, but nevertheless the AI can gain useful information about how the aliens will behave… then either the AI is modelling the alien species, plus a Truly Random variable (in the classical statistics sense) for the Orb, or the AI can do better than random at predicting the Orb.
Therefore, if our AI ever encountered something Truly Random or otherwise Really Weird (that is, something that is coherent in whatever way it has to be in order to be real, but not computable), then the AI would not and could not do better than it would by just reacting as though it was a problem too hard to solve, and modelling it as a random variable. For things that seemed random or weird but that were actually computable, the AI would naturally (if we’ve done our job) become smart enough or think long enough to solve the problem, or at least work around it. For things that were really truly unpredictable by a computable hypothesis, the same thing would happen. It’s just a special case where the AI never gets around to solving it.
Declaring uncomputable things irrelevant hopefully isn’t too crippling in practice; the universe looks computable, Time-Turners can maybe be brute-forced, etc. Now, that doesn’t really answer the question. What do we do about uncomputable universes? Again, nothing… except if there is a chance of hypercomputation. But even if an AI is trying to somehow harness hypercomputation to do better than chance at dealing with an uncomputable facet of reality, it still has to figure out how to do the harnessing using its current computable hypotheses and the rest of its computable self.
In other words, hypercomputation isn’t a special case. It’s still a part of reality that correlates in some way with another part of reality (right? I’m, like, totally out of my depth here, but as long as we’re speculating...). The AI can notice and use this, while still only working from computable hypotheses. It should do this naturally, even operating under computable hypotheses, if it sees some way of expanding its (hyper)computational abilities.
TL;DR: whether or not the universe is computable, the AI can’t do better than computable hypotheses. The differences between reality and the best hypotheses that the AI can muster will be unavoidable, since the AI is computable. It can harness hypercomputation, but it still does so working from its computable hypotheses. Unless we program an uncomputable AI. Are you trying to ask how the AI should write an uncomputable extension of itself if it encounters hypercomputation?
Since I’m asking about a superintelligent AI’s model of the world, and the world of an AI is digital input and output, I first enumerate all possible programs, then enumerate all input strings of finite length, then count diagonally over both.
Then I convert the bits into ASCII, compile them in LOLCODE (since I’m already doing this for the lulz), and throw out the ones that give me compiler errors or duplicates.
Then I sum over the countable number of computable things using inverse squares and divide by pi squared over six (minus whatever I’ve thrown out).
I hope you didn’t want this information to be compiled in a way that is at all helpful to anyone, ever.
But if you did, I guess I might attempt to organize the information as the set of graphs on N vertices for all natural numbers N, or attempt to classify the set of categories of modules of objects with models in Grothendieck’s second universe, so that I could do all possible linear algebra. And then I would say that if I can’t use linear algebra, the object I’m studying doesn’t have local consistency, so it doesn’t make sense to think about it as a continuous universe, and I no longer know what thought means, so I have more important issues to deal with.
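For what it’s worth, the diagonal count over programs and inputs is straightforward to write down. A minimal sketch, where “program” and “input” are just indices (no LOLCODE compiler attached):

```python
from itertools import islice

def diagonal_pairs():
    """Yield every (program_index, input_index) pair exactly once,
    walking the anti-diagonals so any given pair is reached after
    only finitely many steps."""
    diagonal = 0
    while True:
        for p in range(diagonal + 1):
            yield (p, diagonal - p)
        diagonal += 1

print(list(islice(diagonal_pairs(), 6)))
# → [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The point of the diagonal order is exactly the one in the comment: both axes are infinite, so counting row by row would never finish row zero, while the diagonal walk covers everything.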
This seems an odd question to ask in the comments like this. I know how I’d go about figuring out the answer, but it involves doing lots and lots of really hard math. Coming up with an answer in the five minutes that anyone is realistically going to spend on this seems almost disrespectful, and certainly not very productive.
Or I just misunderstood what you were asking.
come up with an example of a “strange world” which could not “conceivably turn out to” include this one
construct a world that does include both
I think the problem of enumerating these possibilities is impossible. You should notice that even the conventional possibility, quantum field theory somehow modified to have gravity and cosmology, is incomplete. It describes a mathematical construct, but it doesn’t describe how our experiences fit into that construct. It’s possible that just by looking at this mathematical object in a different way, you can find a different universe. That’s why this point-of-view information is actually important. Looking just at the possibilities where the universe is computable, enumerating Turing machines looks sufficient, but it is not. Turing machines don’t describe where we should look for ourselves in them, which is the most important part of the business. If we allow this, we should also allow the universe to be described by finite binary strings, which at times code for a Turing machine where we can be found in a certain point of view, but at other times code for various more powerful modes of computation. We can even say there is only one possibility, the totality of mathematical objects being the universe, which we can find ourselves in in very many different ways (this is the Tegmark level 4 multiverse theory).
So we can’t truly enumerate all the possibilities, even assuming a causal universe, since a causal diagram isn’t really capable of fully describing a possibility. It might be reasonable at certain times to enumerate these things anyway, and deal with this degeneracy in an ad hoc way. In that case, there would be nothing wrong with also making an ad hoc assumption along the lines of saying that the universe must be Turing computable (in which case you can simply list Turing machines).
I am now starting to REALLY lament my lack of formal education, because I JUST NOW managed to grasp why the whole “speed of light” thing makes sense. Stupid poverty, ruin my fun. :D
I would very much like to see an abstract at the beginning of this article. It is interesting, but rather long, and when the Game of Life example started, I was rather lost as to what the intention of the article was supposed to be. I admit that I haven’t read the post this is a follow-up to, but given that one of the largest criticisms of the sequences is their inaccessibility to newcomers, there might be room for improvement in this new series of posts.
The Ed stories by Sam Hughes might be interesting to you. (Warning: long.)
Edit: If you want to skip to the bit that your post reminded me of, that’s the chapter titled “Hotel Infinity”, specifically starting at the first instance of the words “Time travel”.
Is there a word for time travel that works like this? I’m writing a novel that has it, and would like to be able to succinctly describe it to people who ask what it’s about or how the time travel works.
(I’m not invoking computer simulation, but the effects as far as the characters see are like this—or rather, the characters see time travelers from the future but never get to see the versions of the universe where they get to remember seeing someone leave to travel to the past.)
Yes, that’s a type 3 plot.
Such numbering isn’t however very meaningful or intuitive… I’d just say “timeline-overwriting”.
This is the standard model of time travel / prophecy in Greek myths, isn’t it? Maybe I’m overgeneralizing from Cassandra.
 Eliezer calls it Stable Time Loops, which is a term I’ve seen before.
My understanding is that Stable Time Loops work differently: basically, the universe progresses in such a way that any and all time traveling makes sense and is consistent with the observed past. Under the above model, you will never witness another copy of yourself traveling from the future, though you might witness another copy of yourself traveling from an alternate past future that will now never have been. With STL, you can totally witness a copy of yourself traveling from the future, and you will definitely happen to travel back in time to then and do whatever they did. That’s my understanding, at least.
Of course, there’s no reason to strictly believe that what you thought was a future version of yourself wasn’t either lying or a simulacrum of some kind, or that any note you receive after intending to send a note back to yourself hasn’t been intercepted and subverted.
Which leads to interesting stories when those expectations are subverted, but only after they’ve been established.
True! That’s why every twelve-year-old establishes elaborate passphrases for identifying alternate / time-displaced selves.
What makes you think that elaborate passphrases are uncountably infinite? Any loop that includes a ‘we await confirmation that the plan has succeeded before we implement it’ clause at the beginning is virtually foolproof. In order to foil such a plan, one needs to overcome the adversary, prevent them from signaling failure (ever!), and then manage to signal success. (So that the plan is set into motion.)
I think that’s because people change into their analog in the new timeline, rather than disappearing in a butterfly effect—the resulting person is still “the same person”, and thus they have not died (much like we don’t conclude people have died when we cut to “twenty years later” and they all have grey hair). Unless, that is, the intent was to “kill them in the past”, in which case it’s treated as murder, or time travel is A Bad Thing, in which case it’s sometimes inevitable that meddling with it results in people never being born, and that’s terrible.
Although the characters being fine with it all doesn’t hurt, either …
The fact that you can’t think of a way to compute the behavior of such a universe is no reason to conclude that it can’t be done.
In particular, it’s easy enough to come up with simplistic billiard ball models where you can compute events without ‘backtracking’. Now such models are certainly weird in the sense that in order to compute what happens in the future one naturally relies on counterfactual claims about what one might have done.
However, Quantum Mechanics looks a great deal like this. The existence of objects like time turners creates the opportunity for multiple solutions to otherwise deterministic mechanics, and if microscopic time turners were common, one might develop a model of reality that used wave functions to represent the space of possible future paths, which can interfere constructively/destructively via interaction from time-turner-type effects.
That’s the transactional interpretation, right?
I’m not sure we’re in a causal universe.
According to the theory of timeless physics, our universe is governed by Schrödinger’s time-independent equation. If the universe can expand without limit, and the potential energy eventually decreases below the total energy, kinetic energy will be driven below zero. This means that the amplitude will change exponentially as the universe expands. There are two ways this can work. Either the amplitude will increase exponentially, in which case we’d expect to be in one of those negative-energy configuration states, or the amplitude will exponentially approach zero, in which case the universe would look pretty much like it does now. If you just set boundary conditions at the big bang, you’d almost certainly end up in the former case. In order to get the latter case, you have to have part of your boundary conditions that as the universe expands the amplitude approaches zero. This means that the universe is partially governed by how it ends, rather than just how it begins.
There are other explanations, of course. Perhaps amplitude doesn’t matter after a while. Perhaps the multiverse is finite. Perhaps timeless physics was wrong in the first place. Perhaps it’s something else I haven’t thought of. I’m just not sure that the expanding universe boundary condition should be rejected quite yet.
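To spell out the sign argument about kinetic energy, here is the one-dimensional time-independent Schrödinger equation, rearranged (a standard textbook form, used here only as an illustration):

```latex
-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi
\quad\Longrightarrow\quad
\frac{d^2\psi}{dx^2} = \frac{2m}{\hbar^2}\bigl(V(x) - E\bigr)\,\psi
```

When $V > E$ (kinetic energy driven below zero), the right-hand side has the same sign as $\psi$, so the general solution is a mix of growing and decaying exponentials $e^{\pm\kappa x}$ with $\kappa = \sqrt{2m(V-E)}/\hbar$. Selecting the decaying branch rather than the growing one requires a condition at large scale factor, which is exactly the kind of boundary condition “at the end” described in the comment above.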
Also, if you accept SSA, the universe is acausal. The probability of being now is dependent on the number of people in the future.
I guess you could still build a causal graph if the universe is defined by initial and end states—you’d just have two disconnected nodes at the top. But you’d have to give up the link between causality and what we call “time”.
“But you’d have to give up the link between causality and what we call ‘time’.”
You’d just have to make it slightly weaker. Entropy will still by and large increase in the direction we call “forward in time”. So long as entropy is increasing, causality works. I don’t think the errors would be enough to notice in any feasible experiment.
So, there’s direct, deterministic causation, like people usually talk about. Then there’s stochastic causation, where stuff has a probabilistic influence on other stuff. Then there’s pure spontaneity: things simply appearing out of nowhere for no reason, but according to easily modeled rules and probabilities. Even that last is at least theorised to exist in our universe—in particular, as long as the total energy and time multiply to less than Planck’s constant (or something like that). At no point in this chain have we stopped calling our universe causal and deterministic, no matter how it strains the common-use meaning of those terms. I don’t see why time turners need to make us stop either.
To take your Game of Life example: at each stage, the next stage can be chosen by calculating all self-consistent futures and picking one at random. The game is not acausal, it’s just more complicated. The next state is still a function of the current (and past) states, just a more complicated one. A time-turner universe can also still have the property that it has a current state, and it chooses a future state taking the current state as (causal) input. Or indeed the continuous analogue. It just means that choosing the future state involves looking ahead a number of steps, and choosing (randomly or otherwise) among self-consistent states. The trick for recovering causation is that instead of saying Harry Potter appearing in the room was caused by Harry Potter deciding to turn the Time-Turner in the future, you say Harry Potter appearing in the room was caused by the universe’s generate-a-consistent-story algorithm. And if you do something that makes self-consistent stories with Harry Potter appearing more likely than otherwise, then you are having a stochastic causal influence on this appearance. Causality is intact; it’s just that the rules of the universe are more complicated.
Which brings me to the meditation. Dealing with the idea of a universe with Time-Turners is no different from dealing with a universe with psychics. In either case, the universe itself would need to be substantially more complicated in order for these things to work. Both involve a much larger increase in complexity than they intuitively seem to, because they’re a small modification to the universe “as we see it”, but making that change requires a fundamental reworking of the underlying physics into something substantially more complicated. Thus, until substantially strong evidence of their existence comes to light, they languish in the high-Kolmogorov-complexity land of theories which have no measurable impact on an agent’s choices, non-zero probability or otherwise.
Who’s to say there aren’t time-turners in the universe, by the way? Positrons behave exactly like electrons travelling backwards in time. A positron-electron pair spontaneously appearing and soon annihilating could also be modelled as a time loop with no fundamental cause. You can make a time-turner situation out of them as well, going forward, then backwards, then forwards again. Of course, information isn’t travelling backwards in time here, but what exactly does that mean in the first place anyway?
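The generate-a-consistent-story algorithm described above can be made concrete in a toy automaton. This is entirely made-up dynamics, not Conway’s or Rowling’s rules: a one-bit “message from the future” is written into cell 0, and a candidate state counts as self-consistent only if the same bit comes back around the loop:

```python
def step(state):
    """Toy deterministic rule standing in for Life: each cell becomes
    the XOR of itself and its right neighbour (wrapping)."""
    n = len(state)
    return tuple(state[i] ^ state[(i + 1) % n] for i in range(n))

def consistent_futures(state, horizon=1):
    """Enumerate the self-consistent 'time-turned' versions of `state`.
    A one-bit message is written into cell 0 now; the candidate is
    consistent only if, `horizon` steps later, cell 0 holds the same
    bit (what gets sent back equals what arrived)."""
    options = []
    for message in (0, 1):
        candidate = (message,) + state[1:]
        future = candidate
        for _ in range(horizon):
            future = step(future)
        if future[0] == message:  # the loop closes
            options.append(candidate)
    return options

# From the all-zero state, both messages turn out self-consistent, so
# the universe gets to pick one at random:
print(consistent_futures((0, 0, 0, 0)))
```

Note that some states admit no consistent continuation at all under these toy rules (`consistent_futures((0, 1, 1, 0))` is empty), which is the toy version of a grandfather paradox; a real generate-a-consistent-story universe would need rules that guarantee at least one solution.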
Just a general comment on how to make people think harder about what you write, given the multitude of poorly thought out comments. There is a standard technique in quality control, where a small amount of defective products is inserted into the stream in order to keep the controller’s focus. (Can someone find a link for me, please?) There is a similarly standard, though rarely used technique in teaching, where the instructor gives a false statement in the course of a lecture and the students are expected to find it (in some variations sometimes there is none).
The material you describe is fairly involved and non-trivial, so making people sift through all your statements and how they fit (or do not fit) together by inserting an occasional intentional falsehood (and being upfront about it, of course), then fixing it a few days later (while leaving a trace of what got edited and when), strikes me as a reasonable way to make sure that your readers pay attention.
I’d rather not have this.
Instead of making up a high-status rationalization, let’s just say that I am neither the brightest nor the most diligent reader, and thus the article without intentional errors gives me more value that an article with intentional errors. I would probably just not notice the error.
I might be some kind of monster but: I don’t see what is bad about my timeline ending. There’s no suffering involved (indeed, much less than the timeline continuing.) It’s not like we had a civilizational fuckup that lowers our status relative to our modal counterparts; what we would have gone on to do remains unchanged. People would be denied experiences but I don’t see how you can endorse that without coming to the repugnant conclusion (which does seem genuinely horrific.)
The question is simply whether annihilating the universe has higher expected value than letting it continue. To determine that, you can’t just compare the expected suffering of our future survival against the zero-level suffering of our nonexistence. You also have to factor in the positive experiences that are attainable should we survive, but not if we die.
My hypothesis is that universes that allow macroscopic time travel are very unlikely to have life intelligent enough to exploit the time travel. The hypothesis depends on two points: 1) policing time travel is likely to be extremely difficult or impossible, and 2) at least some members of the species will want to cause trouble with a time machine, including the sort of trouble that causes the entire species to have never evolved in the first place. Therefore I take the fact that we exist as evidence that arbitrary amounts of time travel aren’t possible in our universe...
Larry Niven once commented that, if past-changing time travel is possible, the most stable universes will be those in which time travel is never invented...
Doesn’t follow. The fact that you exist in a certain way isn’t evidence about prior probability of your existing in this way. (Your points (1) and (2) are arguments about prior probability that don’t have this problem; using observations that depend on the value of prior probability also works.)
Hm. It looks like my intuition has a bug. I’ll have to think about it more.
Alright. To preface this, let me say that I’m sorry if this is a stupid issue that’s been addressed elsewhere. I’m still working my way through the sequences.
But… the jump from discrete causality to continuous causality seems to be hiding a big issue for the argument against time travel. It’s not an insoluble issue, but the only solution that I see does pose problems for the locality of this definition of “causal universe”.
To start from the beginning: the argument in the discrete case relies heavily on the computability of the universe. In a causal universe, we can compute time t=8 based on complete knowledge about time t=7, and then we can compute time t=9 on our newly found knowledge about time t=8.
But as far as I can see, there’s not similar computability once we move into a continuous universe. In particular, if we have complete knowledge about a subspace K of spacetime, even with infinite computing power we can only find out the state of the universe along what I’ll call the “future boundary” of the space K: in particular, we can only compute the state of points P in space-time whose past light cone of height d lies entirely inside K, for some d. That means that we can never compute time t=8 based on knowledge that we can have at time t=7, in fact, we can’t compute time t=8 unless we have complete knowledge about time t<8.
So computability doesn’t seem to pose a problem for Time Turners, because even without Time Turners the (non-local) future is not computable. To put it another way, continuous causality has been defined in an entirely local way, which means that non-local cycles don’t seem to be a problem. In fact, positing a Time-Turner jump from times t<9 to time t=8 merely requires redefining the topology of time in such a way that point P at times t<9 is in the past light cone of the same point P at time t=8.
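The “future boundary” described above is, in relativity terms, the future domain of dependence of the known region K (a standard notion, restated informally):

```latex
D^{+}(K) \;=\; \{\, p \;:\; \text{every past-directed causal curve from } p \text{ intersects } K \,\}
```

Complete data on K fixes the state exactly on $D^{+}(K)$; for any bounded region K of the $t=7$ slice, $D^{+}(K)$ is a shrinking cone that never covers the whole $t=8$ slice, which is the sense in which the non-local future is not computable from knowledge we can actually have.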
The obvious reply is that (by definition) point P at time t=8 must be in the past light cone of the same point P at all times 8&lt;t&lt;9. So we have the past light cone of point P intersecting the future light cone of point P. Should our definition of a “causal universe” exclude that? That seems perfectly reasonable, but doing so seems to destroy the locality of our definition of a causal universe, because the intersection of light cones that we’re objecting to is not a local phenomenon: for a point P at time t&lt;9 arbitrarily close to 9 (i.e. in the local past light cone of P at t=8), point P at time t=8 lies only in its non-local past light cone.
Does the issue I’m trying to get at make any sense? I can rephrase if that would help, and I’d be happy to read anything that addresses this.
This probably sounds like a dumb question, but given the assumptions of many worlds, timeless physics, and no molecular identity, and then adding that time travel is possible, why would that even be especially interesting?
In particular, why would that be different than 2D land that suddenly works out that instead of always having to move forward in the Z dimension you can move backwards as well.
“But if you were scared of being wrong, then assigning probability literally zero means you can’t change your mind, ever, even if Professor McGonagall shows up with a Time-Turner tomorrow.”
Doesn’t this assume that every mental state of mine has to be causally connected to a prior mental state? If we live in an acausal reality, I’m willing for my beliefs to be more related to acausal events than to Bayesian updating. I don’t know how clear that is, but it is your fault for bringing up time travel ;)
P(A) = 0; P(B) = 0; P(A|B) = 1
If we define probability to be continuous on [0,1], the math works. In practice, however, the probability of Professor McGonagall showing up with a Time-Turner tomorrow, given that I see and talk to her and try out the time-turner myself and it has the expected results, including being able to solve NP-complete problems in constant time, remains zero. The odds of spontaneous creation of Professor McGonagall and a device which causes me to perceive that I have traveled through time and which solves any NP-hard problems that I choose to give it in finite time is epsilon. The odds of a self-consistent hallucination that the above events have happened is an epsilon of a higher order.
Therefore, given impossible evidence one should conclude that one is insane. Once one has concluded that one is insane, one should reconsider all of one’s prior judgements in light of the fact that one cannot tell real evidence from hallucinatory evidence- which brings all of the evidence regarding the impossibility of any event into question.
In other words, there is epsilon chance that all of your experience is fake, and therefore at least epsilon uncertainty in any prediction you make, even predictions about pure hypothetical situations where mathematical proof exists.
But didn’t you already answer this? The computer needed to find-and-mark a universe with closed time loops is much, much larger, computationally speaking, than the one you need to, say, find-and-mark our universe. If you give me no information other than “a computer is simulating a universe”, I’ll still rate it more likely that it’s doing something that doesn’t require iterating the totality of predecessor search space.
But if the computer simulating operates in an acausal universe, the limitations on complexity we see in our causal-universe computers may not hold, and so the point may be self-defeating.
Finding that acausal universe still requires computational resources equivalent to (at least) traditional causality. It just moves the problem outwards. (If I understand this right)
No. My point is that computational complexity as understood in this universe depends on causality. Without causality, the logic falls apart.
If a universe is acausal, there could, potentially, be O(0) algorithms for basically anything. (I can just input a condition that terminates only if the answer is correct, and have the answer immediately based on the fact that it worked.) If so, the simulation could be instantaneous, and therefore any simulation would have the same cost.
My point is that it’s not very useful to know that a universe exists, if we don’t have a method for locating it in configuration space—and that method would eat the costs of causality, even if the universe itself doesn’t. Like in the GAZP.
I’d suggest that if this is a meaningful question at all, it’s a question about morality. There’s no doubt about the outcome of any empirical test we could perform in this situation. The only reason we care about the answer to such questions is to decide whether it’s morally right to run this sort of simulation, and what moral obligations we would have to the simulated people.
Looked at this way, I think the answer to the original question is to write out your moral code, look at the part where it talks about something like “the well-being of conscious entities,” taboo “conscious entities,” and then rewrite that section of your moral code in clearer language. If you do this properly you will get something that tells you whether the simulated people are morally significant.
Why can’t you change your mind ever? Is this because of the conservation of expected evidence?
No. This isn’t conservation of expected evidence but a simple consequence of Bayes’ theorem. If your prior probability is zero, then you end up with a zero in the numerator of the theorem (since P(A) is zero). So your final result is still zero.
Of course, if you also assigned a probability of zero to the event you just observed, now you have a 0⁄0 error, which is more awkward to deal with. The case of having a posterior probability of zero in contradiction to the evidence is not particularly problematic for the agent’s thinking, it just isn’t very useful. But a true 0⁄0 event might well cause serious issues.
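Both cases can be made concrete with a quick sketch of Bayes’ theorem (the numbers are hypothetical):

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    if p_e == 0:
        # You assigned zero probability to the event you just observed:
        # this is the awkward 0/0 case.
        raise ZeroDivisionError("P(E) = 0: observed an 'impossible' event")
    return p_e_given_h * prior_h / p_e

# A zero prior survives any amount of evidence:
print(posterior(0.0, 0.99, 0.01))  # 0.0
# A tiny but nonzero prior can still be updated upward:
print(posterior(1e-6, 0.99, 0.01))
```

No likelihood ratio, however extreme, moves a prior of exactly zero; only a nonzero prior gives the update anything to multiply.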
In practice, you conclude you hallucinated the event.
(There’s also a prophecy. True, a prophecy could simply indicate a Meddling Force that nudges events in a particular direction, rather than someone receiving information from the future, but once you have one obviously-causality-violating thing, there’s much less call to be extremely suspicious of other, apparently-causality-violating things.)
Rowling is on record as stating that prophecies can be just-walked-away-from, which makes their dynamics less clear and not obviously a self-consistency thing.
What does this even mean?
I think it means that prophecies may be merely “very good magically-derived estimations of the future”—we are not sure they bind the past with the future as tightly as Time-Turners seem to do.
Oh, I didn’t know that, thanks.
Guys, I am not a physicist but I have problems with understanding this:
Is it a proven, known thing that space is really continuous and indefinitely divisible?
It’s empirically unprovable, but it is an assumption of standard QM and standard relativity.
(Another non-physicist here) I thought quanta were the living proof that on a fundamental level the universe was discontinuous.
(An ex-physicist here) Quantization of measured energy levels in a bound system has nothing to do with the potential discontinuity of spacetime. The latter is hypothesized, but by no means proven or even tested. As for the original quote, it states that one has to leave room for models other than “a discrete causal graph”.
The position operator has a real valued spectrum.
Question—isn’t the sheer abundance of cyclic graphs something of a strong argument that we are in an acausal universe? If time travel is simply very difficult, it’s probable that we’d never see it in our past light cone by chance (or at all, barring intelligent intervention), and locally such a universe looks causal: events have causes even if they don’t have a First Cause.
How did you decide that our prior regarding the causal structure of the universe should be a somewhat uniform distribution over all directed graphs?
I didn’t—but I did do a back-of-the-envelope calculation, which predicts that there are something like a googolplex times more graphs with one cycle than there are acyclic paths, assuming 10^60 nodes (the number of Planck times since the beginning of the universe).
And I don’t have a prior that says that an acausal universe should have a probability penalty of one over googolplex.
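Whatever the exact acyclic class intended, the dominance of cyclic digraphs can be brute-forced for small node counts (my own sketch: labeled nodes, no self-loops, cycles detected with Kahn’s algorithm):

```python
from itertools import product

def is_acyclic(n, edges):
    """Kahn's algorithm: a digraph is acyclic iff we can repeatedly
    peel off nodes with no incoming edges until none remain."""
    edges = set(edges)
    nodes = set(range(n))
    while nodes:
        sources = [v for v in nodes
                   if not any((u, v) in edges for u in nodes)]
        if not sources:
            return False  # every remaining node has an incoming edge: cycle
        nodes -= set(sources)
    return True

def count_graphs(n):
    """Count (acyclic, cyclic) labeled digraphs on n nodes."""
    pairs = [(u, v) for u in range(n) for v in range(n) if u != v]
    acyclic = sum(
        is_acyclic(n, [p for p, bit in zip(pairs, mask) if bit])
        for mask in product([0, 1], repeat=len(pairs)))
    return acyclic, 2 ** len(pairs) - acyclic

for n in range(2, 5):
    print(n, count_graphs(n))  # cyclic graphs dominate rapidly as n grows
```

For n = 4 this gives 543 acyclic graphs against 3553 cyclic ones, and the cyclic fraction only grows with n—though, as discussed below, nothing licenses slapping a uniform prior on the space.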
(I assume you meant “acyclic graphs”)
If this sort of reasoning worked, you could find strong arguments for all sorts of (contradictory) hypotheses. For instance:
I mean, your observation is interesting, but I don’t think it constitutes a “large argument”. You can’t just slap reasonable-ish priors onto spaces of mathematical objects, and in general using math for long chains of inference often only works if it’s exactly the right sort of math.
No, I meant acyclic paths—I am not at all sure that that is the correct term, but I meant something like “no branches”—there is only one possible path through the graph and it covers all the nodes.
And, well, point granted. Honestly I was expecting something like that, but I couldn’t see where the problem was*, so I went ahead and asked the question.
… yeah, in retrospect this allows for silly things like “surely rocks must fall up somewhere in the universe.”
Perhaps something is simply wrong with my moral processing module, but I really don’t see any morality issues associated with this sort of thing. Morality is, in my opinion, defined by society and environment; sure, killing people in the here and now in which we live is (in general) morally wrong, but if you go too far away from that here and now, morality as we know it breaks down.
One area where I feel this applies is the “countless universes” arena. In most of these cases, we’re bandying about entire universes in such a fashion that you can effectively stamp them with an (admittedly very large) numerical index that describes them completely. At that point, you’re so far out of the context in which our morality is geared that I don’t feel it makes sense.
Suppose an entire universe full of people (described by a huge number we’ll denote as X) is destroyed, and another one also full of people (described by another huge number we’ll denote as Y) is created in its place—just exactly what moral context am I supposed to operate on here? We’ve basically defined people as chunks of a bitstream which can be trivially created, destroyed, and reproduced without loss. This is completely out of scope of the standard basis for our morality: that people and intelligent thought are somehow special and sacred.
Intelligence might be special and sacred when operating inside of a universe where it is uncommon and/or ultra-powerful. However, when talking about universes as mere objects to be tossed around, to be created and destroyed and copied and modified like data files, moral qualms about the contents suddenly seem less valid.
Note: Even just reducing the “specialness” of intelligence has similar effects for me. Consider the situation where you could create identical copies of yourself for zero cost, but that each had maintenance and upkeep costs (food, housing, etc.) and that you did not know which one was the “original”.
In this environment, if I had sufficient belief in the copying process, I would have no moral qualms whatsoever about creating a copy of myself to take out the trash, and then self destructing that copy. The mechanism to determine the “trash copy”? A coin flip. If I lose, I take out the trash and shut myself off; otherwise, I keep doing the important stuff. I don’t even flinch away from the idea of myself “taking out the trash then self destructing”. It seems like a perfectly normal, day to day thing, as long as the tasks I planned out continue to get done.
Any inference about “what sort of thingies can be real” seems to me premature. If we are talking about causality and space-time locality, it seems to me that the more parsimonious inference regards what sort of thingies a conscious experience can be embedded in, or what sort of thingies a conscious experience can be of.
The suggested inference seems to privilege minds too much, as if to say that only the states of affairs that allow a particular class of computation can possibly be real. (This view may reduce to empiricism, which people like, but stated this way I think it’s pretty hard to support! What’s so special about conscious experience?)
EDIT: Hmm, here is a rather similar comment. Hard to process this whole discussion.
EDIT EDIT: maybe even this comment is about the same issue, although its argument is being applied to a slightly different inference than the one suggested in the main article.
I am having my doubts that time travel is even a coherent concept. Actually, I have my doubts about time itself. At non-relativistic speeds and over small distances we can kid ourselves that two events not in the same place can both happen “at” a particular time. But we know that in general that’s only a convenient simplification. There’s no objectively real “t” axis in spacetime independent of the observer’s frame of reference.
But Eliezer gave you a constructive example in the post!
OK then, I am having doubts that my mind is coherent enough to discuss time travel usefully.
Understandable. Your brain shipped with a built-in module that models time as a property of reality in order to simplify other processes. Most people have to bludgeon it to near-death in order to just barely avoid the basic failure modes of thinking about time travel.
In fairness, his example assumed a universal timeframe (experienced by the simulators.)
There’s no x,y, or z axis independent of the observer’s frame of reference either. Does that mean that spatial travel is not a coherent concept?
If the coherent concept is ‘spacetime travel’, why is it required that there exist an ordering over all points in spacetime? Every pair of points (A,B) falls into one of three categories: events at point A can directly or indirectly have an effect/be observed at point B, but not vice versa; events at point A cannot have an effect or be observed at point B, directly or indirectly; events at point B can be observed or have an effect at point A, directly or indirectly.
It is difficult for different observers to communicate where points are, but they divide all points in spacetime into the same three categories.
This whole post strongly reminds me of “A New Kind of Science”, where Stephen Wolfram tries to explain the workings of the universe using simple computational structures like Cellular Automata, network systems, etc. I know that Wolfram is not highly regarded for many different reasons (mostly related to personal traits), but I got a very similar feeling when reading both NKS and this post—that there is something in the idea, that the fabric of the universe might actually be found to be best described by a simple computational model.
 - http://www.wolframscience.com/nksonline/toc.html
One of the neat things the Fringe TV show does is a thing with the part of time travel where the previous timeline and all the people in it get eradicated.
Here’s my problem. I thought we were looking for a way to categorize meaningful statements. I thought we had agreed that a meaningful statement must be interpretable as, or consistent with, at least one DAG. But now it seems that there are ways the world can be which cannot be interpreted as even one DAG, because they require a directed cycle. So have we now decided that a meaningful sentence must be interpretable as a directed graph, cyclic or acyclic?
In general, if I say all and only statements that satisfy P are meaningful, then any statement that doesn’t satisfy P must be meaningless, and all meaningless statements should be unobservable, and therefore a statement like “all and only statements that satisfy P are meaningful” should be unfalsifiable.
I’m not sure this is necessarily correct. We typically model quantum configurations as functions defined over a continuous domain, but it is still possible that quantum configurations could be representable by a finite set of numbers (more precisely: that all possible configurations of our universe could be expressed as f(x) for some arbitrary but fixed f and some finite vector x). This would follow if the amount of information in the universe is finite, since we know that information is neither created nor destroyed over time. In this case we could represent states of the universe as a finite set of numbers and draw causal arrows between these states over time. Of course, such a representation might be much less convenient than thinking about continuous wavefunctions etc.
There is a hugely successful webcomic called Homestuck (maybe you’ve heard of it; it raised over $2 million in one month to make a game out of it) and a significant part of the comic’s events are reliant on time travel. The comic itself is dense and insanely complex, so I will do my best to spoil as little as possible, because to my knowledge there are no plot holes, and in the end it all makes sense if you keep reading through to Act 5 and beyond.
The basic idea is that the four main characters, the Kids, are playing an immersive video game called Sburb, and the game takes place within your universe. In fact, the game takes over the entire timeline of your universe for its own purposes, that is, to create a new universe. In Act 4, Dave is shown in a Bad Future where something happened and made the game unwinnable. At some point he had created time-tables that let him go back and forth in the timeline, became the Knight of Time, and eventually decided to go back and Make Things Right. We then learn that Sburb, the game-universe, encourages the use and abuse of stable time loops, but punishes those who try to change fate by killing the errant traveler. This leads to a handful of dead Daves piling up, the equivalent to Dumbledore discovering his own sticky notes, except more gruesome.
The biggest mystery of Act 4 was the fact that the Dave from the Bad Future came from a Doomed timeline, but his interference in the Alpha timeline was critical to winning the game. In fact, his interference caused a grandfather paradox, as he prevented the events that caused him to go back. Normal causality had to be thrown out the window. This caused a hurricane of argument on the MSPA forums over which time travel theory used Occam’s Razor the best.
My personal favorite was developed by myself and BlastYoBoots, and we called it Wobble Theory. It worked like a turing machine: Sburb steps through the initial seed of the universe and tries to see if certain conditions are met, chief among them being the WIN or LOSE condition, but there are additional caveats such as requiring that all inter-session conversations take place, the planting of the Frog Temple, the passage of Jack Noir and the Crosbytop/Fedora, etc. If these conditions were not met, the universe would mark down places where things had gone right (i.e. adhered to the Alpha timeline as decided), constrain those, and see what else it could tweak. The most-often changed variable would be a Time player’s decisions, as they were literal butterfly effects who could bring huge changes back to the present after their decisions had propagated into a Doomed future. In this way, the Alpha timeline was like a vigorously shaken wet noodle, grabbed from one end and pinched along its length until it stopped wobbling.
Eventually, the comic explained that Immutable Timeline Theory was the winner. Sburb had literally calculated the entire timeline AND all of its off-shoots in one go; that something happened was “AN IMMUTABLE FACT THAT WE ARE STATING FOR THE RECORD.” The various problems with this are explained away with a turtles-all-the-way-down demeanor, and going into those explanations would only spoil more than I want to.
The short version is that Andrew Hussie literally exists in-universe as the author of the comic, and if he says the timeline ought to go this way, it will.
One thing this model ignores, so far as I could tell, is the reflective point of view of third-person/past-tense narration. Both Rowling’s HP and Eliezer’s HPMOR are past-tense narrations, stories told progressively, but always reflectively (“Harry said” instead of “Harry says”).
So, what if the causal links are already updated to the new information introduced with a 9pm|8pm turn, and this in turn updates the memories of agents within the universe, including the narrator? (Note: Narrator and Author are separate entities in this concept; most literature students would not argue with this. At least, not too much.)
As such, it’s not a cycle of information updates that the universe has to keep consistent; the new universe continues on from 8pm as normal, but new information is added to the collective memories of anyone who interacts with the Time Traveller, as well as the Time Traveller themselves.
The only way this doesn’t work is with Harry in MOR pulling a Bill and Ted with the Remembrall. Anyone with better maths want to tackle that one?
Oops, shouldn’t have posted before reading the whole thing. Still, my argument stands: what defines the “goodness” or “badness” of a universe being destroyed and immediately replaced with another identical to it in all but the most minuscule ways (the memories of agents interacting with the time machine), or for that matter the “betterness” of the previous universe?
Technically, this means “time travel” is less accurate than “history re-writer”, but to me, that doesn’t sound any worse.
I just got an idea for an interesting fictional model of time travel, based on a combination of probabilities and consistent histories.
The simplest example would go like this. Imagine you step into the time machine, travel a minute into the past, and kill your younger self. At the moment of your arrival, the universe branches into two. Since the number (total weight?) of killers should be equal to the number of victims, the branches have probability 50% each. In one branch you live and become a killer, in the other you die.
Now let’s take a more complex scenario. You flip a coin to decide whether you should step into the time machine, and another coin to kill or spare your past self. (Of course you have to travel to the moment before the first coinflip, otherwise this reduces to the previous scenario.) To figure out the probabilities, imagine that n people survive to flip the first coin. Then n/2 of them will step into the time machine and n/4 will become killers, which gives us n/4 victims. So you have a 1⁄5 chance of dying in this situation.
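The counting in that paragraph can be checked with exact fractions (a sketch of the arithmetic only, not of the branching model itself):

```python
from fractions import Fraction

n = Fraction(1)          # normalize: n people survive to flip the first coin
travelers = n / 2        # half step into the time machine
killers = travelers / 2  # half of those flip "kill"
victims = killers        # one victim per killer

# Chance of dying = victims / (survivors + victims)
p_dying = victims / (n + victims)
print(p_dying)  # 1/5
```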
Is this model new? How far can we extend it consistently? What kinds of paradoxes can arise?
Scott Aaronson’s model, which Eliezer refers to here, is basically this.
I fail to see how this is different from the standard “parallel timelines” model. It seems like you just applied probabilistic reasoning to figure out the relative occurrences of certain timelines.
Perhaps I’m misinterpreting what you mean by branching, but for all intents and purposes in the first example there are two parallel timelines which happen to be identical until in one of them a copy of the you from the other appears and kills you in this timeline, and later you disappear from the other one the killer came from.
Yeah. Basically, I’m trying to figure out how subjective probabilities would work consistently in the “parallel timelines” model, e.g. the probability of meeting a time traveler vs becoming that time traveler, when everyone’s decisions can also be probabilistic. The question interests me because in some cases it seems to have a unique but non-obvious answer.
Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.
Suppose you have a CD with infinite storage space—if this is not possible in your universe, use a normal CD with N bits of storage, it just makes the maths more complicated. Do the following:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.
What is the probability distribution of the number on your CD? What is the probability that you didn’t receive a CD from the future?
Once you’ve worked that one out, consider this similar algorithm:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X on your own CD and send it back in time.
What is the probability distribution of the number on your CD? What is the probability that you didn’t receive a CD from the future?
The Novikov Self-Consistency Principle can help answer that. It is one of my favorite things. I don’t think it was named in the post, but the concept was there.
The idea is that contradictions have probability zero. So the first scenario, the one with the paradox, doesn’t happen. It’s like the Outcome Pump if you hit the Emergency Regret Button. Instead of saying “do the following,” it should say “attempt the following.” If it is one self-consistent timeline, then you will fail. I don’t know why you’ll fail, probably just whatever reason is least unlikely, but the probability of success is zero. The probability distribution is virtually all at “you send the same number you received.” (With other probability mass for “you misread” and “transcription error” and stuff).
If your experiment succeeds, then you are not dealing with a single, self-consistent universe. The Novikov principle has been falsified. The distribution of X depends on how many “previous” iterations there were, which depends on the likelihood that you do this sequence given that you receive such a CD. I think it would be a geometric distribution?
The second one is also interesting. Any number is self-consistent. So (back to Novikov) none of them are vetoed. If a CD arrives, the distribution is whatever distribution you would get if you were asked “Write a number.” More likely, you don’t receive a CD from the future. That’s what happened today. And yesterday. And the day before. If you resolve to send the CD to yourself the previous day, then you will fail if self-consistency applies.
Have you read HPMoR yet? I also highly recommend this short story.
I wasn’t reasoning under NSCP, just trying to pick holes in cousin_it’s model.
Though I’m interested in knowing why you think that one outcome is “more likely” than any other. What determines that?
I said not receiving a CD from the future is the most likely because that’s what usually happens. But I do have a pretty huge sampling bias of mainly talking to people who don’t have time machines.
I would expect “no CD” to be the most common even if you do have one, just because I feel like a closed time loop should take some effort to start. But this is probably a generalization from fiction, since if they happen in the real universe they do “just happen” with no previous cause. So I guess I can’t support it well enough to justify my intuition. I will say that if I’m wrong about this, any time traveller should be prepared for these to happen all the time on totally trivial things.
These are pretty strong arguments, but maybe the idea can still be rescued by handwaving :-)
In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn’t seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn’t receive anything.
Seconding A113’s recommendation of “Be Here Now”; that story, along with the movie Primer, was my main inspiration for the model.
This is precisely why trying to avoid exponentially-long compute times for PSPACE problems through the use of a time machine requires a computer with exponentially high MTBF.
Why exponentially, precisely?
(Leaving soon, will post math later if anyone is interested in the details.)
Short version: Suppose for simplicity of argument that all the probability of failure is in the portion of the machine that checks whether the received answer is correct, and that it has equal chance of producing a false positive or negative. (Neither of these assumptions is required, but I found it made the math easier to think about when I did it.) Call this error rate e.
Consider the set of possible answers received. For an n-bit answer, this set has size 2^n. Take a probability distribution over this set for the messages received, treat the operation of the machine as a Markov process and find the transition matrix, then set the output probability vector equal to the input, and you get that the probability vector is the eigenvector of the transition matrix (with the added constraint that it be a valid distribution).
You’ll find that the maximum value of e for which the probability distribution concentrates some (fixed) minimum probability at the correct answer goes down exponentially with n.
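Here is a toy version of that calculation (my own construction, not necessarily the commenter’s exact model): the machine resends any candidate the checker accepts, and otherwise resends the next candidate in order; the checker errs with probability e. The self-consistent distribution π = πP can be found by power iteration:

```python
def stationary(n_bits, e, iters=5000):
    """Power-iterate pi <- pi P for a toy time-loop computer.
    Candidate 0 is the correct answer; the checker errs with prob e."""
    N = 2 ** n_bits
    pi = [1.0 / N] * N
    for _ in range(iters):
        new = [0.0] * N
        for x in range(N):
            # Correct answer: resend unless the checker falsely rejects.
            # Wrong answer: move on unless the checker falsely accepts.
            stay = (1 - e) if x == 0 else e
            new[x] += stay * pi[x]
            new[(x + 1) % N] += (1 - stay) * pi[x]
        pi = new
    return pi

for n in (2, 4, 6, 8):
    print(n, round(stationary(n, 0.01)[0], 3))
```

Solving the balance equations for this chain puts mass (1−e)/(1+(2^n −2)e) on the correct answer, so holding any fixed threshold forces e to shrink exponentially in n, matching the claim.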
Neat! I still need to give some thought to the question of where we’re getting our probability distribution, though, when the majority of the computation is done by the universe’s plothole filter.
You get it as the solution to the equation. In a non-time-travel case, you have a fixed initial state (probability distribution is zero in all but one place), and a slightly spread out distribution for the future (errors are possible, if unlikely). If you perform another computation after that, and want to know what the state of the computer will be after performing two computations, you take the probability distribution after the first computation, transform it according to your computation (with possible errors), and get a third distribution.
All that changes here is that we have a constraint that two of the distributions need to be equal to each other. So, add that constraint, and solve for the distribution that fits the constraints.
The later Ed Stories were better.
Good point, but not actually answering the question. I guess what I’m asking is: given a single use of the time machine (Primer-style, you turn it on and receive an object, then later turn it off and send an object), make a list of all the objects you can receive and what each of them can lead to in the next iteration of the loop. This structure is called a Markov chain. Given the entire structure of the chain, can you deduce what probability you have of experiencing each possibility?
Taking your original example, there are only 2 states the timeline can be in:
A: Nothing arrives from the future. You toss a coin to decide whether to go back in time. Next state: A (50% chance) or B (50% chance)
B: A murderous future self arrives from the future. You and he get into a fight, and don’t send anything back. Next state: A (100% chance).
Is there a way to calculate from this what the probability of actually getting a murderous future self is when you turn on the time machine?
I’m inclined to assume it would be a stationary distribution of the chain, if one exists. That is to say, one where the probability distribution of the “next” timeline is the same as the probability distribution of the “current” timeline. In this case, that would be (A: 2⁄3, B: 1⁄3). (Your result of (A: 4⁄5, B: 1⁄5) seems strange to me: half of the people in A will become killers, and they’re equal in number to their victims in B.)
There are certain conditions that a Markov chain needs to have for a stationary distribution to exist. I looked them up. A chain with a finite number of states (so no infinitely dense CDs for me :( ) fits the bill as long as every state eventually leads to every other, possibly indirectly (i.e. it’s irreducible). So in the first scenario, I’ll receive a CD with a number between 0 and N distributed uniformly. The second scenario isn’t irreducible (if the “first” timeline has a CD with value X, it’s impossible to ever get a CD with value Y in any subsequent timeline), so I guess there needs to be a chance of the CD becoming corrupted to a different value or the time machine exploding before I can send the CD back or something like that.
Teal deer: This model works but the probability of experiencing each outcome can easily depend on the tiny chance of an unexpected outcome. I like it a lot because it’s more intuitive than NSCP but the structure makes more sense than branching-multiverse. I may have to steal it if I ever write a time-travel story.
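The (A: 2⁄3, B: 1⁄3) stationary distribution above can be verified directly from the balance equations (a sketch with exact fractions):

```python
from fractions import Fraction

half, one = Fraction(1, 2), Fraction(1)

# Transitions: A -> A or B with p = 1/2 each; B -> A with certainty.
# Stationary condition: pi_A = (1/2) pi_A + pi_B with pi_A + pi_B = 1,
# which solves to pi_B = 1/3.
pi_B = Fraction(1, 3)
pi_A = one - pi_B

assert pi_A * half + pi_B * one == pi_A   # inflow to A equals pi_A
assert pi_A * half == pi_B                # inflow to B equals pi_B
print(pi_A, pi_B)  # 2/3 1/3
```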
My original comment had two examples, one had no coinflips, and the other had two coinflips. You seem to be talking about some other scenario which has one coinflip?
The structure I have in mind is a branching tree of time, where each branch has a measure. The root (the moment before any occurrences of time travel) has measure 1, and the measure of each branch is the sum of measures of its descendants. An additional law is that measure is “conserved” through time travel, i.e. when a version of you existing in a branch with measure p travels into the past, the past branches at the point of your arrival, so that your influence is confined to a branch of measure p (which may or may not eventually flow into the branch you came from, depending on other factors). So for example if you’re travelling to prevent a disaster that happened in your past, your chance of success is no higher than the chance of the disaster happening in the first place.
In the scenarios I have looked at, these conditions yield enough linear equations to pin down the measure of each branch, with no need to go through Markov chains. But the general case of multiple time travelers gets kinda hard to reason about. Maybe Markov chains can give a proof for that case as well?
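To make the “no need for Markov chains” point concrete, here’s a toy instance of the disaster-prevention example (my own construction, under the conservation rule above), where a single linear equation pins down the measures:

```python
# Toy instance of the disaster-prevention example under measure conservation.
# Let q be the natural chance of the disaster and m the total measure of
# branches in which the disaster occurs.  Everyone in a disaster branch
# travels back, so measure m arrives in the past and forks off an "arrival"
# branch of measure m where the disaster is surely prevented; in the
# remaining branch, of measure 1 - m, the disaster happens with chance q.
# Self-consistency gives one linear equation:  m = q * (1 - m).

q = 0.1              # natural probability of the disaster
m = q / (1 + q)      # solution of m = q * (1 - m)

assert abs(m - q * (1 - m)) < 1e-12   # the fixed point checks out

# The measure of "successfully prevented" branches equals the arrival
# measure m = q/(1+q) <= q: success is no likelier than the disaster
# happening in the first place, as claimed.
print(m)   # 0.0909...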
Since each time-travel event forks the universe, with multiple time travelers it’s a question of whether the second time-traveler is “fork-traveling” as well.
In the first scenario, the answer seems to depend on the chance of you failing to resend the CD. In the second, on the chance of you deciding to send a CD even if you haven’t received anything. So as long as you can’t make these probabilities literally zero, I think the system can still be made to work.
And yeah, seconding A113′s recommendation of “Be Here Now”. That story, along with the movie Primer, was my inspiration for the model.
Disagree. This example depends fundamentally on having infinite storage density.
Edit: would whoever downvoted this care to provide an example with finite storage density?
You can apply the brute-force/postselection method to CGoL without time travel too… But in that case verifying that a proposed history obeys the laws of CGoL involves all the same arithmetic ops as simulating forwards from the initial state. (The ops can, but don’t have to, be in the same order.) Likewise if there are any linear-time subregions of CGoL + time travel. So I might guess that the execution of such a filter could generate observers in some of the rejected worlds too.
There are laws of which verification is easier than simulation, but CGoL isn’t one of them.
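A minimal sketch of the point (grid size and toroidal wrapping are my choices for the demo): the history-verifier below literally calls the forward simulator, so checking a candidate history performs exactly the arithmetic of simulating it once.

```python
# On a small toroidal grid, checking that a proposed history obeys the Game
# of Life rule uses the same neighbor counts as simulating forward -- the
# verifier below just replays the simulator step by step.

W = H = 8  # small toroidal grid (my choice for the demo)

def life_step(grid):
    """One forward step of Conway's Game of Life (torus wrap-around)."""
    nxt = [[0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            n = sum(grid[(r + dr) % H][(c + dc) % W]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

def verify_history(history):
    """Accept a candidate history iff every frame follows from the previous.

    Note: this performs exactly the arithmetic of one forward simulation."""
    return all(life_step(a) == b for a, b in zip(history, history[1:]))

# A blinker is a period-2 oscillator, so this 3-frame history should verify.
blinker = [[0] * W for _ in range(H)]
for c in (2, 3, 4):
    blinker[3][c] = 1
history = [blinker, life_step(blinker), blinker]
print(verify_history(history))   # True
```

So any postselection filter that rejects inconsistent CGoL histories has already done the computational work of running the consistent ones forward.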
Re your checking method to construct/simulate an acausal universe: it won’t work, near as I can tell.
Specifically, the very act of verifying a string to be a Life (or Life + time travel or whatever) history requires actually computing the CA rules, doesn’t it? So in the act of verification, if nothing else, all the computation needed for a string that contains minds to actually contain those minds would have to occur, near as I can make out.
One model for time travel might be a two-dimensional piece of paper with a path or paths drawn wiggling around on it. If you scan a “current moment” line across the plane, then you see points dancing. If a line and its wiggles are approximately perpendicular to the line of the current moment, then the dancing is local and perhaps physical. Time travel would be a sigmoid line: first the “spontaneous” creation of a pair of points, then the cancellation of one (“reversed”) point with the original point.
An alternative story is of a line next to a loop—spontaneous creation of two “virtual” particles, one reversed, followed by those two cancelling.
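The scan-line picture can be made concrete with a toy calculation (the particular Z-shaped worldline below is my own construction): represent the worldline as a polyline in the (x, t) plane and count how many points each “current moment” line sees.

```python
# Toy version of the paper-and-scan-line picture: a worldline is a polyline
# in the (x, t) plane, and the "current moment" is a horizontal line
# t = const.  A time-travelling (sigmoid / Z-shaped) worldline crosses some
# scan lines three times: a pair of points appears, and the "reversed" one
# later cancels with the original.

# Z-shaped worldline: forward in time, then backward, then forward again.
worldline = [(0, 0), (1, 3), (2, 1), (3, 4)]   # (x, t) vertices

def points_at(t, path):
    """How many points of the worldline the scan line t = const sees.

    (Pick t away from vertex times to avoid double-counting endpoints.)"""
    count = 0
    for (x1, t1), (x2, t2) in zip(path, path[1:]):
        if min(t1, t2) <= t <= max(t1, t2):
            count += 1
    return count

for t in (0.5, 2.0, 3.5):
    print(t, points_at(t, worldline))
# t = 0.5 -> 1 point  (one ordinary particle)
# t = 2.0 -> 3 points (a pair has been "created"; one is the reversed segment)
# t = 3.5 -> 1 point  (the reversed point has cancelled with the original)
```

Locally every scan line just sees points appearing in pairs and annihilating in pairs, which is the sense in which the picture looks causal from inside.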
Would J. K. Rowling’s book be causal if we add to the lore that Time-Turners are well understood to cause (unasked) “virtual bearers” very like their bearers? The virtual bearers could be reified by the real bearer using the Time-Turner, or, if they are not reified, they will “cancel” themselves by using their own, virtual, Time-Turner.
I think this addition changes the Time-Turner from a global/acausal constraint on possible universes to a local/causal constraint, but I could very well be mistaken. Note that the reversed person is presumably invisible, but persuasive—perhaps they are a disjunctive geas of some sort.