We are probably in a historical simulation. Most historical simulations are not of everyone but just of historically important people. Update on this hypothesis to increase your estimate that your life is historically significant. Look for clues as to why you might be important. For all of us it might be that Eliezer succeeds and we are one of the 10^(big number) simulations of his life and everything surrounding him.
One consequence for ethics in this case is that you can create conscious beings by performing interactions equivalent to Turing tests on the people you come into contact with. Bonus points for spreading this meme to bring lots of conscious beings into existence (and put a heavy load on the simulator).
But wouldn’t increasing load on the simulator increase the chances of the simulation being turned off, thus negating ALL the conscious and potentially conscious beings it was simulating?
Yeah, I wondered to what degree that could be optimized. But if you interact repeatedly and in complex ways, then shouldn’t you notice that? Kind of a long-duration Turing test.
I understand why most historical simulations would be of historically important people, but why would most or even a lot of simulations be historical simulations?
The set of all simulations is irrelevant in this case. What matters for us is the set of simulations that match our observations. For this set, historical simulations of various forms are naturally expected to predominate.
The past can’t simulate the future, so we must be in a sim from a future timeline. Loosely speaking, this leaves open historical sims and ‘fictional’ sims. From the inside they may be hard to differentiate (consider that Harry Potter’s world looks historical from his perspective, etc.)
If multiple levels of sim are likely, I have a simple argument that fictional sims are more likely than you’d think: for us to be in a historical sim with respect to the root physical universe, every sim level in the stack must be historical. If even one sim in the tree/stack/chain is fictional, then everything below that level is also fictional.
So ‘fiction’ is something that only increases with sim levels.
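The compounding effect is easy to make concrete. A toy sketch in Python (the 0.9 per-level faithfulness probability is purely a made-up assumption for illustration, not a figure from this thread):

```python
# Toy assumption: each simulation layer is "historical" (faithful to its
# parent universe) with probability 0.9, independently of the others.
p_historical = 0.9

# We are historical with respect to the ROOT universe only if every layer
# above us is historical; a single fictional layer anywhere above us makes
# everything below it fictional as well.
for depth in [1, 2, 5, 10]:
    p_root_historical = p_historical ** depth
    print(f"depth {depth:2d}: P(historical w.r.t. root) = {p_root_historical:.3f}")
```

Even with fairly faithful simulators at every level, ten levels of nesting leave only about a 35% chance of being historical all the way down to the root.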
See this. Basically, if the future goes well it will have lots of computing power, and if even a tiny fraction of this power is used to make historical simulations, most people in our situation will be living in historical simulations.
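The arithmetic behind “most people in our situation” can be sketched with deliberately invented numbers (none of these figures come from the linked post; they only show how the ratio works):

```python
# All three quantities below are invented for illustration.
real_people = 1e11        # everyone who actually lives through this era
sims_run = 1e6            # historical simulations a well-resourced future runs
people_per_sim = 1e9      # conscious observers per simulation run

simulated_people = sims_run * people_per_sim  # 1e15 simulated observers
p_simulated = simulated_people / (simulated_people + real_people)
print(p_simulated)  # ~0.9999: observers in our situation are overwhelmingly simulated
```

With anything like these ratios, the simulated copies outnumber the originals by four orders of magnitude, which is the whole force of the argument.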
Bostrom’s paper doesn’t purport to show that we are probably in a simulation, but only the weaker claim that one of these things is true:
Humanity is likely to fail to develop to superpowered “post-human” levels.
Conditional on attaining superpowered post-human civilization, humanity is unlikely to run a lot of historical simulations.
We are probably in a historical simulation.
(Bostrom puts it slightly differently; I think what I’ve written above is clearer and has fewer little holes.)
You will observe that this argument is more or less a triviality; Bostrom’s contribution is thinking of making such an argument rather than filling in difficult steps in the reasoning once the argument is thought of.
I confess that my own response to this is indifference; I think there’s a very good chance that the sort of computational superpowers needed to run a lot of faithful historical simulations will never be ours, and I don’t see why a post-human civilization would bother to run a lot of simulations of their ancestors, so the most the argument can tell me is that it’s not completely impossible that I might be in a simulation. Fair enough, but so what?
(Elaborating on that not-seeing-why: it’s not very clear why our posthuman successors would bother running any ancestor-simulations, but to get “I’m probably in a simulation” out of Bostrom’s argument what’s necessary is either that the bit of my life I’m experiencing right now has been simulated not just once but many many times, or else that the posthumans are going to simulate not only their actual ancestors but many many people very like their ancestors in situations similar to their ancestors’. I see no reason to expect either of those.)
and I don’t see why a post-human civilization would bother to run a lot of simulations of their ancestors
Have you heard of the Resurrection? In many belief systems (specifically those of the Western/Middle-Eastern flavour) it is the greatest goal that humanity could ever achieve. Historical simulation could implement it—in fact it is the only way to implement it.
Go find an average Christian or Muslim or other believer-in-the-Resurrection, and say to them “Great news! You and your friends and family are indeed going to be raised from the dead. What’ll happen is that you’ll get to live exactly the same life you’re living now all over again. You will have no recollection of having lived it before, suffering and disease and so on will be the same as ever, and you’ll die at the end. Isn’t that great?”
If they take you seriously enough to bother answering at all, do you really think they’ll say “Yeah, that’s exactly what I’m currently hoping for”?
I think that jacob_cannell’s implication was this but without “you’ll die at the end.” You die at the end physically but the point of the simulation is to obtain your mental state at the end of your life, so you can transfer that to heaven.
(I don’t believe there will ever be any possibility of rerunning a particular human being’s life in any manner that would be even close to his actual life.)
Yeah, I wondered about that. But I don’t think it makes sense. If you can get enough information about particular ancestors to simulate them (as opposed to simulating other people who happen to resemble them) then surely you have enough to put them directly in heaven / paradise / the New Jerusalem / whatever.
I don’t believe there will ever be any possibility of rerunning a particular human being’s life
I’m inclined to agree. But, since I am the person I am largely because of the life I’ve lived, how can running a simulation that doesn’t replicate my life help to determine the proper mental state to send me to heaven with?
Let me try to imagine the process working as well as possible. I’ve kept a journal for the past ten years, and screenshots of my computer every 30 seconds for nearly the last four (as well as webcam shots that can indicate exactly when I was and was not present at the computer). If someone were to simulate me they would have to simulate someone who went through the experiences and thoughts described in the journal, and who used his computer in the way implied by the screen and webcam shots.
Does all that info actually imply that someone could simply describe my current state, or would you get something more accurate by such a simulation? Perhaps an AI could simply use the info to directly produce a current state, but how would it do that, without simulating something like a process that passes through all that info? In other words, it’s not clear to me that a simulation couldn’t help.
Regarding the last point, basically I was saying that ultimately I don’t expect jacob_cannell’s idea to work, but I don’t think it is unintelligible.
OK, so if I’m understanding correctly your suggestion is that in order to reconstruct your mind it would be necessary to do lots of simulations of you-like minds in order to adjust the (unfathomably many) parameters to find a mind that behaves in the right ways. I concede that that might be so.
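That search process can be made concrete with an entirely hypothetical toy “mind” model (real minds would have unfathomably many parameters, and the record-matching function here is invented purely for illustration):

```python
import random

random.seed(0)

# Hypothetical stand-in: a "mind" is a short parameter vector, and running
# its life produces a record (here a deterministic toy function; in the
# thought experiment this would be a full simulation of the person).
def life_record(params):
    return [round(p * i) % 7 for i, p in enumerate(params, start=1)]

target_record = [3, 1, 6, 2, 5]   # the surviving journal/screenshot evidence

def mismatch(record):
    return sum(a != b for a, b in zip(record, target_record))

# Simulate many candidate minds with varied parameters; keep whichever one
# best reproduces the recorded life. Each candidate is a "you-like mind"
# that gets (partially) lived through during the search.
best_params, best_err, candidates_simulated = None, float("inf"), 0
while best_err > 0 and candidates_simulated < 200_000:
    candidate = [random.uniform(0, 10) for _ in range(len(target_record))]
    err = mismatch(life_record(candidate))
    candidates_simulated += 1
    if err < best_err:
        best_params, best_err = candidate, err

print(candidates_simulated, best_err)
```

The point the toy search makes vivid: even this five-parameter caricature typically burns through thousands of candidate “lives” before one matches the record, which is exactly what drives the moral worry below about billions of near-miss simulations.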
It’s an interesting (and disturbing) idea because it suggests that (little bits of?) our lives might be simulated billions of times, with small variations, in the process of trying to reconstruct us. (If, that is, anyone is so interested in reconstructing us at all.) This seems to me to make a big difference to the moral calculus of attempted simulated resurrection—“we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful” sounds like quite a different deal from “we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful—but the reconstruction process will involve billions of simulated minds that more or less closely resemble yours passing through good approximations to all the events of your life that we could find out about”, and I’d be much less happy about the latter.
I have to say that it seems unlikely that enough information exists to do the reconstruction for anyone—even people who save as much information about themselves as you do, which most of us don’t. I mean, in some sense maybe it’s still there since everything we do has effects on everything else in our future light cone, but I’d expect the information to be unusable in something like the way that energy becomes unusable when it turns into waste heat in rough thermal equilibrium with its surroundings.
Yes, there could be moral objections to such a process apart from its likeliness of success. And I agree that there is unlikely to be enough information for it to work in any case.
Why “at the end of … life”? If you’re simulating someone, what’s special about a particular point when the physical body died?
The point at which someone dies is the point at which their mind no longer causally affects the simulation. Naturally they can be copied out before then, but historical accuracy requires at least one version to remain in the sim until death.
And why should the AI care about historical accuracy?
I guess the real question is the difference between minds simulated on the basis of historical data (=”previously existing”) and minds simulated de novo, just plausible human minds invented out of thin air. Why should the AI favour previously existing minds?
And why should the AI care about historical accuracy?
We are assuming an FAI. The FAI cares about historical accuracy to the degree that people care about resurrecting accurate versions of dead family/friends/ancestors, where accuracy is subjective and relative to memories and beliefs.
More generally, the resources available will determine some finite number of minds that can be created. Some individuals will choose to create lots of ‘children’ (generalized to include de novo minds), some will choose to resurrect lots of ancestors, others will choose to use resources only to expand/clone their existing mind, many will probably choose some mix.
The FAI cares about historical accuracy to the degree that people care about resurrecting accurate versions of dead family/friends/ancestors, where accuracy is subjective and relative to memories and beliefs.
Oh, boy, that’s such a can of worms. Let’s resurrect grandpa, except we’ll delete some features of him that we don’t like and try to forget about. Or let’s resurrect my girlfriend from college but let’s make her a nympho.
I would venture a guess that people rarely care about accurate versions of dead people, they would prefer improved ones.
All in all, this just looks like a silicon version of ancestor worship. If you venerate your ancestors or, say, if you are a Mormon and you convert them to Mormonism, isn’t that acausal trade in practice? They begat you, you do things for their souls...
Let’s resurrect grandpa, except we’ll delete some features of him that we don’t like and try to forget about. Or let’s resurrect my girlfriend from college but let’s make her a nympho.
Other friends/family/descendants—as well as society in general—are unlikely to want these changes.
I would venture a guess that people rarely care about accurate versions of dead people, they would prefer improved ones.
People alive today will want accurate versions of themselves to exist in the future. Society/future FAI will also consider this.
All in all, this just looks like a silicon version of ancestor worship.
Other friends/family/descendants—as well as society in general—are unlikely to want these changes
Really? Is there anyone who would prefer an incontinent grandpa raving about today’s degeneracy which the Good Lord will burn out? Or a grandpa who lived to a really advanced stage of Alzheimer’s?
Avoid naive pattern matching.
Oh, I do, I do :-) I pick insightful pattern matching instead.
If I’m being simulated, I have already been “resurrected”. But what is the point of resurrection? You yourself say “so you can transfer … to heaven” and given that, what is the reason for running the simulation at all instead of not collecting $200 and going directly to heaven?
If the simulation isn’t run all the way through, the simulators couldn’t be sure they were resurrecting you instead of someone else (since the mind they were simulating might suddenly have started to do things that you wouldn’t have done, if they had continued the simulation). For example, suppose they base the simulation in part on your Less Wrong comments. If they manage to produce a mind that produces the first half of your comments and then say, “good enough, let’s move that to heaven,” it might be that the mind they put in heaven would have gone on to produce a second set of comments totally different from the real ones that you made. So it would end up being someone else in heaven, not you.
I, a year ago, was a slightly different person than I am now. Both past-me and current-me are me. You are essentially saying that me-who-died is the version that should go to heaven and all the previous versions should not. Why?
We can also reverse the issue—in a simulation, I don’t have to die. If I am hit by a bus, insert a one-minute delay somewhere and the simulated-I will continue to live. Should that longer-lived version go to heaven, then?
Historical consistency—an intervention like that quickly leads to a fictional world that is ranked low in the resurrection utility function (because people from that fictional world don’t go on to create the actual future resurrection).
In part, but there can also be regular causal trades between simulators within each world. For example, a future simulator physically located in, say, China will necessarily be separate from one located in Canada. These simulators can trade in the more regular sense.
You die at the end physically but the point of the simulation is to obtain your mental state at the end of your life, so you can transfer that to heaven.
Right. The simulation is the forward time sweep of an inference engine recreating historical people for the purpose of future resurrection.
(I don’t believe there will ever be any possibility of rerunning a particular human being’s life in any manner that would be even close to his actual life.)
If humanity survives to singularity level superintelligence, it’s a rather obvious possibility. Doesn’t even require any advanced violations of physics. It’s actually a nearer term tech than most people think—the simplest forms of it will be possible not long after AGI.
It depends of course on one’s definition of ‘close’ and the currently available information. Identity is subjective though—and that is what makes the approach viable. There is no such thing as the singular canonical correct version of a person. We are distributions over mindspace across the multiverse.
We are distributions over mindspace across the multiverse.
I am a distribution over mindspace..? across the multiverse..? Funny, I don’t feel like a distribution. Do you have any evidence to support that, or is it just word salad?
Identity in general can refer to current self, past self, and future selves all as the same ‘person’. That is a set. Mindspace is just the space of all possible minds, so the person-defining set is a distribution over mindspace.
I’m using ‘multiverse’ in the most general sense (nothing QM specific) to refer to all possible universes/futures etc.
Sure, although it doesn’t have much temporal evolution.
But still—for some specific rock, we can’t describe/model/understand it exactly, so we specify it abstractly in a compressed form, said compressed form specifies a distribution over the space of rock-like objects—rockspace.
You say “we can’t describe/model/understand it exactly, so we specify it abstractly”—does that mean we’re talking solely about maps and not about the territory?
What exactly do you mean by a “distribution”? Is it a probability distribution? You made the argument that as things move through time, they are a set of past, present, and (hopefully) future states. Since time is unidirectional, we might even call that an ordered set, a sequence. But a sequence is not a distribution.
The approach “X is a distribution over X-space across the multiverse” seems to be applicable to absolutely everything. If that is so, what is the use of this approach?
Probability distributions can be defined as subsets over possible logical worlds.
The approach “X is a distribution over X-space across the multiverse” seems to be applicable to absolutely everything. If that is so, what is the use of this approach?
Although general, it’s not a typical everyday mode of thought. I invoked it specifically in response to the parent comment:
(I don’t believe there will ever be any possibility of rerunning a particular human being’s life in any manner that would be even close to his actual life.)
It depends of course on one’s definition of ‘close’ and the currently available information. Identity is subjective though—and that is what makes the approach viable. There is no such thing as the singular canonical correct version of a person. We are distributions over mindspace across the multiverse.
So in some future resurrection, there would potentially be many versions of each mind from different possible worlds. Across the multiverse, many different simulators will recreate many different past historical sims. Each simulation doesn’t need to exactly recreate its own specific history, as long as it recreates a specific history. Instead, success simply requires adequate coverage across the space of all sims over the multiverse. If you aren’t thinking in terms of distributions over mindspace across the multiverse, you can’t really understand or reason about these concepts.
Probability distributions can be defined as subsets over possible logical worlds.
I still don’t understand.
Let’s drastically simplify things. Consider an ordered set of two mes—me one minute ago and me now. In which sense this set is a probability distribution? What does it mean?
So in some future resurrection, there would be potentially many versions of each mind from different possible worlds. Across the multiverse, many different simulators will recreate many different past historical sims. … success simply requires adequate coverage across the space of all sims over the multiverse
So are you arguing that future resurrections will be, basically, a brute-force approach? In the sense of “We can’t be sure whether A or B happened, so we’ll simulate both A and B branches”? That doesn’t require much in the way of sophisticated concepts, it’s sufficient to see it as exhaustive search, I think.
Also, what counts as “success” and what are the incentives and consequences for succeeding or failing?
Consider an ordered set of two mes—me one minute ago and me now. In which sense this set is a probability distribution? What does it mean?
In the sense that everything is—we have uncertainty over the physical configurations.
So are you arguing that future resurrections will be, basically, a brute-force approach?
No.
In the sense of “We can’t be sure whether A or B happened, so we’ll simulate both A and B branches”?
That’s just how complex multi-modal inference works in general. The multiverse complexity comes in from realizing that it is the whole set of similar future worlds creating past simulations.
in the sense that everything is—we have uncertainty over the physical configurations.
But this has nothing to do with physical configurations. We have a set of two things—to make things even simpler, let’s make it a rock—that differ in time. Unless you’re going to posit some Time Lord who soars above the time line, assigning probabilities to time snapshots does not make any sense to me.
True, but I think there are reasons beyond mere lack of capability why those games don’t involve neuron-level simulation of billions of specific past people.
We are probably in a historical simulation. Most historical simulations are not of everyone but just of historically important people. Update on this hypothesis to increase your estimate that your life is historically significant. Look for clues as to why you might be important. For all of us it might be that Eliezer succeeds and we are one of the 10^(big number) simulations of his life and everything surrounding him.
Simulation argument case 3 obviously.
One consequence for ethics in this case is that you can create conscious beings by performing interactions equivalent to Turing tests on the people you come into contact with. Bonus points for spreading this meme to bring lots of conscious beings into existence (and put a heavy load on the simulator).
But wouldn’t increasing load on the simulator increase the chances of the simulation being turned off, thus negating ALL the conscious and potentially conscious beings it was simulating?
That’s exactly what an agent of the simulator would say.
Cue the rooftop chase.
But just like HPMOR’s hat, the conscious being might switch back to nonsentience once the interaction ends.
Yeah, I wondered to what degree that could be optimized. But if you interact repeatedly and in complex ways, then shouldn’t you notice that? Kind of a long-duration Turing test.
Hm, I wonder what the best place to find really happy people is?
Could you elaborate whether you mean in general, in simulations, or elsewhere? And how is this related to my comment?
The thought was to induce the simulation of good experiences by being in close proximity to happy people.
Ah yes. Interesting idea. But I think it only ‘counts’ if the happiness is conscious. One has to work a bit harder for that.
I just published a simulation map, in which I conclude that most likely I live in a one-person me-simulation of the period near AI creation. In fact, there are two possible variants:
This is a simulation of Eliezer’s life, and I am just one of the thousands of people who are simulated in it with enough detail to be conscious observers.
It is a me-only simulation, where I am the only really simulated observer and the others are p-zombies and simplified models.
Hypothesis 2 is favoured by a kind of power law in the world of simulations, which says that simpler and cheaper simulations are more abundant (e.g. there are more novels than movies in our world). But if that is true I should be doing something really important in FAI or other x-risk topics. I have done many things, like a map of x-risks prevention, but that is not enough to be simulated.
The simulation map:
http://lesswrong.com/r/discussion/lw/mv0/simulations_map_what_is_the_most_probable_type_of/
I’m surprised you think he actually has a high chance of creating AGI.
EY was only an example here. There are now so many players in the field that AGI will probably be created by someone else. And it also seems that he is not working on coding AI.
Seems to be implying that you are more likely to be in a simulation if historically important. Interesting.
Evidence?
In most stories, the majority of the population are NPCs.
There’s a paper on this called the “simulation argument”. It’s not evidence-based but logic-based.
Why “at the end of … life”? If you’re simulating someone, what’s special about a particular point when the physical body died?
The point at which someone dies is the point at which their mind no longer causally effects the simulation. Naturally they can be copied out before then, but historical accuracy requires at least one version to remain in the sim until death.
And why should the AI care about historical accuracy?
I guess the real question is the difference between minds simulated on the basis of historical data (=”previously existing”) and minds simulated de novo, just plausible human minds invented out of thin air. Why should the AI favour previously existing minds?
BTW, affects the simulation, not effects.
We are assuming an FAI. The FAI cares about historical accuracy to the degree that people care about resurrecting accurate versions of dead family/friends/ancestors, where accuracy is subjective and relative to memories and beliefs.
More generally, the resources available will determine some finite number of minds that can be created. Some individuals will choose to create lots of ‘children’ (generalized to include de novo minds), some will choose to resurrect lots of ancestors, others will choose to use resources only to expand/clone their existing mind, many will probably choose some mix.
Oh, boy, that’s such a can of worms. Let’s resurrect grandpa, except we’ll delete some features of him that we don’t like and try to forget about. Or let’s resurrect my girlfriend from college but let’s make her a nympho.
I would venture a guess that people rarely care about accurate versions of dead people, they would prefer improved ones.
All in all, this just looks like a silicon version of ancestor worship. If you venerate your ancestors or, say, if you are a Mormon you convert them to Mormonism, isn’t that acausal trade in practice? They begat you, you do things for their souls...
Other friends/family/descendants—as well as society in general—are unlikely to want these changes.
People alive today will want accurate versions of themselves to exist in the future. Society/future FAI will also consider this.
Avoid naive pattern matching.
Really? Is there anyone who would prefer an incontinent grandpa raving about today’s degeneracy which the Good Lord will burn out? Or a grandpa who lived to a really advanced stage of Alzheimer’s?
Oh, I do, I do :-) I pick insightful pattern matching instead.
Because that’s the time when you would want to be resurrected.
If I’m being simulated, I have already been “resurrected”. But what is the point of resurrection? You yourself say “so you can transfer … to heaven” and given that, what is the reason for running the simulation at all instead of not collecting $200 and going directly to heaven?
If the simulation isn’t run all the way through, the simulators couldn’t be sure they were resurrecting you instead of someone else (since the mind they were simulating might suddenly have started to do other things that you wouldn’t have done, if they had continued the simulation). For example, suppose they base the simulation in part on your Less Wrong comments. If they manage to produce a mind that produces the first half of your comments, then they say, “good enough, let’s move that to heaven,” it might be that the mind they put in heaven would have gone on to produce a second set of comments totally different from the real ones that you made. So it ended up being someone else in heaven, not you.
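The filtering idea in that comment can be sketched as a kind of rejection sampling over candidate minds: run each candidate against the *entire* historical record, not just the first half of it, and keep only the ones that match throughout. Everything below (`simulate_output`, the seed-based candidate generator) is a hypothetical toy stand-in, not anything the thread specifies.

```python
import random

def simulate_output(mind_seed, n_steps):
    """Toy stand-in for running a candidate mind: deterministic pseudo-output."""
    rng = random.Random(mind_seed)
    return [rng.randrange(4) for _ in range(n_steps)]

def reconstruct(historical_record, candidate_seeds):
    """Keep only candidates that reproduce the *entire* record,
    not merely a prefix of it."""
    n = len(historical_record)
    return [s for s in candidate_seeds
            if simulate_output(s, n) == historical_record]

# The "real" person's recorded comments, and a search over candidate minds.
record = simulate_output(mind_seed=42, n_steps=8)
survivors = reconstruct(record, candidate_seeds=range(1000))
assert 42 in survivors  # the true mind always passes its own record
```

A candidate that matched only the first half of `record` would be rejected here, which is exactly the worry the comment raises about stopping the simulation early.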
That goes to the issue of who is “you”.
I, a year ago, was a slightly different person than I am now. Both past-me and current-me are me. You are essentially saying that me-who-died is the version that should go to heaven and all the previous versions should not. Why?
We can also reverse the issue—in a simulation, I don’t have to die. If I am hit by a bus, insert a one-minute delay somewhere and the simulated-I will continue to live. Should that longer-lived version go to heaven, then?
Historical consistency—an intervention like that quickly leads to a fictional world that is ranked low in the res utility function (because people from that fictional world don’t go on to create the actual future resurrection).
So is this whole “res utility function” based on obligations arising out of acausal trades?
In part, but there can also be regular causal trades between simulators within each world. For example, a future simulation physically located in, say, China will necessarily be separate from one located in Canada. These simulators can trade in the more regular sense.
Right. The simulation is the forward time sweep of an inference engine recreating historical people for the purpose of future resurrection.
If humanity survives to singularity level superintelligence, it’s a rather obvious possibility. Doesn’t even require any advanced violations of physics. It’s actually a nearer term tech than most people think—the simplest forms of it will be possible not long after AGI.
It depends of course on one’s definition of ‘close’ and the currently available information. Identity is subjective though—and that is what makes the approach viable. There is no such thing as the singular canonical correct version of a person. We are distributions over mindspace across the multiverse.
I am a distribution over mindspace..? across the multiverse..? Funny, I don’t feel like a distribution. Do you have any evidence to support that, or is it just word salad?
Identity in general can refer to current self, past self, and future selves all as the same ‘person’. That is a set. Mindspace is just the space of all possible minds, so the person-defining set is a distribution over mindspace.
I’m using ‘multiverse’ in the most general sense (nothing QM specific) to refer to all possible universes/futures etc.
In the same way, is a rock a distribution over rockspace across the multiverse?
Sure, although it doesn’t have much temporal evolution.
But still—for some specific rock, we can’t describe/model/understand it exactly, so we specify it abstractly in a compressed form, said compressed form specifies a distribution over the space of rock-like objects—rockspace.
A few follow-on questions, then.
You say “we can’t describe/model/understand it exactly, so we specify it abstractly”—does that mean we’re talking solely about maps and not about the territory?
What exactly do you mean by a “distribution”? Is it a probability distribution? You made the argument that as things move through time, they are a set of past, present, and (hopefully) future states. Since time is unidirectional, we might even call that an ordered set, a sequence. But a sequence is not a distribution.
The approach “X is a distribution over X-space across the multiverse” seems to be applicable to absolutely everything. If that is so, what is the use of this approach?
Probability distributions can be defined as subsets over possible logical worlds.
Although general, it’s not a typical everyday mode of thought. I invoked it specifically in response to the parent comment:
So in some future resurrection, there would be potentially many versions of each mind from different possible worlds. Across the multiverse, many different simulators will recreate many different past historical sims. Each simulation doesn’t need to exactly recreate its own specific history as long as it recreates a specific history. Instead, success simply requires adequate coverage across the space of all sims over the multiverse. If you aren’t thinking in terms of distributions over mindspace across the multiverse, you can’t really understand or reason about these concepts.
I still don’t understand.
Let’s drastically simplify things. Consider an ordered set of two mes—me one minute ago and me now. In what sense is this set a probability distribution? What does it mean?
So are you arguing that future resurrections will be, basically, a brute-force approach? In the sense of “We can’t be sure whether A or B happened, so we’ll simulate both A and B branches”? That doesn’t require much in the way of sophisticated concepts, it’s sufficient to see it as exhaustive search, I think.
Also, what counts as “success” and what are the incentives and consequences for succeeding or failing?
In the sense that everything is—we have uncertainty over the physical configurations.
No.
That’s just how complex multi-modal inference works in general. The multiverse complexity comes in from realizing that it is the whole set of similar future worlds creating past simulations.
But this has nothing to do with physical configurations. We have a set of two things—to make things even simpler, let’s make it a rock—that differ in time. Unless you’re going to posit some Time Lord who soars above the time line, assigning probabilities to time snapshots does not make any sense to me.
Lots of people today play video games that contain characters from the past.
True, but I think there are reasons beyond mere lack of capability why those games don’t involve neuron-level simulation of billions of specific past people.
Not if you weight each character by the number of words he or she speaks.