Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.
I think my own thoughts on this are a bit like Bostrom’s skepticism of the simulation hypothesis: I do not think it is likely, but I find it interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” reveals some odd properties of evidence. It is a bit like standing outside the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased by ~$1,000; if I see someone walk out with one unopened box, I expect it has increased by ~$1,000,000. That is sort of odd, isn’t it? Likewise, if I see a small, dedicated group of people working out how they would structure simulations, and raising money and trusts to push things a certain political way in the future (laws requiring that all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information-gathering purposes without “retiring” the people to a heaven afterward, etc.), I have more reason to think I might get a heaven after I die.
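To make the evidential oddity in the Omega example concrete, here is a minimal sketch of the bookkeeping. It assumes Omega is a near-perfect predictor; the 0.99 accuracy figure is my own illustrative number, not part of the standard setup:

```python
# Expected winnings conditional on what you observe someone carrying out,
# assuming Omega is a near-perfect predictor. The 0.99 accuracy is an
# illustrative assumption.
PREDICTOR_ACCURACY = 0.99
SMALL, LARGE = 1_000, 1_000_000

def expected_winnings(boxes_taken: int) -> float:
    """Expected payout given that the person took one box or two."""
    if boxes_taken == 2:
        # Omega almost certainly predicted two-boxing and left the big box empty.
        return SMALL + (1 - PREDICTOR_ACCURACY) * LARGE
    # Omega almost certainly predicted one-boxing and filled the big box.
    return PREDICTOR_ACCURACY * LARGE

print(expected_winnings(2))  # ~11,000: seeing two boxes is bad news
print(expected_winnings(1))  # ~990,000: seeing one box is good news
```

The observation itself carries the evidence: merely seeing which choice was made changes what I should expect to be inside the boxes, even though my seeing it causes nothing.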
As for the “call to action,” I hope my post was not really read that way; I could have been clearer, and I apologize. I think that running simulations followed by an afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done, for many reasons. It is worth discussing. One could also imagine that, if we overcome and survive the AI intelligence explosion with a good outcome, we might determine that creating more pleasant human lives throughout our cosmological endowment is a worthwhile goal. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question what we might want from a superintelligent AI, and especially whether we might want to not just hand it the question. Coherent extrapolated volition sounds like the best tentative idea, but one we need to be careful with. For example, an AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” of the simulated people into a pleasant afterlife, we might want to specify that in advance; otherwise we are inadvertently reducing our credence in our own afterlife as well. Also, if there is an existing acausal trade regime around heaven simulations (this will be another post later), we might get in trouble for not conforming in advance.
As for simulated hell, I think fear of that possibility makes the simulated-heaven issue even more live. Someone who would like a pleasant afterlife (which is probably almost all of us) might want to take early steps to ensure that such an afterlife is the norm in cases of simulation, and that “hell” is absolutely not permitted. Also, the idea that some people might run bad afterlives should further motivate people to create as many good simulations as possible, to increase the credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You can see why the “loop” aspect of this is kind of interesting, especially for one-boxer types, who try to “act out” the correct outcome after the fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their lives: increasing the probability of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
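For what it is worth, the urn arithmetic is trivial to write down. Everything below (the simulation counts, the utility figure) is an illustrative assumption layered on a uniform self-locating prior over simulations:

```python
# Toy model of the urn metaphor: under a uniform self-locating prior over
# simulations, adding "good" simulations dilutes the chance of being in a
# bad one. All counts and the utility figure are illustrative stand-ins.

def p_bad(bad_sims: int, good_sims: int) -> float:
    """Chance of drawing the black marble, i.e. being in a bad simulation."""
    return bad_sims / (bad_sims + good_sims)

AFTERLIFE_UTILITY = 1e12  # stand-in for a trillion-life-duration afterlife

def expected_afterlife_utility(p_good: float) -> float:
    return p_good * AFTERLIFE_UTILITY

# Pouring white marbles into the urn:
print(p_bad(1, 9))    # 0.1
print(p_bad(1, 999))  # 0.001: more white marbles, lower odds of the black one

# The selfish-rationality arithmetic from the paragraph above:
print(expected_afterlife_utility(0.01) - expected_afterlife_utility(0.001))
# -> 9e9: a tenfold jump in probability dominates almost any earthly cost
```

The real uncertainty is all in the inputs, not the arithmetic, but it shows why even a small shift in credence can dominate a purely selfish expected-utility calculation.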
I am not trying to “sell” this, as I have not even bought it myself; I am just playing with it as a live idea. If nothing else, it seems like it might have some bearing on considerations going forward. I think people’s attitudes and approaches to religion suggest that this could be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality and on our anticipated future.
I have been lurking around LW for a little over a year. I found it indirectly, through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but along something of a wandering path, which I realize can and should be improved with the help, resources, and advice of LW.
I have spent the last few years living and working in developing countries around the world in various public-interest roles, trying to find opportunities to do high-impact work. This was based on a vague and undertheorized consequentialism that has been pretty substantially rethought since finding FHI/MIRI/EA/LW, etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost-effectiveness comparisons between NGOs, etc.), I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and toward thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries, which pulls on my biases, especially when a lot of abstraction is the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming my intuitions and resisting the easily available answer.
I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing human decision making (or our coherent extrapolated volition, etc.) into the process, indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning that if we clear the existential hurdle, this is seemingly the next thing standing between us and the likely truth of being in a simulation. I actually have a very short write-up on this, which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…). I also have much longer notes on a lot of related stuff, which I might turn into posts in the future if, after my first short post, this is interesting to anyone.
I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.
Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.