I have to say, quila, I’m pleasantly surprised that your response above is both plausible and logically coherent—qualities I couldn’t find in any of the Reddit responses. Thank you.
However, I have concerns and questions for you.
Most importantly, I worry that if we’re currently in a simulation, physics and even logic could be entirely different from what they appear to be. If all our senses are illusory, why should our false map align with the territory outside the simulation? A story like your “Mutual Anthropic Capture” offers hope: a logically sound hypothesis in which our understanding of physics is true. But why should it be? Believing that a simulation exactly matches reality sounds to me like the “privileging the hypothesis” fallacy.
By the way, I’m also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it’s a good idea overall, and some subtle modifications to the idea would probably make it logically sound. I won’t bother you about those small issues here, though; I’m more interested in your response to my concern above.
If we have no grasp on anything outside our virtualized reality, all is lost. Therefore I discard my attempts to control those possible worlds.
However, the simulation argument relies on reasoning. For it to go through, a number of assumptions must hold. Those in turn rely on the question: why would we be simulated? It seems to me the main reason is that we’re near a point of high influence in original reality and they want to know what happened—the simulations then are effectively extremely high resolution memories. Therefore, thank those simulating us for the additional units of “existence”, and focus on original reality where there’s influence to be had; that’s why alien or our future superintelligences would care what happened.
Basically, don’t freak out about simulations. It’s not that different from the older concept “history is watching you”. Intense, but not world shatteringly intense.
I think I understand your point. I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we may either accept the argument’s conclusion or discard that assumption. I’m open to either. You seem to be, too—at least at first. Yet, you immediately avoid discarding the assumption for practical reasons:
If we have no grasp on anything outside our virtualized reality, all is lost.
I agree with this statement, and that’s my fear. However, you don’t seem to be bothered by the fact. Why not? The strangest thing is that I think you agree with my claim: “The simulation argument should increase our credence that our entire understanding of everything is flawed.” Yet somehow, that doesn’t frighten you. What do you see that I don’t see? Practical concerns don’t change the territory outside our false world.
Second:
It seems to me the main reason is because we’re near a point of high influence in original reality and they want to know what happened—the simulations then are effectively extremely high resolution memories.
That’s surely possible, but I can imagine hundreds of other stories. In most of those stories, altruism from within the simulation has no effect on those outside it. Even worse, there are some stories in which inflicting pain within a simulation is rewarded outside of it. Here’s a possible hypothetical:
Imagine humans in base reality create friendly AI. To respect their past, the humans ask the AI to create tons of sims living in different eras. Since some historical info was lost, the sims are slightly different from base reality. Therefore, in each sim, there’s a chance AI never becomes aligned. Accounting for this possibility, base reality humans decide to end sims in which AI becomes misaligned and replace those sims with paradise sims where everyone is happy.
In the above scenario, both total and average utilitarianism would recommend intentionally creating misaligned AI so that paradise ensues.
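To make the hypothetical concrete, here is the arithmetic with made-up numbers (all values are purely illustrative assumptions, not claims from the scenario itself):

```python
# Toy scenario: a sim with N inhabitants either keeps running normally
# (AI stays aligned) or is ended and replaced by a paradise sim of the
# same size (AI becomes misaligned). Numbers are invented for illustration.
N = 1_000_000
u_normal = 1.0      # per-person utility of an era lived normally
u_paradise = 100.0  # per-person utility in the replacement paradise sim

# Path A: aligned AI -> sim continues as-is.
total_aligned = N * u_normal

# Path B: misaligned AI -> sim replaced with a paradise sim.
total_misaligned = N * u_paradise

# Total utilitarianism prefers the misalignment path:
assert total_misaligned > total_aligned
# Average utilitarianism agrees, since the populations are equal:
assert total_misaligned / N > total_aligned / N
```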
I’m sure you can craft even more plausible stories.
My point is, even if our understanding of physics and logic is correct, I don’t see why we ought to privilege the hypothesis that simulations are memories. I also don’t see why we ought to privilege the idea that it’s in our interest to increase utility within the simulation. Can you please clarify why you’re so confident about these notions?
We have to infer how reality works somehow.

I’ve been poking at the philosophy of math recently. It really seems like there’s no way to conceive of a universe that is beyond the reach of logic except one that also can’t support life. Classic posts include “The Unreasonable Effectiveness of Mathematics”, “What Numbers Could Not Be”, and a few others. So then we need epistemology.
We can make all sorts of wacky nested simulations, and any interesting ones, ones that can support organisms (that is, ones that are Turing complete), can also support processes for predicting outcomes in that universe, and those processes appear to necessarily need to do reasoning about what is “simple” in some sense in order to work. So that seems to hint that algorithmic information theory isn’t crazy (unless I just hand waved over a dependency loop, which I totally might have done, it’s midnight), which means that we can use the equivalence of Turing complete structures to assume we can infer things about the universe. Maybe not Solomonoff induction, but some form of empirical induction. And then we’ve justified ordinary reasoning about what’s simple.
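As a minimal sketch of what “reasoning about what’s simple” can mean formally: an algorithmic-information-style prior weights each hypothesis by 2^(-description length). The hypothesis names and bit-lengths below are invented for illustration:

```python
from fractions import Fraction

# Hypotheses consistent with some observed data, each tagged with an
# assumed description length in bits (made-up numbers for illustration).
hypotheses = {
    "constant": 3,        # very short program
    "periodic": 7,
    "lookup_table": 20,   # just memorizes the data; long program
}

# Simplicity prior: weight 2^-length, then normalize.
prior = {h: Fraction(1, 2**bits) for h, bits in hypotheses.items()}
Z = sum(prior.values())
posterior = {h: p / Z for h, p in prior.items()}

# The shortest consistent program dominates the posterior:
assert max(posterior, key=posterior.get) == "constant"
assert sum(posterior.values()) == 1
```

This is only the Occam-weighting step, not full Solomonoff induction, matching the hedge in the paragraph above.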
Okay, so we can reason normally about simplicity. What universes produce observers like us and arise from mathematically simple rules? Lots of them, but it seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations’ simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other. Or, they produce us by base physics, and then we get instantiated again later to figure out what we did. Ancestor sims require very good outcomes which seem rare, so those branches are lower measure anyway, but also ancestor sims don’t get to produce super ai separate from the original causal influence.
Point is, no, what’s going on in the simulations is nearly entirely irrelevant. We’re in base physics somewhere. Get your head out of the simulation clouds and choose what you do in base physics, not based on how it affects your simulators’ opinion of the simulation’s moral valence. Leave that sort of crazy stuff to friendly ai, you can’t understand superintelligent simulators which we can’t even get evidence exist besides plausible but very galaxy brain abstract arguments.
(Oh, might be relevant that I’m a halfer when making predictions, thirder when choosing actions—see anthropic decision theory (https://arxiv.org/pdf/1110.6437) for an intuition on that.)
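For anyone unfamiliar with where the halfer and thirder numbers come from, a quick Monte Carlo of the standard Sleeping Beauty toy problem (my own sketch, not part of the original comment) shows both frequencies at once:

```python
import random

# Sleeping Beauty setup: flip a fair coin; heads -> woken once,
# tails -> woken twice (with memory erased between awakenings).
random.seed(0)
runs = 100_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    heads_runs += heads
    heads_awakenings += 1 if heads else 0
    total_awakenings += awakenings

# Halfer number: fraction of experiment-runs that were heads (~1/2).
p_halfer = heads_runs / runs
# Thirder number: fraction of awakenings that occur in heads-runs (~1/3),
# which is the relevant frequency for per-awakening bets/actions.
p_thirder = heads_awakenings / total_awakenings

assert abs(p_halfer - 0.5) < 0.01
assert abs(p_thirder - 1/3) < 0.01
```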
Thank you, I feel inclined to accept that for now.
But I’m still not sure, and I’ll have to think more about this response at some point.
Edit: I’m still on board with what you’re generally saying, but I feel skeptical of one claim:
It seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations’ simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other.
My intuition tells me there will probably be superior methods of gathering information about superintelligent aliens. To me, it seems like the most obvious reason to create sims would be to respect the past for some bizarre ethical reason, or for some weird kind of entertainment, or even to allow future aliens to temporarily live in a more primitive body. Or perhaps for a reason we have yet to understand.
I don’t think any of these scenarios would really change the crux of your argument, but still, can you please justify your claim for my curiosity?
Sims are very cheap compared to space travel, and you need to know what you’re dealing with in quite a lot of detail before you fly, because you want to have mapped the entire space of possible negotiations to an absolutely ridiculous level of detail.
Sims built for this purpose would still be a lot lower detail than reality, but of course that would be indistinguishable from inside if the sim is designed properly. Maybe most kinds of things despawn in the sim when you look away, for example. Only objects which produce an ongoing computation that has influence on the resulting civ would need modeling in detail. Which I suspect would include every human on earth, due to small world effects, the internet, sensitive dependence on initial conditions, etc. Imagine how time travel movies imply the tiniest change can amplify—one needs enough detail to have a good map of that level of thing. Compare weather simulation.
Someone poor in Ghana might die and change the mood of someone working for ai training in Ghana, which subtly affects how the unfriendly AI that goes to space and affects alien civs is produced, or something. Or perhaps there’s an uprising when they try to replace all human workers with robots. Modeling what you thought about now helps predict how good you’ll be at the danceoff in your local town which affects the posts produced as training data on the public internet. Oh, come to think of it, where are we posting, and on what topic? Perhaps they needed to model your life in enough detail to have tight estimates of your posts, because those posts affect what goes on online.
But most of the argument for continuing to model humans seems to me to be the sensitive dependence on initial conditions, because it means you need an unintuitively high level of modeling detail in order to estimate what von Neumann probe wave is produced.
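The sensitive-dependence point can be seen in any chaotic system; for instance the logistic map at r = 4 (a standard textbook example, not from the comment), where a perturbation of one part in 10^12 grows into a macroscopic difference within a few dozen steps:

```python
def trajectory(x, r=4.0, steps=80):
    """Iterate the chaotic logistic map x -> r*x*(1-x), recording each step."""
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

ta = trajectory(0.3)
tb = trajectory(0.3 + 1e-12)  # perturbation far below any plausible sensor noise

# The gap roughly doubles per step, so by ~step 40 the two trajectories
# have fully decorrelated and differ macroscopically:
max_gap = max(abs(p - q) for p, q in zip(ta, tb))
assert max_gap > 0.5
```

This is the same reason weather forecasts degrade: to predict the downstream outcome you need the initial state to an unintuitive level of precision.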
Still cheap—even in base reality earth right now is only taking up a little more energy than its tiny silhouette against the sun’s energy output in all directions. A Kardashev 2 civ would have no problem fuelling an optimized sim with a trillion trillion samples of possible aliens’ origin processes. Probably a superintelligent Kardashev 1 civ even finds it quite cheap; it could take less than Earth’s resources to do the entire sim including all parallel outcomes.
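The silhouette claim checks out on a back-of-envelope calculation, using standard rounded textbook constants:

```python
import math

L_sun = 3.8e26   # solar luminosity, watts
r_earth = 6.4e6  # Earth radius, meters
d = 1.5e11       # Earth-Sun distance, meters

# Fraction of the Sun's total output intercepted by Earth's silhouette:
# disk area pi*r^2 over the full sphere 4*pi*d^2 at Earth's orbit.
fraction = (math.pi * r_earth**2) / (4 * math.pi * d**2)
intercepted = fraction * L_sun

assert fraction < 1e-9            # under a billionth of the Sun's output
assert 1e17 < intercepted < 2e17  # roughly 1.7e17 watts
```

So a civilization harvesting even a modest fraction of one star dwarfs the entire energy budget of present-day Earth by many orders of magnitude.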
I should also add:

I’m pretty worried that we can’t understand the universe “properly” even if we’re in base physics! It’s not yet clearly forbidden that the foundations of philosophy contain unanswerable questions, things where there’s a true answer that affects our universe in ways that are not exposed in any way physically, and can only be referred to by theoretical reasoning; which then relies on how well our philosophy and logic foundations actually have the real universe as a possible referent. Even if they do, things could be annoying. In particular, one possible annoying hypothesis would be if the universe is in Turing machines, but is quantum—then in my opinion that’s very weird, but hey, at least we have a set in which the universe is realizable. Real analysis and some related stuff gives us some idea of things that can be reasoned about from within a computation-based understanding of structure but which are philosophically-possibly-extant structures beyond computation, and whether true reality can contain “actual infinities” is a classic debate.
So sims are small potatoes, IMO. Annoying simulators that want to actively mess up our understandings are clearly possible but seem not particularly likely by models I believe right now; seems to me they’d rather just make minds within their own universe; sims are for pretending to be another timeline or universe to a mind you want to instantiate, whatever your reason for that pretense. If we can grab onto possible worlds well enough, and they aren’t messing up our understanding on purpose, then we can reason about plausible base realities and find out we’re primarily in a sim by making universe sims ourselves and discovering the easiest way to find ourselves is if we first simulate some alien civ or other.
But if we can’t even in principle have a hypothesis space which relates meaningfully to what structures a universe could express, then phew, that’s pretty much game over for trying to guess at tegmark 4 and who might simulate us in it or what other base physics was possible or exists physically in some sense.
My giving up on incomprehensible worlds is not a reassuring move, just an unavoidable one. Similar to accepting that if you die in 3 seconds, you can’t do much about it. Hope you don’t, btw.
But yeah currently seems to me that the majority of sim juice comes from civs who want to get to know the neighbors before they meet, so they can prepare the appropriate welcome mat (tone: cynical). Let’s send an actualized preference for strong egalitarianism, yeah? (doesn’t currently look likely that we will, would be a lot of changes from here before that became likely.)
(Also, hopefully everything I said works for either structural realism or mathematical universe. Structural realism without mathematical universe would be an example of the way things could be wacky in ways permanently beyond the reach of logic, while still living in a universe where logic mostly works.)
if we’re currently in a simulation, physics and even logic could be entirely different from what they appear to be.
I have another obscure shortform about this! Physical vs metaphysical contingency, about what it would mean for metaphysics (e.g. logic) itself to have been different. (In the case of simulations, it could only be different in a way still capable of containing our metaphysics as a special case, like how in math a more expressive formal system can contain a less expressive one, but not the reverse)
I agree a metaphysically different base world is possible, but I’m not sure how to reason about it. (I think apparent metaphysical paradoxes are some evidence for it, though we might also just be temporarily confused about metaphysics)
Just physics being different is easier to imagine. For example, it could be that the base world is small, and it contains exactly one alien civilization running a simulation in which we appear to observe a large world. But if the base world is small, arguments for simulations which rely on the vastness of the world, like Bostrom’s, would no longer hold. And at that point there doesn’t seem much reason to expect it, at least for any individual small world.[1] Though it could also be that the base world is large and physically different, and we’re in a simulation where we appear to observe a different large world.
Ultimately, while it could be true that there are 0 unsimulated copies of us, still we can have the best impact in the possibilities where there is at least one.[2]
By the way, I’m also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it’s a good idea overall, and some subtle modifications to the idea would probably make it logically sound. I won’t bother you about those small issues here, though
I’m interested in what they are, I wouldn’t be bothered (if you meant that literally). If you want you can reply about it here or on the original thread.
If we’re instead reasoning over the space of all possible mathematical worlds which are ‘small’ compared to what our observations look like they suggest, then we’d be reasoning about very many individual small worlds (which basically reintroduces the ‘there are very many contexts which could choose to simulate us’ premise). Some of those small math-worlds will probably run simulations (for example, if some have beings which want to manipulate “the most probable environment” of an AI in a larger mathematical world, to influence that larger math-world)
In other words: “Conditional on {some singular ‘real world’ that is somehow special compared to merely mathematical worlds} being small, it probably doesn’t contain simulations. But there are certainly many math-worlds that do, because the space of math-worlds is so vast (to the point that some small math-worlds would randomly contain a simulation as part of their starting condition)”
And there’s probably not anything we can do to change our situation in case of possibilities where we don’t exist in base reality. Although I do think ‘look for bugs’ is something an aligned ASI would want to try, especially when considering that our physics apparently has some simple governing laws, i.e. may have a pretty short program length[3], and it’s plausible for a process we’d describe with a short program length to naturally / randomly occur as a process of physical interaction in a much larger base world—that is to say, there are plausible origins of a simulation which don’t involve a superintelligent programmer ensuring there are no edge cases.
[3] (but no longer short when considering its very complex starting state? I guess it could turn out that that itself is predicted by some simple rule)