The linked argument doesn’t require blue-tentacle-like psi phenomena. See the three bullet points that apply when there’s no superintelligent influence. The planetarium hypothesis is completely disjunctive with psi arguments, and it explains the Fermi paradox even in the absence of psi. It’s also not just my hypothesis; there’s historical precedent, as linked in the post. ETA: I hope that the second, Fermi-centric half of the linked post can be judged on its own terms and inspire debate about its arguments, regardless of the various theological or paranormal claims that might exist elsewhere on the blog.
[My primary interpretation of the downvotes for this comment is basically: “I want to discourage people from talking about psi, parapsychology, or anything like that—we all know that magic doesn’t exist, so we should try to explain phenomena that actually exist and that are therefore actually interesting. Admittedly you (Will_Newsome) didn’t spontaneously bring up psi in your comment, and your comment is a more-or-less reasonable reply to its parent, but downvoting this comment is the easiest way to punish you for associating LessWrong with blatantly irrational speculation.”]
I’m a tad annoyed that it apparently breaks my space bar—arrow keys and pgup/pgdwn work, but space does nothing.
Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation besides philosophy of mind (most theories of which, I believe, would not predict any output difference in the absence of real qualia in a simulation) or efficiency (which, to the extent we can analyze it at all, weighs strongly in favor of a simulation).
I also don’t understand how such an entity would even build a planetarium in the first place. Wouldn’t any physical shell badly interfere with predictions of planetary or cometary orbits? Or cause parallax? Etc. What would the timing be, and are there really no natural records that would throw off a planetarium constructed just in time for humans to be fooled (akin to testing the constancy of the fine-structure constant by looking at natural nuclear reactors from millions or billions of years ago)?
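To put a rough number on the parallax worry: even a shell out at inner-Oort-cloud distances would show an annual parallax enormously larger than what astrometry already measures. A back-of-the-envelope sketch (the 50,000 AU shell distance is an illustrative assumption, not anything from the post):

```python
# Annual parallax of a hypothetical planetarium shell, as seen across
# Earth's 1 AU orbital baseline.
AU_PER_PARSEC = 206264.806  # one parsec is ~206,265 AU by definition

def parallax_arcsec(distance_au):
    """Parallax angle in arcseconds for a feature at the given distance in AU."""
    return AU_PER_PARSEC / distance_au

shell = parallax_arcsec(50_000)  # shell at ~50,000 AU (inner Oort cloud)
print(f"shell parallax: {shell:.2f} arcsec")  # ~4.13 arcsec
```

For comparison, stellar parallaxes are routinely measured at the milliarcsecond level, so a shell this close would be given away by parallax alone unless it actively compensated for the observer's motion.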
Can you expand on this? This isn’t obvious to me.
Existing matter seems highly redundant, and building a full-scale 1:1 replica, as it were, means that by definition you cannot opt for any amount of approximation or possible optimization.
I would draw an analogy to NP problems: yes, the best way to solve the pathologically hardest instances of any NP problem is brute force, just as there are probably arrangements of matter which cannot be calculated more efficiently by computronium than by the actual arrangement of matter. But nevertheless, SAT solvers run remarkably fast on many real-world problems, far faster than anyone focused on the general asymptotic behavior would expect, and we have no reason to believe the world itself is a pathological instance of worlds.
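As a concrete illustration of the SAT point: even a bare-bones DPLL procedure, which is exponential on pathological instances, dispatches structured instances essentially instantly via unit propagation. A minimal sketch (toy code, not a claim about industrial solvers):

```python
def simplify(clauses, lit):
    """Assume literal `lit` is true: drop satisfied clauses, prune falsified literals.
    Returns None on a conflict (an empty clause)."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                       # clause already satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                    # conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    """Minimal DPLL SAT solver. Clauses are lists of nonzero ints
    (negative = negated variable). Returns a satisfying set of literals, or None."""
    if clauses is None:
        return None
    if not clauses:
        return set(assignment)             # every clause satisfied
    for clause in clauses:                 # unit propagation
        if len(clause) == 1:
            return dpll(simplify(clauses, clause[0]), assignment + (clause[0],))
    lit = clauses[0][0]                    # branch on a literal
    return (dpll(simplify(clauses, lit), assignment + (lit,))
            or dpll(simplify(clauses, -lit), assignment + (-lit,)))

# A structured instance: x1 ∧ (x1→x2) ∧ … ∧ (x99→x100).
# Unit propagation settles all 100 variables with zero branching.
chain = [[1]] + [[-i, i + 1] for i in range(1, 100)]
print(dpll(chain) == set(range(1, 101)))  # True: all variables forced true
```

Real SAT solvers add clause learning, heuristics, and restarts on top of this skeleton, which is why they routinely chew through real-world instances with millions of variables.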
One possible objection: what if humans are doing hypercomputation? E.g., being created by evolution (which is fundamentally “tied into” reality’s computation) lets humans tap into the latent computation of the universe in a way that an algorithmic AI can’t emulate, so it keeps humans around to use as hypercomputers. Various people have proposed similar hypotheses. I think this objection can be met, though.
The usual anti-Penrose point comes to mind: if quantum microtubules are really that useful, we can probably just build them into chips, and better, and the problem goes away.
Unless you mean the “tying into” somehow requires a prefrontal cortex, at least one kidney, a working gallbladder, etc., in which case I think that’s just sheer privileging of the hypothesis, with not a scrap of evidence for it.
The former, not the latter. And yes, the anti-Penrose point applies, but we can skirt it by postulating that the superintelligence is limited in its decision theory—it can recognize good results when it sees them, much as TDT can recognize that UDT beats it at counterfactual mugging, but it’s architecturally constrained not to self-modify into the winning thing. So humans might run native hypercomputation or a native super-awesome decision theory that an AI could exploit but that the AI would know it couldn’t emulate, given its knowledge of its own limited architecture.
I guess you’re distantly alluding to the old discussion of ‘what would AIXI do if it ran into a hypercomputing oracle?’ in modern guise. I’m afraid I know too little about TDT or UDT to appreciate the point. It just seems a little far-fetched—so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP, we’re also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?
If we were being mined for our computational potential, I can’t help but feel human lives ought to be less repetitive than they are.
I believe is generally regarded as being orders of magnitude less likely than say P=NP

Haven’t seen any surveys, but I don’t think so. I think hypercomputation is considered by some important people to be more likely than P=NP. I believe very few people have really considered it, so you shouldn’t take anyone’s off-the-cuff impressions as meaning very much unless you know they’ve thought a lot about the limitations of theoretical computer science. I don’t really have any ax to grind on the matter, but I think hypercomputation is neglected.
we’re also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?

I think my points were supposed to be disjunctive, not conjunctive. A broken decision theory or a limited theory of computation could each result in humans outcompeting superintelligences on certain very specific decision problems or (pseudo-)computations. Wei Dai’s “Metaphilosophical Mysteries” is relevant.
If we were being mined for our computational potential, I can’t help but feel human lives ought to be less repetitive than they are.

Given some models, yes. Given other models, the AI might not be able to locate which parts of the system have the special sauce and which parts don’t, so it’s more likely to let humans be.
The author of your link isn’t a stupid person, but to some extent the lack of interest in hypercomputation says what the field thinks of it. Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.
Wei Dai’s link is pretty controversial.
Not sure, but it seems that whenever I get into discussions with you it’s usually about some potentially-important edge case or something. Strange.
But anyway, yeah. I just want to flag hypercomputation as a speculative thing that it might be worth taking an interest in, much like mirror matter. One or two of my default models are probably very similar to yours when it comes down to betting odds.
Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

But only after it was discovered that the theory of quantum mechanics implied it was theoretically possible.
Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

My understanding of the history is that everyone believed the extended Church-Turing thesis until someone noticed that the (already established) theory of quantum mechanics contradicted it.
I don’t think I’ve ever seen anyone invoke the extended Church-Turing thesis by either name or substance before quantum computing came around.
People were talking about P-time before quantum computing and implicitly assuming that it applied to any computer they could build.
I don’t see how one would apply “P-time” to “any computer they could build”.
I meant “apply” in the sense that one applies a mathematical model to a phenomenon. Specifically, it was implicitly assumed that the notion of polynomial time captured what was actually possible to compute in polynomial time.
It just seems a little far-fetched—so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP

Um, you do realize you’re comparing apples and oranges there, since one is a statement about physics and the other a statement about mathematics.
In this area, I do not think there is such a hard and fast distinction.
So, how would you phrase the existence of hypercomputation as a mathematical statement?
Presumably something involving recursively enumerable functions...
As someone who understands computational theory, I strongly suspect you’re seriously confused about how computational complexity theory works. As I don’t have the time or interest to give a course in computational complexity, might I recommend asking the original question on MathOverflow if you are interested?
Apologies if that came off as rude.
I don’t find this argument persuasive or even strong. n qubits can’t simulate n+1 qubits in general. In fact, n qubits can’t in general even simulate n+1 bits. This suggests that if our understanding of the laws of physics is close to correct for both our universe and the larger universe (whether holographic planetarium or simulationist), simulation should be tough.
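To make the asymmetry concrete: describing a general n-qubit pure state classically takes 2^n complex amplitudes, so each added qubit doubles the classical description. A quick dimension-counting sketch (the byte figure assumes two 8-byte floats per amplitude):

```python
def amplitudes(n_qubits):
    """Number of complex amplitudes in a general n-qubit pure state."""
    return 2 ** n_qubits

def naive_state_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full state vector, at two 8-byte floats per amplitude."""
    return amplitudes(n_qubits) * bytes_per_amplitude

# One more qubit doubles the classical description:
assert amplitudes(51) == 2 * amplitudes(50)
# Fifty qubits already demand ~18 petabytes stored naively:
print(naive_state_bytes(50) / 1e15)  # ≈ 18.01 petabytes
```

This is only the brute-force cost, of course; the whole question in this thread is how much structure lets a simulator avoid paying it.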
That may be, but such a general point would be about arbitrary qubits or bits, when a simulation doesn’t have to work over all or even most arrangements.
Hmm, so thinking about this more, I think that Holevo’s theorem can probably be interpreted in a way that much more substantially restricts what one would need to know about the other n bits in order to simulate them, especially since one is apparently simulating not just bits but qubits. But I don’t really have a good understanding of this sort of thing at all. Maybe someone who knows more can comment?
Another issue which backs up simulation being easier: if one cares primarily about life forms, then one doesn’t need a detailed simulation of the insides of planets and stars. The exact quantum state of every iron atom in the core of the planet, for example, shouldn’t matter that much. So if one is mainly simulating the surface of a single planet in full detail, or even just the surfaces of a bunch of planets, that’s a lot less computation.
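The savings from skipping planetary interiors are easy to bound with crude geometry: a thin surface shell is a tiny fraction of a sphere’s volume. A rough sketch (the 10 km figure for crust, oceans, and biosphere is an illustrative assumption):

```python
R_EARTH_KM = 6371.0  # mean Earth radius

def shell_fraction(thickness_km, radius_km=R_EARTH_KM):
    """Fraction of a sphere's volume lying within `thickness_km` of its surface."""
    inner = radius_km - thickness_km
    return 1.0 - (inner / radius_km) ** 3

frac = shell_fraction(10.0)  # simulate only the top ~10 km in full detail
print(f"{frac:.3%} of Earth's volume")  # ≈ 0.470%
```

So even before approximating anything on the surface itself, restricting full detail to the outer shell cuts the simulated volume by a factor of a few hundred per planet.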
One other issue is that I’m not sure you can have simulations run that much faster than your own physical reality (again assuming that the simulated universe uses the same basic physics as the underlying universe). See for example this paper, which shows that most classical algorithms don’t get major speedup from a quantum computer beyond a constant factor. That constant factor could be big, but this is a pretty strong result even before one talks about general quantum algorithms. Of course, if the external world didn’t work quite the same way (say, different constants for things like the speed of light), this might not be much of an issue at all.
Hmm, that’s a good point. So it would then come down to how much of an expectation of what the simulation is likely to do one needs in order to get away with using fewer qubits. I don’t have a good intuition for that, but the fact that BQP is likely to be fairly small compared to all of PSPACE suggests to me that one can’t really get that much out of it. But that’s a weak argument. Your remark makes me update in favor of simulationism being more plausible.
I’m a tad annoyed that it apparently breaks my space bar—arrow keys and pgup/pgdwn work, but space does nothing.

Google’s fault. Thanks for letting me know, though.
Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation

Right—the argument is pretty modest. It’s mostly just that the planetarium hypothesis is on par with other hypotheses like the simulation argument.
I also don’t understand how such an entity would even build a planetarium in the first place.

Yeah, I left this to “a wizard did it”—if you accept simulation, then you can mix and match bigger and smaller planetariums, around your brain or around the solar system, to pose various physical problems. The planetarium hypothesis is sort of continuous with the simulation hypothesis if you like simulationistic assumptions. [ETA: And I didn’t address any of those problems at any scale, because there’s a problem for each scale. Factor your intuitions about the improbability of actually engineering a planetarium into your a posteriori estimate, to get a custom-fit probability.]