To summarize my reasons for downvoting, after first reading the entire contents of the linked blog:
There are standard scenarios in which our world is a hoax, e.g. a computer simulation or stage-managed by aliens. These are plausible enough to be non-negligible in their most general form, although claims of weird specific hoaxes are unlikely. Given some weird observation, like waking up with a blue tentacle, a claim of a weird specific hoax is the most likely non-delusory explanation.
Because of the schizophrenia you have previously mentioned here, you make a lot of weird observations, and have trouble interpreting mundane coincidences as mundane. You also picked up a lot of ideas from the Less Wrong community. So you reach out to the hoax hypotheses to justify your delusions and hallucinations, and go on to encrust them with theological language. This is both a common tendency in paranoid schizophrenics, and a way to assert opposition to and claim superiority to Less Wrong, per your usual self-admitted trolling.
This approach seems unlikely to lead to fruitful or pleasant reading. And empirically, the ratio of nonsense, “raving crank style,” and insanity to interesting ideas (all available elsewhere) is far too high. The situation is sad, but I want to see less of this, including posts linking to it, so I downvoted.
Perhaps I should also note that I disagree with your analysis on various points.
Because of the schizophrenia you have previously mentioned here, you make a lot of weird observations, and have trouble interpreting mundane coincidences as mundane.
I’m schizotypal I suppose, but not schizophrenic given the standard definition. I don’t think I have any trouble interpreting mundane coincidences as mundane.
You also picked up a lot of ideas from the Less Wrong community.
Not especially so, actually.
So you reach out to the hoax hypotheses to justify your delusions and hallucinations
No, I honestly prefer something like Thomism to tricky hoaxes.
go on to encrust them with theological language
At Computational Theology I haven’t even really gotten into theology yet, and I certainly haven’t claimed that any supposed paranormal influences are or aren’t related to God.
This is both a common tendency in paranoid schizophrenics
I’m not sure what “this” is that you’re referring to. Theological language? I don’t think schizophrenics commonly try to “justify” their delusions by couching them in terms of theological language. What would the point be? I don’t get it. Note that talking about the abstract nature of God and so on is completely unrelated to common schizophrenic symptoms like thinking one is God or that one is somehow an ontologically privileged person.
a way to assert opposition to and claim superiority to Less Wrong
No, I don’t represent LessWrong as a thing in that way. Some on LessWrong are very interesting, some aren’t. I try to only talk to the interesting folk, even if they have serious disagreements with me. I certainly don’t think I’m “superior” to sundry people who participate on LessWrong.
per your usual self-admitted trolling.
I rarely troll—few of my LessWrong comments are downvoted. Is trolling relevant to the post? I don’t think the writing style and content of the post smacks of superiority, and I don’t think it’s trolling. It seems to me to be an argument made in good faith in the hopes of calling attention to a hypothesis that is rightly or wrongly seen as neglected.
This approach seems unlikely to lead to fruitful or pleasant reading.
Which approach? I don’t think I’m trolling, or condescending. Regarding pleasantness, is there something else wrong with my writing style? Regarding fruitfulness, is it that you’re not interested in the things I discuss for whatever reason, or, more likely, is it that I generally don’t come up with ideas that catalyze further fruit-bearing insights for you? If the latter, I agree this is a problem, which is why I’ve created Computational Theology to have some place to plant seeds in the process of conceptual gardening. Hopefully having my own blog will allow me to share various interesting and significant ideas that I’ve had for a long time but that I’ve never had a chance to share on LessWrong. Hanging out at SingInst for a few years led me to have a lot of cool thoughts that ideally should be shared with the greater LessWrong community.
And empirically, the ratio of nonsense, “raving crank style,” and insanity to interesting ideas (all available elsewhere) is far too high.
What are you referring to? Few of my comments here are downvoted, and many are heavily upvoted. Also, I’ve put forth many original ideas that have been upvoted by the LessWrong community. Presumably those comments would not be “available elsewhere”.
The situation is sad, but I want to see less of this, including posts linking to it, so I downvoted.

Fair enough!
I rarely troll—few of my LessWrong comments are downvoted.
(Empirical data: According to a karma histogram program someone posted some months ago (I saved a copy locally, but regrettably have forgotten the author’s identity), 294 of your 2190 most recent comments (about 13.4%) have negative karma as of around 1735 PDT today.)
[Edited to add: However, as Will points out in the child, it might be misleading to simply count downvoted comments, because it is believed that some users mass-downvote the comments of certain others rather than judging each comment individually; only 80 out of the 2190 comments under consideration (about 3.7%) were voted to −4 or below.]

Thanks!
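For concreteness, a minimal sketch of the tally behind these percentages, assuming the comment scores have already been fetched into a list (the scraping step is site-specific and omitted here):

```python
# Minimal sketch: summarize a list of comment karma scores, assuming
# `scores` was already collected (e.g. by the histogram program above).
def karma_summary(scores, n=2190):
    recent = scores[:n]
    total = len(recent)
    negative = sum(1 for s in recent if s < 0)
    at_most_minus4 = sum(1 for s in recent if s <= -4)
    at_least_plus4 = sum(1 for s in recent if s >= 4)
    return {
        "negative": f"{negative}/{total} ({negative / total:.1%})",
        "<= -4": f"{at_most_minus4}/{total} ({at_most_minus4 / total:.1%})",
        ">= +4": f"{at_least_plus4}/{total} ({at_least_plus4 / total:.1%})",
    }

# With the figures quoted in this thread: 294/2190 -> 13.4%,
# 80/2190 -> 3.7%, and (below) 19.2% at +4 or more.
```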
Note that much of that is likely due to karmassassination, not legitimate downvoting.
Disagree. I approve of the downvoting of most of your comments that were downvoted to −2 or below, for reasons triggered by those particular comments. This makes it plausible that they were downvoted for similar reasons, rather than in a way insensitive to the qualities of individual comments.
Right, but I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No?
Zack’s statistic of −4 or below is the most pertinent. It’s at 3.7%.
People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%.
So there’s clearly a very large asymmetry. What one makes of it depends on a lot of other background stuff.
I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No?
Not necessarily. Taboo “karmassassination”: what were you actually observing? One scenario is that some comments you make draw attention and people look over your N most recent posts and judge them individually, but it turns out that the judgment is mostly negative. Another is that people who want to discourage a certain type of comment downvote multiple already-downvoted posts without paying too much attention, expecting that the downvotes already present carry sufficient evidence in the context. Both cases result in surges of negative votes which remain sensitive to qualities of individual comments.
People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%.
You’re drifting from the topic; I’m not discussing a net perception of your participation, only explanations for the negatively judged contributions. Your writing them off as not-particularly-meaningful (an effect of “karmassassination” rather than of the comments’ negative qualities) seems like a rationalization, given the observations above.
Like, I’m not trying to avoid the knowledge that I often make contributions to LessWrong that aren’t well-received. It happens, more for me than for others. I was just pointing out that I’ve also noticed strict karmassassination sometimes, not necessarily often in my 2190 most recent comments. It’s just a thing to take into account. The karmassassination I have experience with is often not of the sort that you describe. But I’m perfectly willing to accept such explanations sometimes, and I’ve already noticed that they explain a few big chunks lost a few months back.
I don’t write all of them off as meaningless, of course! Didn’t mean to imply that. Some comments just aren’t positive contributions to LessWrong. It happens, and it happens to me more than to others. I’m not denying that at all.

Oh, that’s a good point—I’ve added an addendum to the grandparent.
I have a request, which you’re not at all obligated to fulfill of course. But could you tell me what percentage of my 2190 most recent comments have received 4 or more upvotes?

19.2%
(And I am sorry if it was rude of me to have initiated this exchange at all, but surely it will be understood that this is the type of venue where if someone uses a word like “most” or “few” and one happens to have the actual data easily available, then one should be encouraged to share it.)

Not at all! I very much appreciate the data. Thank you for sharing.
The linked argument doesn’t require blue-tentacle-like psi phenomena. See the three bullet points that apply when there’s no superintelligent influence. The planetarium hypothesis is completely disjunctive with psi arguments, and explains the Fermi paradox even in the absence of psi. It’s also not just my hypothesis—there’s historical precedent, as has been linked to in the post. ETA: I hope that the second, Fermi-centric half of the linked post can be judged on its own terms and inspire debate about its arguments, regardless of the various theological or paranormal claims that might exist elsewhere on the blog.
[My primary interpretation of the downvotes for this comment is basically: “I want to discourage people from talking about psi, parapsychology, or anything like that—we all know that magic doesn’t exist, so we should try to explain phenomena that actually exist and that are therefore actually interesting. Admittedly you (Will_Newsome) didn’t spontaneously bring up psi in your comment, and your comment is a more-or-less reasonable reply to its parent, but downvoting this comment is the easiest way to punish you for associating LessWrong with blatantly irrational speculation.”]
I’m a tad annoyed that it apparently breaks my space bar—arrow keys and pgup/pgdwn work, but space does nothing.
Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation besides philosophy of mind (most theories of which, I believe, would not predict any output difference in the absence of real qualia in a simulation) or efficiency (which, to the extent we can analyze it at all, weighs in strongly for simulation being more efficient).
I also don’t understand how such an entity would even build a planetarium in the first place. Wouldn’t any physical shell badly interfere with predictions of planetary or cometary orbits? Or cause parallax? etc. What would the timing be, and are there really no natural records that would throw off a planetarium constructed just in time for humans to be fooled (akin to testing the fine structure constant by looking at natural nuclear reactors from millions/billions of years ago)?

Can you expand on this? This isn’t obvious to me.
Existing matter seems highly redundant, and building a full-scale 1:1 replica, as it were, means that by definition you cannot opt for any amount of approximation or possible optimization.
I would draw an analogy to NP problems: yes, the best way to solve the pathologically hardest instances of any NP problem is brute force, just as there are probably arrangements of matter which cannot be calculated more efficiently by computronium than by the actual arrangement of matter. But nevertheless, SAT solvers run remarkably fast on many real-world problems, far faster than anyone focused on the general asymptotic behavior would expect, and we have no reason to believe the world itself is a pathological instance of worlds.
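A toy illustration of that asymmetry (my own example with assumed numbers, not anything from the thread): unit propagation, the simplest inference rule inside SAT solvers, dispatches a 1000-variable implication chain instantly, even though naive enumeration would face 2^1000 candidate assignments.

```python
# Toy example (assumed, for illustration): structured SAT instances often
# collapse under unit propagation alone, with no exponential search.
def unit_propagate(clauses, assignment):
    """Repeatedly satisfy unit clauses; literals are nonzero ints,
    negation is arithmetic negation. Returns None on a conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            live = [lit for lit in clause if -lit not in assignment]
            if any(lit in assignment for lit in live):
                continue          # clause already satisfied
            if not live:
                return None       # clause falsified: conflict
            if len(live) == 1:
                assignment.add(live[0])   # forced literal
                changed = True
    return assignment

# x1, and x_i -> x_{i+1} for i = 1..999: brute force would enumerate
# up to 2**1000 assignments; unit propagation forces everything directly.
n = 1000
chain = [[1]] + [[-i, i + 1] for i in range(1, n)]
result = unit_propagate(chain, set())
print(len(result), "variables forced")  # 1000 variables forced
```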
One possible objection: what if humans are doing hypercomputation? E.g., being created by evolution (which is fundamentally “tied into” reality’s computation) lets humans tap into the latent computation of the universe in a way that an algorithmic AI can’t emulate, so it keeps humans around to use as hypercomputers. Various people have proposed similar hypotheses. I think this objection can be met, though.
The usual anti-Penrose point comes to mind: if quantum microtubules are really that useful, we can probably just build them into chips, and better, and the problem goes away.
Unless you mean the “tying into” somehow requires a prefrontal cortex, at least 1 kidney, a working gallbladder, etc., in which case I think that’s just sheer privileging of the hypothesis with not a scrap of evidence for it.
The former, not the latter. And yes, the anti-Penrose point applies, but we can skirt it by postulating that the superintelligence is limited in its decision theory—it can recognize good results when it sees them, much as TDT can recognize that UDT beats it at counterfactual mugging, but it’s architecturally constrained not to self-modify into the winning thing. So humans might run native hypercomputation or a native super-awesome decision theory that an AI could exploit but that the AI would know it couldn’t emulate, given its knowledge of its own limited architecture.
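For readers without the decision-theory background, counterfactual mugging is the standard example: Omega flips a fair coin; on tails it asks you for $100, on heads it pays $10,000 only if it predicts you would have paid on tails. With the commonly quoted stakes (assumed numbers, not from the thread), the arithmetic a TDT-like agent can verify but not act on looks like this:

```python
# Expected value of each policy in counterfactual mugging, using the
# commonly quoted stakes (assumed numbers, not from the thread).
p_heads = 0.5
reward, cost = 10_000, 100

ev_pay = p_heads * reward + (1 - p_heads) * (-cost)  # pay on tails
ev_refuse = 0.0                                      # never pay, never rewarded

print(f"paying policy:   {ev_pay:+.0f}")    # +4950 per encounter
print(f"refusing policy: {ev_refuse:+.0f}") # +0
```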
I guess you’re distantly alluding to the old discussion of ‘what would AIXI do if it ran into a hypercomputing oracle?’ in modern guise. I’m afraid I know too little about TDT or UDT to appreciate the point. It just seems a little far-fetched—so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP, we’re also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?
If we were being mined for our computational potential, I can’t help but feel human lives ought to be less repetitive than they are.
I believe is generally regarded as being orders of magnitude less likely than say P=NP
Haven’t seen any surveys, but I don’t think so. I think hypercomputation is considered by some important people to be more likely than P=NP. I believe very few people have really considered it, so you shouldn’t take anyone’s off-the-cuff impressions as meaning very much unless you know they’ve thought a lot about the limitations of theoretical computer science. I don’t really have any ax to grind on the matter, but I think hypercomputation is neglected.
we’re also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?
I think my points were supposed to be disjunctive, not conjunctive. A broken decision theory and a limited theory of computation can each result in humans outcompeting superintelligences on certain very specific decision problems or (pseudo-)computations. Wei Dai’s “Metaphilosophical Mysteries” is relevant.
If we were being mined for our computational potential, I can’t help but feel human lives ought to be less repetitive than they are.
Given some models, yes. Given other models, the AI might not be able to locate what parts of the system have the special sauce and what parts don’t, so it’s more likely to let humans be.
The person you link to isn’t stupid, but to some extent the lack of interest in hypercomputation says what the field thinks of it. Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

Wei Dai’s link is pretty controversial.
Not sure, but it seems that whenever I get into discussions with you it’s usually about some potentially-important edge case or something. Strange.
But anyway, yeah. I just want to flag hypercomputation as a speculative thing that it might be worth taking an interest in, much like mirror matter. One or two of my default models are probably very similar to yours when it comes down to betting odds.
Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.
But only after it was discovered that the theory of quantum mechanics implied it was theoretically possible.
Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.
My understanding of the history is that everyone believed the extended Church-Turing thesis until someone noticed that the (already established) theory of quantum mechanics contradicted it.

I don’t think I’ve ever seen anyone invoke the extended Church-Turing thesis by either name or substance before quantum computing came around.

People were talking about P-time before quantum computing and implicitly assuming that it applied to any computer they could build.

I don’t see how one would apply “P-time” to “any computer they could build”.
I meant “apply” in the sense that one applies a mathematical model to a phenomenon. Specifically, it was implicitly assumed that the notion of polynomial time captured what was actually possible to compute in polynomial time.
It just seems a little far-fetched—so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP
Um, you do realize you’re comparing apples and oranges there, since one is a statement about physics and the other a statement about mathematics.

In this area, I do not think there is such a hard and fast distinction.

So, how would you phrase the existence of hypercomputation as a mathematical statement?

Presumably something involving recursively enumerable functions...
As someone who understands computational theory, I strongly suspect you’re seriously confused about how computational complexity theory works. As I don’t have the time or interest to give a course in computational complexity, might I recommend asking the original question on MathOverflow if you are interested.

Apologies if that came off as rude.
I don’t find this argument persuasive or even strong. n qubits can’t simulate n+1 qubits in general. In fact, n qubits can’t even in general simulate n+1 bits. This suggests that if our understanding of the laws of physics is close to correct for our universe and the larger universe (whether holographic planetarium or simulationist), simulation should be tough.
That may be, but such a general point would be about arbitrary qubits or bits, when a simulation doesn’t have to work over all or even most arrangements.
Hmm, so thinking about this more, I think that Holevo’s theorem can probably be interpreted in a way that much more substantially restricts what one would need to know about the other n bits in order to simulate them, especially since one is apparently simulating not just bits but qubits. But I don’t really have a good understanding of this sort of thing at all. Maybe someone who knows more can comment?
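For reference, the textbook form of Holevo’s bound behind the “n qubits carry at most n classical bits” intuition (a standard statement, not quoted from the thread):

```latex
% Holevo's bound: for an ensemble {p_i, rho_i} of states in a
% d-dimensional Hilbert space, any measurement yields mutual information
\[
  I(X{:}Y) \;\le\; \chi \;=\; S\!\Bigl(\sum_i p_i \rho_i\Bigr)
            - \sum_i p_i \, S(\rho_i) \;\le\; \log_2 d ,
\]
% so n qubits (d = 2^n) can convey at most n classical bits.
```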
Another issue that backs up simulation being easier: if one cares primarily about life forms, one doesn’t need a detailed simulation of the inside of planets and stars. The exact quantum state of every iron atom in the core of the planet, for example, shouldn’t matter that much. So if one is mainly simulating the surface of a single planet in full detail, or even just the surfaces of a bunch of planets, that’s a lot less computation.
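A quick back-of-the-envelope check of how much that buys (my own illustrative figures): simulating only a thin surface shell of an Earth-sized planet in full detail covers well under one percent of its volume.

```python
# Fraction of an Earth-sized planet's volume in a thin surface shell
# (assumed figures: 6371 km mean radius, 10 km "interesting" depth).
R = 6371.0   # planet radius, km
t = 10.0     # shell thickness, km

shell_fraction = 1.0 - ((R - t) / R) ** 3
print(f"full-detail shell: {shell_fraction:.2%} of total volume")  # ~0.47%
```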
One other issue is that I’m not sure you can have simulations run that much faster than your own physical reality (again assuming that the simulated universe uses the same basic physics as the underlying universe). See, for example, this paper, which shows that most classical algorithms don’t get major speedup from a quantum computer beyond a constant factor. That constant factor could be big, but this is a pretty strong result even before one is talking about general quantum algorithms. Of course, if the external world didn’t quite work the same way (say, different constants for things like the speed of light), this might not be much of an issue at all.
Hmm, that’s a good point. So it would then come down to how strong an expectation of what the simulation is likely to do you need in order to get away with using fewer qubits. I don’t have a good intuition for that, but the fact that BQP is likely to be fairly small compared to all of PSPACE suggests to me that one can’t really get that much out of it. But that’s a weak argument. Your remark makes me update in favor of simulationism being more plausible.
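For reference, the known containments behind that intuition (the inclusions are proven; their strictness is conjectural):

```latex
\[
  \mathsf{P} \;\subseteq\; \mathsf{BPP} \;\subseteq\; \mathsf{BQP}
  \;\subseteq\; \mathsf{PSPACE} .
\]
```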
I’m a tad annoyed that it apparently breaks my space bar—arrow keys and pgup/pgdwn work, but space does nothing.
Google’s fault. Thanks for letting me know, though.
Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation
Right—the argument is pretty modest. It’s mostly just that the planetarium hypothesis is on par with other hypotheses like the simulation argument.
I also don’t understand how such an entity would even build a planetarium in the first place.
Yeah, I left this to “a wizard did it”—if you accept simulation, then you can mix and match bigger and smaller planetariums around your brain or around the solar system to pose various physical problems. The planetarium hypothesis is sort of continuous with the simulation hypothesis if you like simulationistic assumptions. [ETA: And I didn’t address any of those problems at any scale, because there’s a problem for each scale. Factor your intuitions about the improbability of actually engineering a planetarium into your a posteriori estimate, to get a custom-fit probability.]
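To put rough numbers on the parallax objection raised earlier (my own illustrative figures, not from the thread): features painted on a shell anywhere near the solar system would show annual parallax thousands of times larger than any real star’s, so the shell would have to be dynamically repainted to stay hidden.

```python
# Annual parallax from Earth's 1 AU orbital baseline: p ~ 1 / d[pc].
# Illustrative comparison of a hypothetical shell with a real star.
AU_PER_PARSEC = 206_264.8  # 1 parsec expressed in AU (definition)

def parallax_arcsec(distance_au):
    """Parallax in arcseconds for an object at the given distance."""
    return AU_PER_PARSEC / distance_au

shell = parallax_arcsec(100.0)        # hypothetical shell at 100 AU
proxima = parallax_arcsec(268_000.0)  # Proxima Centauri, ~1.3 pc

print(f"shell 'stars' at 100 AU: {shell:,.0f} arcsec")  # ~2,063 arcsec
print(f"Proxima Centauri:        {proxima:.2f} arcsec") # ~0.77 arcsec
```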