I feel much the same about this post as I did about Roko’s Final Post. It’s imaginative, it’s original, it has an internal logic that manages to range from metaphysics to cosmology; it’s good to have some crazy-bold big-picture thinking like this in the public domain; but it’s still wrong, wrong, wrong. It’s an artefact of its time rather than a glimpse of reality. The reason it’s nonetheless interesting is that it’s an attempt to grasp aspects of reality which are not yet understood in its time—and this is also why I can’t prove it to be “wrong” in a deductive way. Instead, I can only oppose my postulates to the author’s, and argue that mine make more sense.
First I want to give a historical example of human minds probing the implications of things new and unknown, which in a later time became familiar and known. The realization that the other planets were worlds like Earth, a realization we might date from Galileo forwards, opened the human imagination to the idea of other worlds in the sky. People began to ask themselves: what’s on those other worlds, is there life, what’s it like; what’s the big picture, the logic of the situation. In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too. Even before 20th-century science fiction, there was an obscure literature of speculation about the alien humanities living on the other planets, how their character might reflect their circumstance, and so forth. It may all seem strange, arbitrary, and even childish now, but it was a way of thinking which was natural to its time.
So, what is the aspect of reality, not yet understood in its time, which makes this article possible, in the same way that the knowledge that there were other worlds, nearby in the sky, made it possible to speculate about life on those worlds? There’s obviously a bit of metaphysics at work in this essay, regarding the relationship between simulation and reality, metaphysics which is very zeitgeisty and not yet understood, and it’s where I will focus my criticism subsequently.
But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale. I remember the shock of reading Stross’s Accelerando and realizing that the planet Mercury really could be dismantled and turned into a cloud of computational elements. The abstract idea of astronomical bodies being turned into giant computers had been known to me for twenty years, but it was still shocking to realize viscerally that it was already manifestly a material possibility, right here in the reality where I live.
Stross’s Mercury gets turned into a cloud of nanocomputers, and it might be argued that this is still vaporware, with many fundamental problems to be solved before it can confidently be said to be possible; but consider instead Mercury being turned into a quadrillion Athlon processors orbiting the sun. That would require a titanic industrial enterprise on the dark side of Mercury, and many engineering problems would have to be solved; but we do already know how to mine, how to fabricate chips, how to travel through space. This modified version of Stross’s scenario serves as my proof of concept for the idea of dismantling a planet and turning it into a computer (or a network of computers).
So, to repeat, the shocking discovery is the possibility of megascale (astronomical) engineering, with the construction of megascale computers and computer networks on a trans-solar scale being especially interesting and challenging. It appears to be materially possible for whole solar systems to be turned into computing devices, which could then communicate across interstellar distances and operate for geological periods of time. It’s the further idea that this is the destiny of the universe—the galaxies to be turned into giant Internets—which provides the canvas for cosmo-computational speculation such as we see above.
Various reactions to this possibility exist. Some people embrace it because they have experienced the freedom and power of computation in the present, and they think that a whole universe turned to organized computation implies so much freedom and power that it transcends any previous concept of utopia. Some people will reject it as insanity—they just can’t believe that anything like that could be possible. Some people will offer a more grudging, lukewarm rejection—sure it’s possible, but do you really think we should do that; do you really think a wise, superior alien race would want to eat the universe; in their wisdom, wouldn’t they know that growth isn’t so great—etc. I don’t believe the argument that technological civilizations will avoid doing this as a rule, out of a wise embrace of limits; but the idea of a universe transformed into “computronium”, and especially the idea that any sufficiently advanced civilization will obviously do this, has a manic uniformity about it which makes me suspicious. However, I cannot deny that the vision of robot fleets traveling the galaxy and making Dyson spheres does appear to be a material and technological possibility.
So much for the analysis of where we stand intellectually—what we know, what we don’t know, what we are now able to see as possible but do not know to be actual, likely, or necessary. What do I think of this particular vision of how all that computation will be used? I’m going to start with one of my competing postulates, which provides me with a major reason why I think Jacob’s reasoning is radically wrong. Unfortunately it’s a postulate which is not just at odds with his thinking, but with much of the thinking on this site; so be it. It simply is the postulate that simulation does not create things. Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes. Using Jacob’s terminology, the postulate is that consciousness is strictly a phenomenon occurring at the “base level of reality”. You could have a brain in a vat wired up to a simulation within a simulation, in which case it might be experiencing events at two removes from the physical base; but there won’t be any experience happening, there won’t be anyone home, unless you have the right sort of physical process happening. Abstract computation is not enough.
OK, that’s my main reason for dissenting from this argument, but that’s definitely a minority opinion here. However, I can offer a few other considerations which affect its plausibility. Jacob writes:
Imagine that we had an infinite or near-infinite Turing Machine.
But we don’t, nor does anyone living in a universe with physics like this one. There is a cosmological horizon which bounds the number of bits available, and a cosmological evolution which bounds the amount of time available. Just enumerating all programs of length n requires memory resources exponential in n; actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn’t even big enough to simulate all possible stars.
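To make those magnitudes concrete, here is a quick back-of-the-envelope sketch in Python. The ~10^120 figure is Seth Lloyd's estimate of the total operations the observable universe could have performed; treat it as an illustrative assumption I'm supplying, not a number from the post.

```python
# Compare the count of length-n binary programs against a rough
# budget of ~10^120 operations for the observable universe
# (Seth Lloyd's estimate; an assumption for illustration).
OPS_BUDGET = 10**120

for n in (100, 400, 1000):
    num_programs = 2**n  # binary programs of length exactly n
    print(f"n={n}: 2^n ~ {num_programs:.2e}; "
          f"exceeds budget: {num_programs > OPS_BUDGET}")
```

Even at n = 400 the mere count of programs outruns the budget, before a single one of them is executed.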
The implication seems to be that if our existence has been coughed up by an AIXI-like brute-force simulation occurring in a universe whose base-level physics is like the physics we see (let’s ignore for the moment my skepticism about functionalism), we can’t be living in a simulation of base-level physics—certainly not a base-level simulation of a whole universe. That is way too big a program to ever be encountered in a brute-force search of program space, occurring in a universe this small. We must be living in some sketchy, partial, approximate virtual reality, big enough to create these appearances and not much else.
If we suppose that the true base-level physics of the ultimate reality might be quite different to that in our simulated universe, then this counterargument doesn’t work—but in that case, we are no longer talking about “ancestor simulations”, we are just talking about brute-force calculations occurring in a possible universe of completely unknown physics. In fact, although Jacob proposes that a universe like ours, run forward, should produce simulations of itself, the argument here leads in the other direction: whatever the base-level physics of reality, it isn’t the physics of the standard model and the big-bang cosmology, because that universe isn’t big enough to generically produce such simulations.
I will repeat my contention that I don’t believe in functionalism/simulationism anyway, but even if one adopts that premise, there needs to be a lot more thought about the sorts of universes one thinks exist in the multiverse, and about the “demographics” of the computations occurring in them. This argument from AIXI would be neat if it worked, because AIXI’s optimality suggests it should be showing up everywhere that Vast computation occurs, and its universality suggests that the same pocket universes should be showing up wherever it runs on a Vast scale. But the conditions for Vast enough computation are not automatically realized, not even in a universe like the one that real-world physics postulates; so one would need to ask oneself, what sort of possible worlds do contain sufficiently Vast computation, how common in the multiverse are they, and how often will their Vast resources actually get used in a brute-force way.
I feel much the same about this post as I did about Roko’s Final Post.
So from searching around, it looks like Roko was cosmically censored or something on this site. I don’t know if that’s supposed to be a warning (if you keep up this train of thought, you too will be censored), or just an observation—but again I wasn’t here so I don’t know much of anything about Roko or his posts.
In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too.
We have sent robot probes to only a handful of locations in our solar system, a far cry from “most of the planets” unless you think the rest of the galaxy is a facade. (And yeah, I realize you probably meant the solar system, but still.) And the jury is still out on Mars—it may have had simple life in the past. We don’t have enough observational data yet. Also, there may be life on Europa or Titan. I’m not holding my breath, but it’s worth mentioning.
Beware the hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. If you seriously want to pit the principle of anthropomorphic uniqueness (that Earth is a rare, unique gem by every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong.
Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.
We used to think we were at the center of the galaxy; in fact we sit within the middle 95% interval. We used to think our system was unique in having planets; we now know that our system is typical in this sense (planets are typical). Our system is not especially old or young, and so on. By every measure we can currently take, based on the data we have now, our system is average.
So you can say that life arises to civilization in only one system in a trillion on average, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our solar system, we see life arising on 1 body out of a few dozen, with the possibility of that being 2 or 3 out of a few dozen (Mars, Europa, and Titan still carry some small probability).
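To illustrate the "random sample from some unknown distribution" reasoning above, here is a minimal sketch using Laplace's rule of succession. The estimator and the body count of ~36 are my choices for illustration, not figures from the comment.

```python
# Laplace's rule of succession: with s successes observed in n
# trials and a uniform prior on the unknown rate, the posterior
# mean estimate of the rate is (s + 1) / (n + 2).
def laplace_rule(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# ~36 decent-sized bodies in the solar system is an illustrative count.
print(f"{laplace_rule(1, 36):.3f}")  # ~0.053 with Earth as the only biosphere
print(f"{laplace_rule(3, 36):.3f}")  # ~0.105 if Mars and Europa panned out too
```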
But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale.
Actually, no: I do not find the cosmic-scale computer scenarios of Stross, Moravec et al. to be realistic. I find them about as realistic as our descendants dismantling the universe to build Babbage’s Difference Engines or giant steam clocks. But that analogy isn’t very telling.
If you look at what physics tells you about the fundamentals of computation, you can derive surprisingly powerful invariant predictions about future evolution from just a few simple principles (a rough numeric sketch follows the list):
maximum data storage capacity is proportional to mass
maximum computational throughput is proportional to energy. With quantum computing, this also scales (for probabilistic algorithms) exponentially with the mass: vaguely O(E·2^m), with E the energy and m the mass in qubits. This is, of course, insane, but apparently a fact of nature (if quantum computing actually works).
maximum efficiency (in multiple senses: algorithmic efficiency; intelligence, i.e. the ability to make effective use of data; transmission overhead) is inversely proportional to size (radius, volume). This is a direct consequence of the speed of light.
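Here is the rough numeric sketch promised above, plugging standard constants into the three bounds (Bekenstein for storage, Margolus-Levitin for throughput, light-speed latency for size). The 1 kg / 10 cm device is an arbitrary illustrative choice:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.998e8             # speed of light, m/s

mass, radius = 1.0, 0.1   # an arbitrary 1 kg, 10 cm device
energy = mass * c**2      # total rest energy, J

# 1. Storage: Bekenstein bound, S <= 2*pi*R*E / (hbar*c) nats;
#    at fixed radius this is proportional to mass.
bits = 2 * math.pi * radius * energy / (hbar * c * math.log(2))

# 2. Throughput: Margolus-Levitin limit, <= 2E / (pi*hbar) ops/s,
#    proportional to energy.
ops_per_sec = 2 * energy / (math.pi * hbar)

# 3. Latency: a signal needs at least 2R/c to cross and return,
#    so smaller devices run faster per "clock tick".
latency_s = 2 * radius / c

print(f"storage    <= {bits:.1e} bits")          # ~2.6e42
print(f"throughput <= {ops_per_sec:.1e} ops/s")  # ~5.4e50
print(f"round trip >= {latency_s:.1e} s")        # ~6.7e-10
```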
So armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely to ever reach planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn’t something one has a choice about: big is slow and dumb, small is fast and smart.
Very roughly, I expect that a full-blown runaway Singularity on Earth may end up capturing a big chunk of the available solar energy (although perhaps less than the biosphere captures, as fusion or more exotic potentials exist), but it would only ever need a small fraction of Earth’s mass: probably less than humans currently use. And from thermodynamics, we know maximum efficiency is reached operating in the range of Earth’s ambient temperature, and that would be something of a speed constraint.
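As a sketch of that thermodynamic speed constraint, Landauer's principle gives the minimum energy per irreversible bit operation at a given temperature. The solar-input figure below is a standard estimate I'm supplying, not a number from the comment:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # K, roughly Earth's ambient temperature

# Landauer limit: erasing one bit costs at least k_B * T * ln(2).
joules_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J

solar_input_W = 1.7e17                   # sunlight intercepted by Earth (approx.)
erasures_per_sec = solar_input_W / joules_per_bit

print(f"{joules_per_bit:.2e} J per bit erased at 300 K")
print(f"~{erasures_per_sec:.1e} irreversible bit ops/s on Earth's solar budget")
```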
It simply is the postulate that simulation does not create things.
Make no mistake, it certainly does, and this is just a matter of fact—unless one wants to argue definitions.
The computer you are using right now was created first in an approximate simulation in a mammalian cortex, which was later promoted to approximate simulations in computer models, until eventually it was simulated in a very detailed, near-molecular/quantum-level simulation, and then emulated (a perfect simulation) through numerous physical prototypes.
Literally everything around you was created through simulation in some form. You can’t create anything without simulation—thought itself is a form of simulation.
Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes.
If you are hard set against computationalism, it’s probably not worth my energy to get into it (I had assumed it as a given), but just to show my perspective a little:
Simulations of consciousness will create consciousness when we succeed in creating AGIs that are as intelligent as humans and are objectively indistinguishable from them. At the moment we don’t understand our own brain and its mechanisms of intelligence in enough detail to simulate them, and we don’t yet have enough computational power to discover those mechanisms through brute evolutionary search. But that will change pretty soon.
Keep in mind that your consciousness—the essence of your intelligence—is itself a simulation, nothing more, nothing less.
Just enumerating all programs of length n requires memory resources exponential in n;
Not at all. It requires space of only n plus whatever each program uses at runtime. You are thinking of time resources—those do scale exponentially with n. But no hyperintelligence will use pure AIXI—they will use universal hierarchical approximations (the mammalian cortex already does something like this) which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
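A minimal sketch of the space point: a generator that walks all length-n bitstrings holds only the current candidate (O(n) space); it is the 2^n iterations that cost exponential time.

```python
from itertools import product

def programs_of_length(n):
    """Yield each length-n bitstring in turn.

    Memory use is O(n): only the current candidate exists at once.
    Time is the real cost, since there are 2**n candidates.
    """
    for bits in product("01", repeat=n):
        yield "".join(bits)

# All 2**20 programs of length 20, streamed in constant memory:
print(sum(1 for _ in programs_of_length(20)))  # 1048576
```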
actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn’t even big enough to simulate all possible stars.
Perfect optimal deterministic intelligence (absolute, deterministic, 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and it provides an exponential-time brute-force algorithm for finding the ultimate minimal program that perfectly simulates said system. That program will essentially be the ultimate theory of physics. But you only need to find that program once, and then forever after you can in theory simulate anything in linear time with a big enough quantum computer.
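A toy sketch of that brute-force minimal-program search. The five-symbol expression language, the observation data, and the helper names are all invented for illustration; a real search would enumerate programs for a universal machine, which is what makes the time cost exponential:

```python
from itertools import product

ALPHABET = "n+*12"            # a toy language: arithmetic expressions in n
observations = [3, 5, 7, 9]   # the "system's" outputs for n = 1..4

def run(prog, n):
    """Evaluate a candidate program; ill-formed ones return None."""
    try:
        return eval(prog, {"__builtins__": {}}, {"n": n})
    except Exception:
        return None

def shortest_matching_program(obs, max_len=8):
    # Enumerate by increasing length: the first hit is the minimal
    # program, i.e. the most compressed "theory" of the observations.
    for length in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=length):
            prog = "".join(chars)
            if all(run(prog, n) == y for n, y in enumerate(obs, 1)):
                return prog
    return None

print(shortest_matching_program(observations))  # 'n+n+1', i.e. 2n+1
```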
But you can only ever approach that ultimate, so if you want absolutely 100% accurate knowledge of how a physical system will evolve, you need to make the physical system itself. We already know this and make use of it throughout engineering.
First we create things in approximate simulations inside our mammalian cortices, and we create and discard a vast number of potential ideas, the best of which we simulate in ever more detail in computers, until eventually we actually physically create them and test those samples.
I think this is a very strong further argument that future hyper-intelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, but it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.
But they will probably convert little chunks of dead matter here and there into hyperintelligences and use those to run countless approximate simulations (that is to say, hyperthought) of the interesting stuff they find, such as worlds with life.
Roko wasn’t censored, he deleted everything he’d ever posted. I’ve independently confirmed this via contact with him outside LW.
Roko was censored and publicly abused in and about one post, but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)
Actually lots of people were censored—several of my comments were removed from the public record, for example—and others were totally deleted.
Hmm, I didn’t ask whether he’d ever had a comment deleted; what I’m confident of is that the root-and-branch removal of all his work was his own doing.
That’s what he says here.