Three places similar ideas have occurred that spring to mind:
FIRST Suarez’s pair of novels Daemon and Freedom(tm) are probably the most direct analogue, because together they tell a story of taking over the world via software, with an intensely practical focus.
The essential point for this discussion here and now is that, prior to launching his system, the character who takes over the world first tests the quality of the goal state he’s aiming at by implementing it as a real-world MMORPG. The takeover of the world then proceeds via trigger-response software scripts running on the net, which cause events in the real world via bribes, booby traps, contracted R&D, and video-game-like social engineering.
The MMORPG start not only functions as his test bed for how he wants the world to work at the end… it also gives him starting cash, a suite of software tools for describing automated responses to human decisions, code to script the tactics of swarms of killer robots, and so on.
SECOND Nozick’s Experience Machine thought experiment is remarkably similar to your thought experiment, and yet aimed at a totally different question.
Nozick was not wondering “can such a machine be described in detail and exist” (this was assumed) but rather “would people enter any such machine and thereby give up on some sort of atavistic connection to an unmediated substrate reality, and if not what does this mean about the axiological status of subjective experience as such?”
Personally, I find the specifics of the machine to matter an enormous amount to how I feel about it… so much so that Nozick’s thought experiment doesn’t really work for me in its philosophically intended manner. There has been a lot of play with the concept in fiction that borders on the trope where the machine simply gives you the experience of leaving the machine if you try to leave it. This is probably some kind of archetypal response to how disgusting it seems in practice for people to be pure subjective hedonists.
THIRD Greg Egan’s novel Diaspora has most of the human descended people living purely in and as software.
In the novel, any common environment simulator and interface (which has hooks into the sensory processes of the software people) is referred to as a “scape”, and many of the software people’s political positions revolve around which kinds of scapes are better or worse for various reasons.
Konishi Polis produces a lot of mathematicians, and has a scape that supports “gestalt” (like vision) and “linear” (like speech or sound), but it does not support physical contact between avatars (their relative gestalt positions just ghost around and through each other), because physical contact seems somehow metaphysically coercive and unethical to them. By contrast, Carter-Zimmerman produces the best physicists, and has relatively high-quality physics simulations built into its scape, because its citizens think that high-quality minds with powerful intuitions require that kind of low-level physical experience embedded into their everyday cognitive routines. There are also flesh people (who think flesh gives them authenticity or something like that), robots (who think “fake physics” is fake, even though having flesh bodies is too dangerous), and so on.
All of these choices matter personally to the people involved… but there is essentially no lock-in, in the sense of an overarching controller forcing people to do one thing or another and settling how things will work for everyone for all time.
If you want to emigrate from Konishi to Carter-Zimmerman, you just change which server you’re hosted on (for better latency) and either have mind surgery (to retrofit your soul with the necessary reflexes for navigating the new kind of scape) or else turn on a new layer of exoself (which makes your avatar in the new place move according to a translation scheme based on your home scape’s equivalent reflexes).
If you want to, you can get a robot body instead (the physical world then becomes like a very, very slow scape, and you run into the question of whether to slow down your clock and let all your friends and family race ahead mentally, or keep your clock at normal speed and have the robot body be like a slow-moving sculpture you direct to do new things over subjectively long periods of time). Some people are still implemented in flesh, but if they choose they can get scanned into software and run as biology emulations. Becoming biologically based is the one transformation rarely performed, because… uh… once you’ve been scanned (or been built from software from scratch), why would you do this?!
Interesting angles:
Suarez assumes physical coercion and exponential growth as the natural order, and is mostly interested in the details of these processes as implemented in real political/economic systems. He doesn’t care about 200 years from now, and he uses MMORPG simulations simply as a testbed for practical engineering in intensely human domains.
Nozick wants to assume utopia, and a common objection is “who keeps the Experience Machine from breaking down?”
Egan’s novel has cool posthuman worldbuilding, but the actual story revolves around the question of keeping the experience machine from breaking down… eventually stars explode or run down… so what should be done in the face of a seemingly inevitable point in time when there will be no good answer to the question “how can we survive this new situation?”
Of course there are many examples of virtual reality in fiction! The goal of the post is to deal with superintelligence x-risk by making the UFAI build the VR in a particular way that prevents extra suffering and further intelligence explosions. All the examples you gave are still vulnerable to superintelligence x-risk, as far as I can tell.