I didn’t find the view that AI will have human survival as an instrumental goal: for example, keeping humans as workers or, more likely, as a possible trade good for aliens or simulation owners. It would preserve humans to demonstrate its general friendliness to possible peers.
AI may also preserve humans for research purposes, such as running experiments in simulations.
Yeah, I think that’s another example of a combination of going partway into “why would it do the scary thing?” (3) and “wouldn’t it be good anyway?” (5). (A lot of people wouldn’t consider “AI takes over but keeps humans alive for its own (perhaps scary) reasons” to be a “non-doom” outcome.) Missing positions like this one is a consequence of trying to categorize into disjoint groups, unfortunately.
To fizzlers: advanced AI is internally unstable and can suddenly halt. The more advanced the AI, the quicker it halts, as it reaches its goal in less and less time.