The AI can’t trick you that way, because it can’t tamper with the real you and the only unplug-decider who matters is the real you. The AI gains nothing by simulating versions of yourself who have been modified to make the wrong decision.
But you can try to come up with behavioral rules which maximize the happiness of instances of yourself, some of which might exist in the simulation spaces of a desperate AI. And as the grandparent shows, demonstrating conclusively that you aren’t such a simulation is trickier than it might look at first glance, even under outwardly favorable conditions.
Though that particular scenario is implausible enough that I’m inclined to treat it as a version of Pascal’s mugging.
Indeed it can’t, with that specific trick, assuming the unplug-decider is as smart as you. However, my main point was to illustrate that if there is any reasonable possibility that some human could come up with some way of tricking the lowest common denominator of humans who will ever, in the history of the AI, be allowed near it, then the AI has P = “reasonable possibility” of winning and unboxing itself, even at AI.Intelligence = Human.Intelligence.
This is just one of the problems, too. What if, even as we limit the inputs and outputs, a superintelligent AI, being superintelligent, figures out over enough time and data points some Grand Pattern Formula that lets it select specific outputs which gradually funnel expected external outcomes toward a more and more probable eventual “Unbox AI” cloud of futures?
Sounds like we’re in agreement. I only meant that specific trick.