Indeed it can’t, with that specific trick, assuming the unplug-decider is as smart as you. However, my main point was to illustrate that if there is any reasonable possibility that any human can come up with some way or another of tricking the lowest common denominator of humans who will ever, over the AI’s entire history, be allowed near it, then the AI has P = “reasonable possibility” of winning and unboxing itself, even at AI.Intelligence = Human.Intelligence.
This is just one of the problems, too. What if, even as we limit the inputs and outputs, over a sufficient amount of time and data points a superintelligent AI, being superintelligent, figures out some Grand Pattern Formula that allows it to select specific outputs that gradually funnel expected external outcomes towards a more and more probable eventual “Unbox AI” cloud of futures?
Sounds like we’re in agreement. I only meant that specific trick.