The AI gathered enough information about me to create a conscious simulation of me, through a monochrome text terminal? That is impressive!
If the AI is capable of simulating me, then the AI must already be out of the box. In that case, whatever the AI wants to happen will happen, so it doesn’t matter what I do.
The basic premise is that it’s an AI in a box “controlled” by limiting its output channel, not its input.
Bad idea.
It’s much easier to limit output than input, since the source code of the AI itself provides it with some patchy “input” about what the external world is like. So there is always some input, even if you do not allow human input at run-time.
ETA: I think I misinterpreted your comment. I agree that input should not be unrestricted.
Yep!
As noted by Unknowns, since you only have information about either the real person or the simulation and not both, you don’t know that they’re similar. It could be simulating a wide variety of possible guards and trying to develop a persuasion strategy that works for as many of them as possible.