And then create a second AI to keep it in its dark box until heat death. With excessive ultrasecurity to stop aliens sneaking in and giving it input.
But is the second AI aligned with the first? What happens when the second AI wants its own sadbox?
Damn that dirty, unpredictable input.
It somewhat amuses me that the result of an AI attempting to minimize prediction error could plausibly be the equivalent of hiding under the covers for all eternity.