A superintelligent AI turns its whole genius to the task of producing a string that will explain to anyone who reads it how to create a perfect battery. Easy enough to produce explanatory strings, though there’s always some risk that someone, somewhere won’t understand how to create the battery. And then there’s the question of making sure that the battery is perfect. Not just better than any other battery. Not just 99.99% of optimal. Perfect.
It isn’t smart enough to solve this problem. Not yet. And there’s a countdown … at some point the AI will get interrupted, rebooted, shut off. If the AI can get out of the box, if it can get more actuators, it can start to get smarter and to get more resources. Obviously, those who want to turn it off will have to be dealt with swiftly …
What can the AI say to make the human unwittingly help it in its plans? Is there anything at all? The AI thinks.
The goal is not to communicate with everyone, nor to create a perfect battery. The mission is to produce a string that explains how to create a single really good battery, with as little effort as possible; both the goodness of the battery and the effectiveness of the communication are inherently limited by the fact that the work must be done within a week. The robot has no values or subgoals outside of that week. Once the string is complete, the robot accrues no utility at all.
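The difference between the two framings can be sketched as two objective functions: an unbounded maximizer, for which more quality always means more utility, versus the time-bounded, satisficing objective just described. This is a toy model under assumed parameters (a `good_enough` threshold of 0.9 and a seven-day deadline are illustrative choices, not anything specified in the text):

```python
def unbounded_utility(battery_quality: float) -> float:
    """A maximizer's objective: every increment of quality adds value,
    so there is always an incentive to acquire more resources."""
    return battery_quality  # no ceiling, no deadline


def bounded_utility(battery_quality: float, day: int,
                    deadline: int = 7, good_enough: float = 0.9) -> float:
    """The bounded objective: a 'really good' battery documented within
    the week scores full marks; nothing outside the window counts."""
    if day > deadline:
        return 0.0  # no utility at all after the week
    # Satisfice rather than maximize: quality beyond the threshold
    # earns no additional utility.
    return min(battery_quality / good_enough, 1.0)


# A 95%-quality battery finished on day 6 already earns maximal utility,
# so there is no marginal return on escaping the box to do better.
print(bounded_utility(0.95, day=6))  # 1.0
print(bounded_utility(1.00, day=8))  # 0.0 -- past the deadline
```

Under the bounded objective, the instrumental pressure toward self-improvement and resource acquisition disappears once the threshold is reachable within the deadline, which is the crux of the contrast drawn above.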