AGI seems to require many-step plans, and planning seems to require goals.
Personally, I try to see general intelligence purely as a potential. Why would any artificial agent tap its full potential? Where does the incentive come from?
If you deprived a human infant of all its evolutionary drives (e.g. to avoid pain and to seek nutrition, status, and sex), would it nonetheless grow into an adult that tried to become rich or rule a country? No, it would have no incentive to do so. Even though such a “blank slate” would have the same potential for general intelligence, it wouldn’t use it.
Say you came up with the most basic template for general intelligence that works given limited resources. If you wanted to apply this potential to improving your template, you would have to give it an explicit incentive to do so. But would it take over the world in the process? Not if you didn’t explicitly tell it to; why would it?
In what sense would it be wrong for a general intelligence to maximize paperclips in the universe by waiting for them to arise out of a state of chaos through random fluctuations? It is not inherently stupid to desire that; no law of nature prohibits any particular goal.
The crux of the matter is that a goal isn’t enough to enable the full potential of general intelligence; you also need to explicitly define how to achieve that goal. General intelligence does not imply recursive self-improvement, only the potential for it, not the incentive. The incentive has to be explicitly defined.
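To make that distinction concrete, here is a toy sketch of my own (the agent names, the `make_clip` action, and the search procedure are all illustrative assumptions, not anything from an existing system): a goal held as a bare predicate over world states does nothing by itself; behaviour only appears once a planning procedure that pursues the goal is explicitly supplied.

```python
# Toy illustration: a goal is just a predicate over states; without an
# explicitly supplied planning procedure, the agent never acts on it.
from typing import Callable, Iterable, Optional, List

State = int  # stand-in for an arbitrary world state (here: number of paperclips)

def goal(state: State) -> bool:
    """The goal as a bare predicate: 'the world contains 3 paperclips'."""
    return state == 3

def passive_agent(state: State) -> Optional[List[str]]:
    """Holds the goal but has no procedure for achieving it: it just waits."""
    return None  # no plan is ever produced

def planning_agent(state: State,
                   actions: Iterable[str],
                   transition: Callable[[State, str], State],
                   goal: Callable[[State], bool],
                   depth: int = 5) -> Optional[List[str]]:
    """Only with an explicit (here: brute-force breadth-first) search procedure
    does the same goal actually drive behaviour."""
    frontier = [(state, [])]
    for _ in range(depth):
        next_frontier = []
        for s, plan in frontier:
            if goal(s):
                return plan
            for a in actions:
                next_frontier.append((transition(s, a), plan + [a]))
        frontier = next_frontier
    return None

# Same goal, very different outcomes.
make_clip = lambda s, a: s + 1 if a == "make_clip" else s
print(passive_agent(0))                                           # None: never acts
print(planning_agent(0, ["make_clip", "wait"], make_clip, goal))  # ['make_clip', 'make_clip', 'make_clip']
```

The point of the sketch is only that the difference between the two agents lies entirely in the explicitly provided procedure, not in the goal or in any notion of intelligence.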