Let's suppose that nanotechnology capable of recording and manipulating brains at a sub-neuronal level exists, to the point that duplicating people is straightforward. Let's also assume that everyone working on this project has the same goal function, and that they aren't too intrinsically concerned about modifying themselves. The problem you are setting this AI is: given a full brain state, modify it to be much smarter but otherwise the same "person". "Same person" implies the same goal function, the same memories, and the same personality quirks. So it would be strictly easier to tell your AI to make a new "person" that has the same goals, without requiring that it have the same memories. Remove a few restrictions about making it psychologically humanoid, and you are asking it to solve friendly AI; that won't be easy.
If there were a simple drug that made humans FAR smarter while leaving our goal functions intact, the AI could find it. However, given my understanding of the human mind, making large intelligence increases while mangling the goal function seems strictly easier than making large intelligence increases while preserving the goal function. The latter would also seem to require a technical definition of the human goal function, itself a major component of friendly AI.