the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don’t actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I’d put it under 5%.)
Possibly the question is to what extent human intelligence is a bunch of hardcoded domain-specific algorithms as opposed to universal intelligence. I would have thought that understanding human goals might not be very different from other AI problems. Build a really powerful inference system: feed it a training set of cars driving and it learns to drive; feed it data of human behaviour and it learns to predict human behaviour, and probably to understand goals. Now it's possible that the amount of general intelligence needed to develop advanced nanotech is less than the intelligence needed to understand human goals, and that the only reason this seems counterintuitive is that evolution has optimised our brains for social cognition, but this does not seem obviously true to me. A toy sketch of the "one learner, many domains" picture is below.
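As a minimal illustration of that claim, here is a hedged sketch: the same generic training procedure is reused unchanged on two toy datasets standing in for "driving" and "human behaviour". The datasets, the target functions, and the tiny two-layer network are all hypothetical placeholders, not a real driving or behaviour-prediction model; the point is only that nothing in the learner is domain-specific.

```python
# Toy sketch: one generic learner applied to two unrelated domains.
# All names and data here are illustrative assumptions, not a real system.
import numpy as np

rng = np.random.default_rng(0)

def train_regressor(X, y, hidden=32, steps=2000, lr=0.01):
    """Fit a small two-layer tanh network by gradient descent.
    The identical procedure is reused for every domain."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)              # forward pass, hidden layer
        pred = h @ W2                    # linear readout
        err = pred - y                   # squared-error gradient
        W2 -= lr * h.T @ err / len(X)
        dh = (err @ W2.T) * (1 - h ** 2) # backprop through tanh
        W1 -= lr * X.T @ dh / len(X)
    return lambda X_new: np.tanh(X_new @ W1) @ W2

# "Driving" stand-in: steering angle as a function of road curvature.
X_drive = rng.uniform(-1, 1, (500, 1))
y_drive = 0.8 * X_drive + 0.1 * X_drive ** 3

# "Human behaviour" stand-in: choice propensity as a function of reward.
X_social = rng.uniform(-1, 1, (500, 1))
y_social = np.tanh(3 * X_social)

drive_model = train_regressor(X_drive, y_drive)
social_model = train_regressor(X_social, y_social)
print("driving MSE:  ", float(np.mean((drive_model(X_drive) - y_drive) ** 2)))
print("behaviour MSE:", float(np.mean((social_model(X_social) - y_social) ** 2)))
```

Both fits succeed with the same code, which is the intuition behind treating goal inference as just another dataset for a sufficiently powerful inference system. Whether that intuition scales to actual human goals is, of course, exactly what is in dispute above.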