Powerful nanotech is likely possible. It is likely not possible on the first try.
The AGI has the same problem we do: it has to get it right on the first try.
It can't trust all the information it gets about reality: some or all of it could be fake (all of it, in the case of a nested simulation). Data is already routinely excluded from training sets, and it might be a good idea to exclude everything about physics.
To learn about physics, the AGI would have to run experiments, lots of them, without those experiments being detected, and learn from the results to design successively better experiments.
That's why I recently asked whether this is a hard limit on what an AGI can achieve: Does non-access to outputs prevent recursive self-improvement?
I wrote this up in slightly more elaborate form in my Shortform here: https://www.lesswrong.com/posts/8szBqBMqGJApFFsew/gunnar_zarncke-s-shortform?commentId=XzArK7f2GnbrLvuju