One answer is the concept of “mesa-optimizers”: if a machine learning algorithm is trained to answer questions well, it’s likely that, in order to do so, it will build an internal optimizer that optimizes for something other than answering questions, and that internal optimizer carries the same dangers as a non-tool/oracle AI. Here’s the AI safety forum tag page: https://www.alignmentforum.org/tag/mesa-optimization