The trouble is that while such a limited AI would be safer, it would also be proportionately less useful. There are two important reasons to want a safe, powerful AI much more than a safe, not-so-powerful AI:
Such an AI will help us stop other people from building an unsafe, highly competent AI and releasing it into the wild.
An AI that shares human values would help us cure disease, save the environment, explore the stars, and so on, but only if it is highly competent.
You could do all those things with more limited agents; it would just take longer and be less efficient.
Sure—we could do all these things with slide rules and scratch paper, given enough time and resources. But more powerful technology serves as an important multiplier in making things happen.
And air-cooled fission cores have amazing simplicity and power density.
What bugs me about the concept of a “seed AI” that basically rebuilds itself incrementally is that I don’t see much difference from rigging a nuclear reactor to blow by packing too much highly enriched uranium too close together, or from building an electrical panel where the wiring is all intermixed and there are bricks of explosives mixed in.
If you don’t have a clean, rational design for an AI, with a clear purpose and a clear definition of success/failure for each subsystem, you’ve done a bad job of engineering one. We absolutely could develop AIs that automate most menial tasks, because those tasks are well defined. We could develop one that acts as a force multiplier for existing engineers: the engineer specifies the optimization parameters, and the AI produces candidate designs that, based on simulations and past experience, it expects will optimally meet those parameters.
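The workflow described above can be sketched in a few lines. This is a purely illustrative toy, not any real system: all the names (`Candidate`, `simulate`, `propose_designs`, `optimize`) and the cost model are invented for the example. It shows the division of labor being argued for: the engineer fixes the parameter ranges and the success metric, while the narrow agent just enumerates candidates, scores each one with a simulation, and returns the best few.

```python
# Hypothetical sketch of a "force multiplier" design agent: the engineer
# specifies parameter ranges and a success metric; the agent proposes
# candidates, scores them with a (stand-in) simulation, and returns the top
# designs. Every name and formula here is illustrative, not a real API.
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    thickness_mm: float
    material: str
    score: float = 0.0


def simulate(c: Candidate) -> float:
    """Stand-in for a real physics simulation: a toy cost model that
    rewards thinner designs in stiffer materials."""
    stiffness = {"steel": 1.0, "aluminum": 0.4, "titanium": 0.9}[c.material]
    return stiffness / c.thickness_mm


def propose_designs(n: int, seed: int = 0) -> list[Candidate]:
    """The agent's proposal step: sample candidate designs from the
    engineer-specified parameter ranges."""
    rng = random.Random(seed)  # seeded, so runs are reproducible
    return [
        Candidate(
            thickness_mm=rng.uniform(1.0, 10.0),
            material=rng.choice(["steel", "aluminum", "titanium"]),
        )
        for _ in range(n)
    ]


def optimize(n_candidates: int = 100, top_k: int = 3) -> list[Candidate]:
    """Score every candidate against the clear success metric and return
    the best top_k for the engineer to review."""
    candidates = propose_designs(n_candidates)
    for c in candidates:
        c.score = simulate(c)
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]


best = optimize()
```

The point of the structure is the one made in the comment: each subsystem (proposal, simulation, selection) has a clear purpose and a clear definition of success, and the human stays in the loop to judge the final candidates.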
Even more difficult things, like treatments for aging and nanomachinery design and production, could be solved with limited-function, specialized agents acting as a force multiplier. Hardly the same thing as going back to paper and slide rules.