I do think it’s plausible that e.g. nanotech requires some amount of trial-and-error or experimentation, even for a superintelligence. But such experimentation could plausibly be done quickly and cheaply.
But the main idea is that intelligence ultimately boils down to searching for the best answer by trying things, the way evolution does.
Evolution is a pretty dumb optimization process; ordinary human-level intelligence is more than enough to surpass its optimization power with orders of magnitude less trial and error.
For example, designing an internal combustion engine or a CPU requires solving some problems which might run into combinatorial explosions, if your strategy is to just try a bunch of different designs until you find one that works. But humans manage to design engines and CPUs and many other things that evolution couldn’t do with billions of years of trial and error.
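The gap between blind trial and error and even slightly directed search can be sketched with a toy optimization problem (this is an illustration of the search-efficiency point, not a model of real engineering: the objective, bit-length, and sampling budget are all arbitrary choices):

```python
import random

N = 40  # bits in the toy "design"

def fitness(bits):
    # Toy objective: more 1-bits is better; all-ones is optimal (fitness 40).
    return sum(bits)

def random_search(budget):
    # Evolution-flavored blind sampling: try random designs, keep the best score.
    best = 0
    for _ in range(budget):
        candidate = [random.randint(0, 1) for _ in range(N)]
        best = max(best, fitness(candidate))
    return best

def hill_climb():
    # Minimally directed search: flip one bit at a time, keep improvements.
    bits = [0] * N
    evaluations = 0
    for i in range(N):
        flipped = bits[:]
        flipped[i] = 1
        evaluations += 1
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return fitness(bits), evaluations

print(random_search(10_000))  # typically stalls around 30 of 40
print(hill_climb())           # reaches the optimum, (40, 40), in just N evaluations
```

Blind sampling would need on the order of 2^40 draws to hit the optimum by luck, while the directed search finds it in 40 evaluations; the point is only that a little structure in the search buys enormous factors over pure trial and error.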
There might be some practical problems for which combinatorial explosion or computational hardness imposes a hard limit on the capabilities of intelligence. For example, I expect there are cryptographic algorithms that even a superintelligence won’t be able to break.
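The cryptography point can be made concrete with back-of-the-envelope arithmetic on brute-forcing a 128-bit symmetric key (the guess rate below is a deliberately generous assumption, not a measured figure):

```python
# Rough cost of brute-forcing a 128-bit key by exhaustive search.
keyspace = 2 ** 128
guesses_per_second = 10 ** 18   # generous assumption: a billion billion guesses/s
seconds_per_year = 3.15e7

# On average you find the key halfway through the keyspace.
years = keyspace / 2 / guesses_per_second / seconds_per_year
print(f"~{years:.1e} years on average")
```

Even at that implausible guess rate the expected time is on the order of 10^12 years, which is why computational hardness plausibly binds even a superintelligence here.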
But I doubt that such impossibilities translate into practical limits: what does it matter if a superintelligence can’t crack the keys to your Bitcoin wallet, if it can just directly disassemble you and your computer into their constituent atoms?
Maybe developing disassembling technology itself unavoidably requires solving some fundamentally intractable problem. But I think human success at various design problems is at least weak evidence that this isn’t true. If you didn’t know the answer in advance, and you had to guess whether it was possible to design a modern CPU without intractable amounts of trial and error, you might guess no.
It’s very difficult to argue with most of the other claims if the base assumption is that this sort of technology is (a) possible, (b) achievable in one or a few shots, and (c) feasible with an amount of compute that is reasonable at planetary scale.