Rather than building a single SAI and trusting our future—good or bad—to it, we instead will find ourselves tending to a diverse garden of different AIs, each optimized to a different environment. In this future, Humanity does not simply build a Last Invention and then enjoy a golden retirement. Rather, the future is filled with an endless series of new and unique challenges that we must adapt—and dare I say evolve—in response to.
I don’t understand why the previous sections of the post imply that. Here’s my understanding of the argument:
Intelligence is largely about understanding patterns that allow you to simplify underlying domains. However, many domains we care about have a tremendous amount of irreducible complexity: biological organisms evolved via a pseudorandom process, so there’s no reason to expect them to obey clean laws of abstraction. There is no unified theory of biology because there is no simple set of rules that generates the emergent complexity; it’s just layers of weird stuff built on top of one another all the way down. Therefore, an ASI couldn’t just understand all of biology and then one-shot an immortality drug. Creating an immortality drug would necessarily involve a great deal of trial and error; an optimal approach might look something like random sampling across a very large space of possibilities, where the occasional hit lets you narrow down the possibility space.
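If I take that picture literally, the process the post seems to have in mind is roughly the loop below. This is a toy Python sketch with an entirely made-up search space and scoring function, not anything from the post itself:

```python
import random

# Toy "sample, then narrow" search over an irreducibly complex space.
# The space, the candidates, and the scoring function are all invented
# for this sketch; nothing here corresponds to real biology.

def score(candidate):
    # Stand-in for an expensive experiment: the only way to learn a
    # candidate's value is to evaluate it, one candidate at a time.
    return random.Random(candidate).random()

def sample_and_narrow(space_size=1_000_000, rounds=5, samples_per_round=1_000):
    region = range(space_size)
    best = None
    for _ in range(rounds):
        # No clean theory tells us where to look, so draw candidates at
        # random from the current region and pay for each evaluation.
        tried = [(score(c), c) for c in random.sample(region, samples_per_round)]
        top = max(tried)
        if best is None or top > best:
            best = top
        # Occasionally hitting on something good lets us narrow the
        # search to the neighbourhood of the best candidate so far.
        center = best[1]
        radius = max(len(region) // 10, samples_per_round)
        region = range(max(0, center - radius), min(space_size, center + radius))
    return best

if __name__ == "__main__":
    best_score, best_candidate = sample_and_narrow()
    print(f"best score {best_score:.3f} at candidate {best_candidate}")
```

The point of the sketch is just that every evaluation has to be paid for individually; no cleverness about the structure of the space lets you skip the sampling.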
Thus, we can’t have powerful AI, and we won’t have a single ASI controlling everything. We’ll have narrow AIs optimised for different things instead, and humans will be the gardeners of these optimisers.
===
But why on earth does the second paragraph follow from the first? What’s stopping an ASI from figuring out the best way to navigate these heuristics and applying them itself, the same way humans manage to do both biology and discrete mathematics without having separate bio-humans and math-humans? If the best strategy for solving science is to have many different narrow AIs, the ASI is the thing that can create those narrow AIs. Why exactly is it that humans are the ones who “become gardeners atop a wild ecosystem of heuristic optimizers” when your own Doom example from the start of the post was about an AI doing exactly this right now?
It seems like the “anti-singularity” just means “We have an ASI that can do everything, except the ASI also needs to invest an enormous amount of labor into solving computationally intractable domains. It will only be mildly more sample-efficient than humans and largely have to rely on being able to think faster and work 24/7 to have an advantage over people in these areas.” So, we don’t get an AI bootstrapping itself into godhood, but we still become obsolete eventually.
But that’s not what you seem to be saying. You seem to go straight from “Domains like biology have a large amount of computational irreducibility” to “Therefore, humans will always remain at the highest level of abstraction.” Why? An ASI doesn’t have to find a unified theory of biology to make us essentially obsolete. Seems like an anti-singularity just gets us to the same place slower.