This is a good argument, but it seems to assume that the first (F)AGI (in particular, a recursively self-improving one) is the direct product of human intelligence. I think a more realistic scenario is that any such AGI is the product of a number of generations of non-self-improving AIs: machines that can be much better than humans at formal reasoning, finding proofs, and so on.
Does that avoid the risk of some runaway not-so-FAI? No, it doesn't, but it reduces the chance. And in the meantime, there are many, many advances that could be made with a bunch of AIs that reach, say, an IQ of 300 (as a figure of speech; we need another unit for AI intelligence), even if only in a subdomain such as math/physics.