Speaking as someone who assigns a low probability to AI going FOOM, I agree that letting an AI go online drastically increases the plausibility that an AI will go FOOM.
However, without that capability other claims you’ve made don’t have much plausibility.
Even if it could barely scrape by in a Turing test against a five-year-old, it would still have all the powers that all computers inherently have, so it would already be superhuman in some respects, giving it enormous self-improving ability.
Not really. If a machine has no more intelligence than a human, even a moderately bright human, that doesn’t mean it will have enough intelligence to self-improve. Self-improvement requires deep understanding. A bright AI might be able to improve specific modules (say, by replacing a module for factoring numbers with one that uses a quicker algorithm), but being able to tune individual modules is very different from being able to redesign and improve its own general architecture.
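To make the "specific modules" point concrete, here is a minimal sketch in Python of what that kind of narrow improvement looks like: swapping a trial-division factoring routine for Pollard's rho behind the same interface. The function names are purely illustrative, and nothing about doing this requires, or confers, any general self-improvement ability.

```python
# Toy illustration of narrow, module-level "self-improvement": replacing a slow
# factoring routine with a faster one behind the same interface. Assumes the
# input is composite; names are illustrative, not from any real system.
import math
import random

def factor_trial_division(n):
    """Slow module: find a nontrivial factor of composite n by trial division."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    raise ValueError("n appears to be prime")

def factor_pollard_rho(n):
    """Faster module, same interface: Pollard's rho for composite n."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                        # found a proper factor; otherwise retry
            return d

# The "improvement" is just rebinding one module to a better algorithm:
find_factor = factor_pollard_rho          # previously factor_trial_division
print(find_factor(8051))                  # 8051 = 83 * 97
```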
There are other general problems with AIs going FOOM. In particular, if the AI doesn’t have access to new hardware, then it is limited to whatever improvements software alone can deliver. Thus for example, if P != NP in a strong way, that puts a serious limit on how efficient software can become. Similarly, some common mathematical algorithms, such as linear programming, are close to their theoretical optima. There’s been some interesting discussion here about this subject before. See especially this discussion of mine with cousin_it. That discussion made me think that theoretical comp sci provides fewer barriers to AI going FOOM than I thought, but it still seems to provide substantial barriers.
There are a few other issues that an AI trying to go FOOM might run into. For example, there’s a general historical metapattern: it takes more and more resources to learn more about the universe. In the 1850s a single biologist could make amazing discoveries and a single chemist could discover a new element, but now even turning out minor papers can require a lot of resources and people. The resources it takes to deepen our understanding seem to grow at about the same rate as that improved understanding gives us more resources to work with, and in many fields there is, if anything, a decreasing marginal return. So even if the AI is very smart, it might not be able to do that much.
Certainly, an AI going FOOM is one of the more plausible forms of Singularity proposed. But I don’t assign it a particularly high probability as long as people aren’t doing things like giving the AI general internet access. The nightmare scenario seems to be that a) someone gives a marginally smart AI internet access and b) the AI discovers a very quick algorithm for factoring integers, at which point the entire internet becomes the AI’s playground, and shortly after that, functional brainpower. But this requires three unlikely things to occur: 1) someone connects the AI to the internet with minimal supervision, 2) a fast factoring algorithm exists that no one has discovered, and 3) the AI finds that algorithm.
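For what it’s worth, step (b) is doing a lot of work in that scenario: much of the internet’s security (RSA in particular) rests on factoring being hard, and once you can factor the modulus, recovering the private key is trivial arithmetic. A minimal sketch, with a toy textbook-sized key and a stand-in `fast_factor` oracle representing the hypothetical algorithm:

```python
# Sketch: given a fast factoring oracle, recovering an RSA private key is easy.
# The 16-bit key below is a textbook toy; fast_factor stands in for the
# hypothetical quick factoring algorithm from the scenario above.

def recover_private_exponent(n, e, fast_factor):
    p = fast_factor(n)              # the hypothetical fast factoring step
    q = n // p
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)          # d such that e*d == 1 (mod phi)

n, e = 61 * 53, 17                   # toy public key (p=61, q=53)
d = recover_private_exponent(n, e, lambda m: 61)

message = 42
ciphertext = pow(message, e, n)      # "intercepted" traffic
assert pow(ciphertext, d, n) == message   # decrypted with the recovered key
```

None of which makes (2) or (3) any more likely, of course; it just explains why that particular conjunction is the nightmare version.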
For example, there’s a general historical metapattern that it takes more and more resources to learn more about the universe.
This is one of the strongest arguments I’ve ever heard against FOOM. But if we can get an AI up to the level of one moderately smart scientist, horizontal scaling turns it into a million scientists working at 1000x the human rate, with none of the usual problems of coordination and akrasia, which sounds extremely scary.
I guess I should have noted that I’m assuming it can have all the hardware it wants. If it doesn’t, yes, that does create problems. There’s only so much better you can do than Quicksort.
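The Quicksort remark is just the standard information-theoretic lower bound; a rough sketch for concreteness: any comparison-based sort can be viewed as a decision tree whose leaves must distinguish all n! input orderings, so for the worst-case number of comparisons h,

\[
2^{h} \;\ge\; n! \quad\Longrightarrow\quad h \;\ge\; \log_2 (n!) \;=\; n \log_2 n - O(n).
\]

So within the comparison model no amount of cleverness buys more than constant factors over n log n, which is the sense in which software-only improvement hits a wall.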
And the reason I think that a transhuman AI might still be bad at the Turing test is that humans are really good at it, and pretty bad at remembering that ALL execution paths have to return a value, and that it has to be a string. So I think computers will learn to program long before they learn to speak English.