Here’s an incomplete framework/set of partial suggestions that I’ve started developing.
I’m going to begin with the assumption that the goal of the AI is to turn the solar system into paperclips. Since most estimates of the computing power of the human brain put it at around 10 petaflops, I’m going to assume that the AI needs access to a similar amount of hardware. Even if the AI is 1000 times more efficient than a human brain and only needs access to 10 teraflops of computing power, it still isn’t going to be able to do things like copy itself into every microwave on the planet.
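As a rough illustration of that arithmetic (the microwave figure below is my own loose assumption, not an established number):

```python
# Back-of-envelope: how much hardware does the AI need?
BRAIN_FLOPS = 10e15        # ~10 petaflops, a common estimate for the human brain
EFFICIENCY_FACTOR = 1000   # suppose the AI is 1000x more efficient than a brain
MICROWAVE_FLOPS = 100e6    # generous guess for an embedded microcontroller

ai_flops = BRAIN_FLOPS / EFFICIENCY_FACTOR   # 10 teraflops
shortfall = ai_flops / MICROWAVE_FLOPS       # how far short a microwave falls
print(f"AI needs ~{ai_flops:.0e} FLOPS; a microwave is ~{shortfall:,.0f}x too small")
```

Even granting the AI a 1000x efficiency advantage, a microwave-class processor falls short by a factor of around a hundred thousand.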
This limitation also makes sense if we think that our AI needs access to a lot of memory. If the AI is capable of processing large amounts of information, it probably needs somewhere to store all of that information. It’s not totally clear to me how much that data could be compressed by being distilled down to facts, but I think requiring at least tens of terabytes of storage is probably pretty sensible. It takes gigabytes of memory to make an accurate model of a few dozen water molecules, and I have a difficult time believing an AI is going to be effective at doing much of anything with significantly less memory than that. Access to less memory would impose really strong constraints on what the AI could do. You can’t have a conversation with half the world at once if you don’t have enough memory to keep track of the conversations.
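A quick sanity check on that last claim, using a made-up but plausible amount of state per conversation:

```python
# Memory needed to track simultaneous conversations with half the world.
WORLD_POPULATION = 7e9
BYTES_PER_CONVERSATION = 10e3   # ~10 KB of context per conversation (assumed)

conversations = WORLD_POPULATION / 2
total_bytes = conversations * BYTES_PER_CONVERSATION
print(f"~{total_bytes / 1e12:.0f} TB just for conversation state")  # ~35 TB
```

which lands in the same tens-of-terabytes range.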
With that being said, there are currently a handful of computers in the 10 petaflops range in the world, and presumably there will be a handful of computers 100-1000 times larger a decade from now. A mobile phone a decade from now might have on the order of 10 gigaflops of processing power, and a typical desktop computer maybe a teraflop. It’s not clear whether these are really good estimates, but they’re unlikely to be off by more than an order of magnitude.
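Under those estimates, the gap between consumer hardware and the AI’s requirements looks like this (same assumed numbers as above):

```python
# How many decade-from-now consumer devices would it take to match 10 petaflops?
AI_FLOPS = 10e15         # requirement if the AI is no more efficient than a brain
PHONE_FLOPS = 10e9       # ~10 gigaflops per future phone
DESKTOP_FLOPS = 1e12     # ~1 teraflop per future desktop

print(f"Phones needed:   {AI_FLOPS / PHONE_FLOPS:,.0f}")    # 1,000,000
print(f"Desktops needed: {AI_FLOPS / DESKTOP_FLOPS:,.0f}")  # 10,000
```

A million phones or ten thousand desktops, before accounting for any coordination overhead (more on that below).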
The conclusion is that our AI is probably only going to be able to run on a supercomputer, although it’s difficult to be very confident in that assertion, because it’s not really clear what a good estimate of the computational efficiency of the human brain is.
In general, “intelligence” or “problem-solving ability” doesn’t scale linearly with additional computing power. For most problems there is significant overhead associated with managing additional computing resources and with combining/synchronizing the results. Throwing more processors at a problem can make finding a solution slower, and throwing too many resources at a problem usually ends up looking a lot more like Twitch Plays Pokémon than a superintelligence. As a consequence, it isn’t clear how useful it will be, from an intelligence/problem-solving perspective, for an artificial reasoner to take over additional computers via internet connections that are orders of magnitude slower than its internal connections.
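One way to make this concrete is Amdahl’s law, extended with a simple per-processor coordination cost; the 5% serial fraction and the overhead coefficient below are illustrative assumptions, not measurements:

```python
# Amdahl's law with a coordination-overhead term:
# speedup(n) = 1 / (serial + parallel/n + overhead * n)
def speedup(n, serial=0.05, overhead=0.0005):
    parallel = 1.0 - serial
    return 1.0 / (serial + parallel / n + overhead * n)

for n in [1, 10, 100, 1000, 10000]:
    print(f"{n:>6} processors -> {speedup(n):5.1f}x speedup")
```

With these numbers, speedup peaks at a few dozen processors and then declines; at ten thousand processors the system is actually slower than a single processor, which is exactly the “more processors can be slower” effect described above.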
There are some things that we can expect an AI to be much better at than humans, but it isn’t clear that an AI, even a very intelligent AI, will outperform humans at all tasks. Many humans can be outperformed on shortest-path or minimum-spanning-tree problems by slime molds. Since humans are much more intelligent than slime molds, it seems safe to say that even vastly superior intelligence is no guarantee of superior performance in every domain.
One of the things we’ve learned from computer science is that what seems difficult to humans is a very poor measure of computational complexity. To a human, multiplying together two five-digit numbers seems much more difficult than determining whether a picture contains a cat or a fish, but from a computational standpoint the first is trivial while the second is very difficult.
We should probably not expect a general artificial intelligence to be significantly better than humans at manipulating humans or at reading human body language and facial expressions. Reading the nuances of human facial expressions is very difficult. To get a more intuitive sense of this difficulty, try having a conversation with someone whose face is upside down relative to you. Humans are super-specialists in communicating with other humans, right down to the whites of our eyes.
It is certainly possible that, because the AI is programmed by humans, it will have some understanding of human language/desires/etc., but humans have experienced much stronger selection pressures towards understanding/communicating with/manipulating other humans than our AI is likely to experience.
With that being said, there are tasks that computers are much, much better at than humans. One of the things that machines, and presumably a machine intelligence, can do very well that humans struggle with is long sequential reasoning. Humans tend to need reinforcement at each step along a path of complex reasoning. Humans play chess by developing heuristics for “good position”, while machines play chess primarily via search trees toward an end goal.
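To make the contrast concrete, here is a minimal sketch of the kind of game-tree search machines use; the move generator and evaluation function are abstract placeholders, not a real chess engine:

```python
# Minimal negamax search: choose moves by reasoning many steps ahead
# toward an end goal, rather than by pattern-matching "good position".
def negamax(state, depth, moves, apply_move, evaluate):
    """Return (best_score, best_move) for `state`, searching `depth` plies."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in legal:
        score, _ = negamax(apply_move(state, move), depth - 1,
                           moves, apply_move, evaluate)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```

Given a real move generator and evaluation function, those dozen lines will happily examine millions of positions in one unbroken chain of reasoning, with no need for reinforcement along the way.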
It is not totally clear to what extent the differing levels of comfort with long sequential chains is an accident of human evolution, a limitation of the fact that human brains work by moving ions around, or whether these sorts of heuristics are necessary for complicated sequential reasoning in worlds with impractically large probabilistic search spaces. A full discussion of this issue, and of the “evolution” of AI heuristics, is probably very important, but only tangentially related to the current question. I’ll try to address it more fully in a separate post.
I intended to make more progress by this point, but the more I think about this, the more facets of the question come to mind. I’ll try to come back and add to this post sometime in the next week, but I need to take a break from this line of inquiry for a while, since writing LW posts isn’t my day job. I’m posting this in its incomplete form so people can play around with it if they like, and so I can stop thinking about it and free my mind up to think about other things for a while.