Unless I’m really misinterpreting you, “simply copy the algorithm into more hardware” sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable. Rarely have I ever wanted to run a serial algorithm in parallel and had it be a matter of “simply run the same old thing on each one and put the results together.” The more complicated the algorithm in question, the more work it takes to efficiently and correctly split up the work; and at really large, Google-esque scales, you need to start worrying about latency and hardware reliability.
I tend to agree that recursive self-improvement will lead to big gains fast, but I don’t buy that it’s going to be immediately trivial for the AI to just throw more hardware at the problem and gain huge chunks of performance for free. It depends on the initial design.
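To put a rough bound on that point, Amdahl's law is the standard back-of-the-envelope here (this is my own aside, not something anyone in the thread claimed): if a fraction $s$ of an algorithm is inherently serial and the rest parallelizes perfectly across $N$ machines, the speedup is

$$S(N) = \frac{1}{s + \frac{1 - s}{N}} \le \frac{1}{s},$$

so an algorithm that is even 10% serial tops out at a 10x speedup no matter how much hardware it is copied onto. The exact numbers don't matter; the point is that the ceiling is set by the algorithm's design, not the hardware budget.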
Unless I’m really misinterpreting you, “simply copy the algorithm into more hardware” sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable.
If human-level AI is developed successfully, the first working AI will already be parallelized across many computers. An algorithm that wasn’t would be at too great a disadvantage, in the amount of computing power it could exploit, to compete with parallel algorithms. Also, almost all machine learning algorithms in use today are trivially parallelizable, as is the human brain.
So, while I don’t know just how much benefit an AI would gain from spreading itself across more hardware, I certainly wouldn’t bet against being able to do so at all. I wouldn’t bet on a linear upper bound, either, though I’m less certain of that.
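For concreteness, here's a minimal sketch of what "trivially parallelizable" tends to mean for today's learning algorithms: data-parallel gradient computation, where every worker runs the same code on its own shard of the data and the only coordination is averaging the results. (The linear model, the shard count, and the function names are all illustrative assumptions, not a claim about how an actual AGI would work.)

```python
# Sketch of data-parallel training: each worker computes a gradient on its own
# data shard independently; combining the results is a single averaging step.
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    """Gradient of mean squared error for a linear model on one data shard."""
    w, X, y = args
    return X.T @ (X @ w - y) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(10_000, 5)), rng.normal(size=10_000)
    w = np.zeros(5)

    # Split the data evenly across four workers (10,000 rows -> 2,500 each).
    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

    with Pool(4) as pool:
        for _ in range(50):  # a few full-batch gradient-descent steps
            grads = pool.map(shard_gradient, [(w, Xs, ys) for Xs, ys in shards])
            # With equal shard sizes, the mean of shard gradients is the full-batch gradient.
            w -= 0.1 * np.mean(grads, axis=0)
```

Nothing about the combining step gets harder as shards are added; the hard part, as the parent comment says, is when the algorithm's structure isn't this embarrassingly parallel to begin with.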
That’s quite true. I mean, honestly, I would expect any AI to parallelize very well, although I’m loath to trust my intuition about anything related to AGI. But I don’t think we can take it as a given that the AI will be able to get linear or better gains in its speed of thought when going, say, from some big parallel supercomputer in a datacenter to trying to spread itself out across commodity hardware in other physical locations.
If a prospective AI had a tremendous, planet-sized amount of hardware available to it, it might hardly matter, but in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources, and how well it can use those resources could make the difference between hours, days, weeks, or months of “FOOMing.”
EDIT on reflection: Yeah, maybe I’m underestimating how many resources would be available.
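One crude way to quantify the datacenter-vs-scattered-hardware gap (again, my own back-of-the-envelope model, not anything established above): if each unit of work takes compute time $T_{\text{work}}$ and must be followed by one round of synchronization between machines costing $T_{\text{sync}}$, then the speedup from $N$ machines is roughly

$$S(N) \approx \frac{T_{\text{work}}}{T_{\text{work}}/N + T_{\text{sync}}} \le \frac{T_{\text{work}}}{T_{\text{sync}}}.$$

Round-trip latency over the open internet is several orders of magnitude worse than within a single datacenter, so the same algorithm hits its speedup ceiling correspondingly earlier once it's spread across commodity machines in different physical locations.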
in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources
I suggest you Google the word “botnet”. It isn’t particularly hard for human-level intelligences to gain access to substantial computing power for selfish purposes.
Point taken.