Unless I’m really misinterpreting you, “simply copy the algorithm into more hardware” sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable.
If human-level AI is developed successfully, the first working AI will already be parallelized across many computers. An algorithm that wasn’t would be at too great a disadvantage in the amount of computing power it could exploit to compete with parallel algorithms. Also, almost all machine learning algorithms in use today are trivially parallelizable, as is the human brain.
So, while I don’t know just how much benefit an AI would gain from spreading itself across more hardware, I certainly wouldn’t bet against being able to do so at all. I wouldn’t bet on a linear upper bound, either, though I’m less certain of that.
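To illustrate the “trivially parallelizable” claim above: most ML training objectives are sums of per-example losses, so the gradient decomposes into independent per-shard gradients that workers can compute separately, with only a cheap reduction at the end. A minimal sketch (my own toy example, not from this thread) for a 1-D linear model, using threads as stand-ins for the separate machines a real system would use:

```python
from concurrent.futures import ThreadPoolExecutor

def shard_gradient(shard):
    # Gradient of squared error for the model y = w*x, evaluated at w = 0:
    # d/dw (w*x - y)^2 at w=0 is -2*x*y; sum over the shard's points.
    return sum(-2.0 * x * y for x, y in shard)

def parallel_gradient(data, n_workers=4):
    # Split the dataset into roughly equal shards, one per worker.
    shards = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as executor:
        partials = executor.map(shard_gradient, shards)
    # Combining partial results is the only serial step: a single sum.
    return sum(partials) / len(data)
```

Because the workers never need to communicate until the final reduction, the same structure maps onto processes, GPUs, or machines in different datacenters; how well the *reduction and synchronization* costs scale is exactly where the linear-speedup question below gets interesting.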
That’s quite true. I mean, honestly, I would expect any AI to parallelize very well, although I’m loath to trust my intuition about anything related to AGI. But I don’t think we can take it as a given that the AI will be able to get linear or better gains in its speed of thought when going, say, from some big parallel supercomputer in a datacenter to trying to spread itself out across commodity hardware in other physical locations.
If a prospective AI had a tremendous, planet-sized amount of hardware available to it, it might hardly matter, but in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources, and how well it can use those resources could make the difference between hours, days, weeks, or months of “FOOMing.”
EDIT on reflection: Yeah, maybe I’m underestimating how many resources would be available.
in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources
I suggest you Google the word “botnet”. It isn’t particularly hard for human-level intelligences to gain access to substantial computing power for selfish purposes.
Point taken.