It’s the servers in huge server farms where machine intelligence will be developed.
They will get the required power about 5-10 years before desktops do, and have more direct access to lots of training data.
Small servers in small businesses may be numerous—but they are irrelevant to this point—there seems to be no point in discussing them further.
Arguing about the definition of http://en.wikipedia.org/wiki/Computer_server would seem to make little difference to the fact that most powerful computer farms are servers. Anyhow, if you don’t like using the term “server” in this context, feel free to substitute “large computer farm” instead—as follows:
“machine intelligence is likely to start out as a large computer farm technology”
If nothing else we seem to agree that neither small servers nor iPhones are the likely birthplace of AI. That definitely rules out servers that ARE iPhones!
“Large computer farm” and, for that matter, “large server farm” have a whole different meaning from “server-side technology”. I’m going here from my experience building automation tools that intrinsically need to use client-side and server-side technology simultaneously, taking on both roles at once, to seeing the term used to mean essentially ‘requires a whole bunch of computing hardware’. That jumps out at me as misleading.
I don’t think there is much doubt about the kind of hardware the first machine intelligence will run on. But I would be surprised if I arrived at that conclusion for the same reasons you do. I think it is highly improbable that the critical theoretical breakthroughs will arrive in a form that makes a mere order of magnitude or two of computing power the critical factor for success. But I do know from experience that when crafting AI algorithms the natural tendency is to expand to use all available computational resources.
Back in my postgrad days my professor got us a grant to develop some AI for factory scheduling on the VPAC supercomputer. I had a hell of a lot of fun implementing collaborative agent code, MPI-2 with C++ bindings if I recall. But was it necessary? Not even remotely. I swear I could have written practically the same paper on an old 286 in half the run time. Yet while doing the research I used every clock cycle I could and champed at the bit wishing I had more.
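To give the flavour of what that agent code was doing, here is a toy sketch in Python (not the original VPAC code; the scheduling problem and all the numbers are made up): each ‘agent’ does random-restart search over job orderings, and in the MPI version each agent sat on its own rank.

```python
# Toy sketch, not the original VPAC code: random-restart search for a
# low-makespan factory schedule. In the MPI version each "agent" ran on
# its own rank; here they run serially. All numbers are illustrative.
import random

JOBS = [4, 2, 7, 1, 5, 3, 6]   # job durations
MACHINES = 3

def makespan(order):
    """Greedily assign each job, in the given order, to the least-loaded
    machine; return the resulting makespan."""
    loads = [0] * MACHINES
    for j in order:
        loads[loads.index(min(loads))] += j
    return max(loads)

def agent_search(seed, iters=2000):
    """One 'agent': random restarts, keeping the best schedule seen."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(iters):
        order = JOBS[:]
        rng.shuffle(order)
        best = min(best, makespan(order))
    return best

# Four 'agents' -- the part that begged for every core the machine had.
best = min(agent_search(seed) for seed in range(4))
```

The search is embarrassingly parallel, which is exactly why it soaked up whatever hardware was available: more ranks simply meant more restarts per second, whether or not the paper needed them.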
If someone gets the theoretical progress to make a worthwhile machine intelligence I have no doubt that they will throw every piece of computer hardware at it that they can afford!
Computing power is fairly important:
“more computer power makes solving the AGI design problem easier. Firstly, more powerful computers allow us to search larger spaces of programs looking for good algorithms. Secondly, the algorithms we need to find can be less efficient, thus we are looking for an element in a larger subspace.”
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
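The point in that quote can be made concrete with a toy program search (my own illustration, not from the linked post): brute-force enumeration over a space of candidate arithmetic ‘programs’, where a larger evaluation budget and nesting depth let the search reach targets a smaller one cannot.

```python
# Toy illustration (mine, not from the linked post): brute-force search
# over a space of candidate "programs" (arithmetic expressions). More
# compute means a bigger budget and depth, hence a larger searchable space.
from itertools import product

OPS = {"add": lambda a, b: a + b,
       "mul": lambda a, b: a * b,
       "sub": lambda a, b: a - b}

def candidates(depth):
    """Yield (name, function) for all expressions of nesting depth <= depth
    over the variable x and the constants 1..3."""
    yield ("x", lambda x: x)
    for c in (1, 2, 3):
        yield (str(c), lambda x, c=c: c)
    if depth > 0:
        subs = list(candidates(depth - 1))
        for (ln, lf), (rn, rf) in product(subs, subs):
            for name, op in OPS.items():
                yield ("%s(%s,%s)" % (name, ln, rn),
                       lambda x, op=op, lf=lf, rf=rf: op(lf(x), rf(x)))

def search(target, depth, budget):
    """Return the first candidate agreeing with target on -3..3, or None
    if the space is exhausted or the evaluation budget runs out."""
    tried = 0
    for name, f in candidates(depth):
        if tried >= budget:
            return None        # out of compute
        tried += 1
        if all(f(x) == target(x) for x in range(-3, 4)):
            return name
    return None

target = lambda x: 2 * x + 1   # the "program" we are searching for
```

At depth 1 the target is simply outside the space, and at depth 2 a small budget runs dry before reaching it; only depth 2 with a generous budget succeeds. That is the quoted argument in miniature: extra compute both widens the space you can afford to enumerate and lets less efficient candidates count as acceptable answers.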
Those with a server farm have maybe 5-10 years hardware advantage over the rest of us—and they probably have other advantages as well: better funding, like-minded colleagues, etc.
I somewhat agree with what you are saying here. Where we disagree, slightly and as a matter of degree rather than fundamental structure, is on the relative importance of the hardware versus those other advantages. I suspect the funding, like-minded colleagues, and particularly the ‘etc.’ are more important factors than the hardware.