There’s a limit to how large we can scale computers at any given tech level. What you’re describing is basically a supercomputer (many CPUs rather than one huge one), and there’s still a limit to what’s practical with those.
What do you mean by “evolve intelligence”? Run evolutionary algorithms on random bits of code? How do you evaluate the results? Before you can use a search algorithm you have to be able to define the target, which is most of the problem here, and search is likely to be impractically slow in something as big as “the space of all programs”.
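To make the point concrete, here’s a minimal sketch (in Python, all names hypothetical) of what “evolve random bits of code” would have to look like. The loop itself is trivial; the part nobody knows how to write is fitness(), i.e. defining the target, and every generation requires evaluating every candidate, which is where the size of “the space of all programs” bites.

```python
import random

def mutate(program: str) -> str:
    """Flip one random character -- a stand-in for 'random bits of code'."""
    i = random.randrange(len(program))
    return program[:i] + chr(random.randrange(32, 127)) + program[i + 1:]

def fitness(program: str) -> float:
    """Placeholder: nobody knows how to score a candidate for
    'general intelligence' -- defining this is most of the problem."""
    raise NotImplementedError("defining the target is the hard part")

def evolve(seed: str, generations: int = 1000, population: int = 100) -> str:
    """Generic mutate-and-select loop; the structure is easy, the scoring is not."""
    pool = [seed]
    for _ in range(generations):
        candidates = [mutate(random.choice(pool)) for _ in range(population)]
        # Every candidate has to be evaluated before selection can happen at all.
        pool = sorted(candidates, key=fitness, reverse=True)[:10]
    return pool[0]
```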
Having 1000+ petabytes is not impossible at our level of technology, so it seems somewhat nitpicky to focus on the physical absurdity of house-sized computers.
Run Watson, select the Watsons that can solve problems better.
1000 petabytes of what? RAM? And how do you know that’s enough to do what you want anyway? My point, at any rate, is that we can’t just throw a billion dollars at the problem and get a computer “fast enough to ‘evolve an AI’”—universities, companies and governments are spending money on supercomputers right now, and those machines still hit limits from underlying technical issues like cooling and inter-processor communication (as the other commenters pointed out).
Watson is a big, complex program, not a small DNA-like seed that can easily be mutated and iterated on automatically. There’s no known small seed that generates anything like a generally intelligent agent (except, of course, DNA itself and the resulting biology, which can’t be efficiently simulated even on a supercomputer).