Computers work by performing a sequence of computations, one at a time: parallelization can cut down the time for repetitive tasks such as linear algebra, but it hits diminishing returns very quickly. This is very different from the way the brain works; the brain is highly parallel. Is there any reason to think that our current techniques for making algorithms are powerful enough to produce “intelligence”, whatever that means?
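The diminishing returns mentioned here are usually formalized as Amdahl's law: if a fraction p of a task can be parallelized, the speedup on n processors is at most 1/((1−p) + p/n). A quick sketch (the 95% figure is illustrative, not measured):

```python
# Amdahl's law: speedup from running the parallelizable fraction p
# of a task on n processors. The serial fraction (1 - p) dominates
# as n grows, which is where the diminishing returns come from.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the speedup saturates:
for n in (1, 4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# The limit as n grows without bound is 1 / (1 - p) = 20x here,
# no matter how many processors are added.
```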
All biological organisms, considered as signalling or information-processing networks, are massively parallel: huge numbers of similar cells, each with slightly different state, signalling one another. It’s not surprising that the biologically evolved brain works the same way. A Turing-machine-like sequential computer powerful and fast enough for general intelligence would be far less likely to evolve.
So the fact that human intelligence is slow and parallel isn’t evidence for thinking you can’t implement intelligence as a fast serial algorithm. It’s only evidence that the design is likely to be different from that of human brains.
It’s likely true that we don’t have the algorithmic (or other mathematical) techniques yet to make general intelligence. But that doesn’t seem to me to be evidence that such algorithms would be qualitatively different from what we do have. We could just as easily be a few specific algorithmic inventions away from a general intelligence implementation.
Finally, as far as sheer scale goes, we’re on track to achieve rough computational parity with a human brain in a single multi-processor cluster within IIRC something like a decade.
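For what that parity claim amounts to, here is the usual back-of-envelope estimate. All the figures below are commonly cited rough values, uncertain by at least an order of magnitude, chosen only to show the shape of the comparison:

```python
# Rough, commonly cited estimates -- not precise measurements.
synapses = 1e14          # synapses in a human brain (rough estimate)
signals_per_sec = 100    # order-of-magnitude firing events per synapse per second
brain_ops = synapses * signals_per_sec   # ~1e16 "operations" per second

cluster_flops = 1e16     # ~10 petaFLOPS, roughly a top cluster of the era
print(brain_ops / cluster_flops)  # -> 1.0 under these assumptions
```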
I’m not trying to play burden-of-proof tennis here, but surely the fact that the only “intelligence” we know of is implemented in a massively parallel way should give you pause before assuming it can be done serially. Unless, of course, the kind of AI that humans create is nothing like the human mind, in which case my question is irrelevant.
But that doesn’t seem to me to be evidence that such algorithms would be qualitatively different from what we do have.
But we already know that the existing algorithms (in the brain) are qualitatively different from computer programs. I’m not an expert, so apologies for any mistakes, but the brain is not massively parallel in the way that computers are. A parallel piece of software can funnel a repetitive task to different processors (say, running the same algorithm on each value of a vector). But parallelism is a built-in feature of how the brain works: neurons and clusters of neurons perform computations semi-independently of each other, yet are still coordinated in a dynamic way. The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?
Regarding computational parity: sure, but I never said that would be the issue.
the fact that the only “intelligence” that we know of is implemented in a massively parallel way should give you pause as to assuming that it can be done serially.
An optimization process (evolution) tried and succeeded at producing massively-parallel biological intelligence.
No optimization process has yet tried and failed to produce serial-processing based intelligence. Humans have been trying for very little time, and our serial computers may be barely fast enough, or may only become fast enough some years from now.
The fact that parallel intelligence could be created is not evidence that other kinds of intelligence can’t be created. Talking about “the only intelligence we know of” ignores the fact that no process has ever tried to create a serial intelligence, and so of course none was created.
Unless of course the kind of AI that humans create is nothing like the human mind
That’s quite possible.
The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?
All algorithms can be implemented on our Turing-complete computers. The question is what algorithms we can successfully design.
What exactly do you mean by ‘serially’? Any parallel algorithm can be implemented on a serial computer. And we do have parallel computer architectures (multicore/multicpu/cluster) that we can use for speedups, but that’s purely an optimization issue.
There is no such thing as qualitatively different algorithms. Anything that a parallel computer can do, a fast enough serial computer can also do.
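The claim that any parallel algorithm can be run on a serial computer can be made concrete: a serial loop simulates a synchronous parallel network exactly, by computing every unit's next state from a snapshot of the current states. A minimal sketch (the three-unit threshold network is invented for illustration):

```python
# Serial simulation of a synchronous parallel network of threshold units.
# In the parallel model every unit updates simultaneously; serially, we get
# the identical result by reading from the old state vector and writing to
# a new one, so no update sees a neighbour's new value early.
def step(states, weights, threshold=1.0):
    n = len(states)
    new_states = [0] * n
    for i in range(n):  # serial loop over the "parallel" units
        total = sum(weights[i][j] * states[j] for j in range(n))
        new_states[i] = 1 if total >= threshold else 0
    return new_states

# Tiny 3-unit network: each unit excites the next one in a ring.
weights = [[0, 0, 1],
           [1, 0, 0],
           [0, 1, 0]]
state = [1, 0, 0]
for _ in range(3):
    state = step(state, weights)
# The activation travels around the ring and returns after 3 steps.
```

The only cost of the serial version is time (one pass over all units per parallel "tick"), which is the optimization point above, not a difference in what can be computed.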