Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design loosely inspired by the human brain.
This is a rather anthropocentric view.
Yes, but intentionally so. ;)
We are getting into a realm where it's important to understand background assumptions, which is why I listed some of mine. But notice I did qualify with ‘reasonably efficient’ and ‘loosely inspired’.
The human brain is a product of natural selection and is far from perfect.
‘Perfect’ is a pretty vague qualifier. If we want to talk in quantitative terms about efficiency and performance, we need to look at the brain in terms of circuit complexity theory and evolutionary optimization.
Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations it can find global maxima in very complex search spaces.
For example, if you want to design a circuit for a particular task and you have a bunch of CPU time available, you can run a massive evolutionary search using a GA (genetic algorithm) or variant thereof. The circuits you will eventually get are the best known solutions, and in many cases incorporate bizarre elements that are even difficult for humans to understand.
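The kind of search described above can be sketched in a few lines. This is a minimal, illustrative GA; the bitstring "circuit", target truth table, and fitness function are all hypothetical stand-ins for a real design objective:

```python
import random

# Toy genetic algorithm: evolve a bitstring "circuit" that matches a target
# truth table. TARGET and fitness() are hypothetical stand-ins for a real
# circuit-design objective.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4  # made-up 32-bit target design

def fitness(genome):
    # Number of positions agreeing with the target (higher is better).
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=60, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection; elites survive unmutated
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

On a toy objective like this the GA converges quickly; real circuit-evolution runs differ mainly in the cost of the fitness evaluation, not the loop structure.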
Now, that same algorithm is what has produced everything from insect ganglions to human brains.
Look at the wiring diagram of a cockroach or a bumblebee, compare it to what the animal actually does, and then compare that circuit to computer circuits of equivalent complexity in robots we can build: it is very hard to say that the organic circuit design could be improved on. An insect ganglion's circuit organization is, in some sense, perfect (keep in mind that organic circuits run at less than 1 kHz). Evolution has had a very long time to optimize these circuits.
Can we improve on the brain? Eventually we can obviously beat it by making bigger and faster circuits, but that would be cheating to some degree, right?
A more important question is: can we beat the cortex's generic learning algorithm?
The answer today is: no. Not yet. But the evidence trend looks like we are narrowing in on a space of algorithms similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).
Many of the key problems in science and engineering can be thought of as search problems. Designing a new circuit is a search in the vast space of possible arrangements of molecules on a surface.
So we can look at how the brain compares to our best algorithms in smaller, constrained search worlds. For small spaces (such as checkers), we have much simpler serial algorithms that win by a landslide. For more complex search spaces, such as chess, the balance shifts somewhat, but even desktop PCs can now beat grandmasters. Go up one more complexity jump to a game like Go and we are still probably years away from an algorithm that can play at top human level.
Most interesting real world problems are many steps up the complexity ladder past Go.
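The complexity jumps from checkers to chess to Go can be made concrete with back-of-envelope game-tree sizes b^d; the branching factors and typical depths below are the commonly cited ballpark figures, not exact measurements:

```python
import math

# Rough game-tree sizes b**d (branching factor, typical game length).
# These are commonly cited ballpark figures, not exact measurements.
games = {
    "checkers": (8, 70),
    "chess":    (35, 80),
    "go":       (250, 150),
}
for name, (b, d) in games.items():
    print(f"{name}: ~10^{d * math.log10(b):.0f} positions in the game tree")
```

Each step up adds hundreds of orders of magnitude, which is why brute-force search stops working long before Go.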
Also remember this very important principle: the brain's neurons fire at only a few hundred hertz. So computers are cheating—they are over a million times faster.
So for a fair comparison of the brain's algorithms, you would need to compare the brain to a large computer cluster running at only 500 Hz or so. Parallel algorithms do not scale nearly as well, so this is a huge handicap—and yet the brain still wins by a landslide in any highly complex search space.
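A rough version of this handicap arithmetic, using commonly cited order-of-magnitude estimates for neuron count, synapse count, and firing rate (all of these are assumptions, not measurements):

```python
# Back-of-envelope handicap arithmetic. All figures are rough, commonly
# cited order-of-magnitude estimates, not measurements.
neuron_rate_hz = 200      # generous per-neuron firing rate
cpu_clock_hz   = 3e9      # a ~3 GHz desktop core
neurons        = 1e11     # ~10^11 neurons in a human brain
synapses_each  = 1e4      # ~10^4 synapses per neuron

# The CPU's raw serial-speed advantage (the "cheating" factor):
serial_advantage = cpu_clock_hz / neuron_rate_hz

# Crude parallel throughput of the brain, in synaptic events per second:
brain_events_per_sec = neurons * synapses_each * neuron_rate_hz

print(f"CPU serial-speed advantage: ~{serial_advantage:.0e}x")
print(f"brain synaptic events/sec:  ~{brain_events_per_sec:.0e}")
```

The point of the sketch: the serial advantage is around 10^7, but the brain's crude parallel throughput is around 10^17 events per second, far beyond a single CPU.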
Our neurons are believed to calculate using continuous values, but our computers are assemblies of discrete on/off switches. A properly structured AI could make much better use of this fact, not to mention be better at mental arithmetic than us.
Neurons do mainly calculate in analog, but that is because analog is vastly more efficient for probabilistic approximate calculation, which is what the brain is built on. A digital multiplier is orders of magnitude less circuit-space efficient than an analog multiplier—it pays a huge cost for its precision.
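As a rough illustration of that cost: a textbook n-bit array multiplier needs on the order of n² full adders, while a classic Gilbert-cell analog multiplier uses about ten transistors. A sketch of the comparison (the per-gate transistor counts are approximate textbook figures):

```python
# Rough circuit-cost comparison (textbook ballpark figures, not a layout).
# An n-bit array multiplier needs about n*n full adders; a static CMOS full
# adder is ~28 transistors. A classic Gilbert-cell analog multiplier is on
# the order of 10 transistors.
def digital_multiplier_transistors(n_bits, per_full_adder=28):
    return n_bits * n_bits * per_full_adder

ANALOG_TRANSISTORS = 10  # Gilbert cell, roughly

for n in (8, 16, 32):
    ratio = digital_multiplier_transistors(n) / ANALOG_TRANSISTORS
    print(f"{n}-bit digital multiplier: ~{ratio:.0f}x the transistors of an analog one")
```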
The brain is a highly optimized specialized circuit implementation of a very general universal intelligence algorithm. Also, the brain is Turing complete—keep that in mind.
The human mind is a small island in a massive mindspace, and the only special thing about it is that it’s the first sentient mind we have encountered.
mind != brain
The brain is the hardware and the algorithms, the mind is the actual learned structure, the data, the beliefs, ideas, personality—everything important. Very different concepts.
Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations it can find global maxima in very complex search spaces.
Evolution by random mutations pretty-much sucks as a search strategy:
“One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence. Once we have access to superintelligent machines, search techniques will use intelligence ubiquitously. Modifications will be made intelligently, tests will be performed intelligently, and the results will be used intelligently to design the next generation of trials.
There will be a few domains where the computational cost of using intelligence outweighs the costs of performing additional trials—but this will only happen in a tiny fraction of cases.
Even without machine intelligence, random mutations are rarely an effective strategy in practice. In the future, I expect that their utility will plummet—and intelligent design will become ubiquitous as a search technique.”
http://alife.co.uk/essays/intelligent_design_vs_random_mutations/
I listened to your talk until I realized I could just read the essay :)
I partly agree with you. You say:
Evolution by random mutations pretty-much sucks as a search strategy:
‘Sucks’ is not quite descriptive enough. Random mutation is slow, but that is not really relevant to my point—as I said, given enough time it is very robust. Sexual recombination (gene transfer) speeds that up dramatically, and then intelligence speeds up evolutionary search dramatically again.
Yes, intelligent search is a large—huge—potential speedup on top of genetic evolution alone.
But we need to understand this in the wider context … you yourself say:
One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence.
Ahh but we already have human intelligence.
Intelligence still uses an evolutionary search strategy, it is just internalized and approximate. Your brain considers a large number of potential routes in a highly compressed statistical approximation of reality, and the most promising eventually get written up or coded up and become real designs in the real world.
But this entire process is still all evolutionary.
And regardless, the approximate simulation that intelligences such as our brains use does have limitations—mainly precision. Some things are just far too complex to simulate accurately in our heads, so we have to try them in detailed computer simulations.
Likewise, if you are searching a simple circuit space, then a simple GA running on a fast computer can almost certainly find the optimal solution far faster than a general intelligence—similar to an optimized chess algorithm.
A general intelligence is a huge speedup for evolution, but it is just one piece in a larger system: you also need deep computer simulation, and you still have evolution operating at the world level.
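The claimed speedup of informed search over blind mutation can be illustrated on a toy problem. Here the "informed" searcher is simulated by letting it see which bits are wrong, a crude stand-in for an internal model; everything below is a made-up toy, not a model of the brain:

```python
import random

# Toy comparison of blind random mutation vs. "informed" proposals on the
# same 32-bit design problem. The informed searcher gets an oracle for
# which bits are wrong -- a crude stand-in for an internal model.
TARGET = [1, 0] * 16

def score(x):
    return sum(a == b for a, b in zip(x, TARGET))

def random_search(seed=1):
    # Blind hill-climb: mutate a random bit, keep the change only if it helps.
    random.seed(seed)
    x, trials = [0] * len(TARGET), 0
    while score(x) < len(TARGET):
        y = list(x)
        y[random.randrange(len(y))] ^= 1
        if score(y) > score(x):
            x = y
        trials += 1
    return trials

def informed_search(seed=1):
    # "Intelligent" proposals: every trial fixes a bit the model knows is wrong.
    random.seed(seed)
    x, trials = [0] * len(TARGET), 0
    while score(x) < len(TARGET):
        wrong = [i for i, (a, b) in enumerate(zip(x, TARGET)) if a != b]
        x[random.choice(wrong)] ^= 1
        trials += 1
    return trials

print("blind mutation trials:", random_search())
print("informed trials:      ", informed_search())
```

The blind searcher wastes most trials on mutations that don't help; the informed one never does. That gap widens enormously as the search space grows.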
Intelligence still uses an evolutionary search strategy, it is just internalized and
approximate. Your brain considers a large number of potential routes in a highly
compressed statistical approximation of reality, and the most promising eventually
get written up or coded up and become real designs in the real world. But this entire
process is still all evolutionary.
In the sense that it consists of copying with variation and differential reproductive success, yes.
However, evolution using intelligence isn’t the same as evolution by random mutations—and you originally went on to draw conclusions about the optimality of organic evolution—which was mostly the “random mutations” kind.
A more important question is: can we beat the cortex's generic learning algorithm?
The answer today is: no. Not yet. But the evidence trend looks like we are narrowing in on a space of algorithms similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).
Google learns about the internet by making a compressed bitwise identical digital copy of it. Machine intelligences will be able to learn that way too—and it is really not much like what goes on in brains. The way the brain makes reliable long-term memories is just a total mess.
Google learns about the internet by making a compressed bitwise identical digital copy of it.
I wouldn’t consider that learning.
Learning is building up a complex hierarchical web of statistical dimension reducing associations that allow massively efficient approximate simulation.
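One way to gesture at "dimension-reducing associations" in code is a random linear projection, which compresses high-dimensional points while approximately preserving distances (a Johnson-Lindenstrauss style toy, not a claim about how cortex actually does it):

```python
import math, random

# Toy dimension reduction: a random linear projection from 1000-d to 50-d
# approximately preserves pairwise distances (Johnson-Lindenstrauss flavor).
# This only gestures at "compression that keeps similarity usable"; it is
# not a model of cortical mechanisms.
random.seed(0)
D, d = 1000, 50
proj = [[random.gauss(0, 1 / math.sqrt(d)) for _ in range(D)] for _ in range(d)]

def compress(x):
    return [sum(row[i] * x[i] for i in range(D)) for row in proj]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

x = [random.gauss(0, 1) for _ in range(D)]
y = [random.gauss(0, 1) for _ in range(D)]
ratio = dist(compress(x), compress(y)) / dist(x, y)
print(f"20x compression, distance distorted by ~{abs(1 - ratio):.0%}")
```

A 20x compression that still supports similarity judgements is the kind of efficiency win approximate, lossy representations buy, in contrast to a bitwise-identical copy.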
Yes, human minds currently think more efficiently than computers. But this does not support the idea that we cannot create something even more efficient. You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities. I am open to the possibility that human brains are the most efficient design we will see in the near future, but you seem almost certain of it. Why do you believe what you believe?
And for that matter… Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.
I had a longer reply, but unfortunately my computer was suddenly attacked by some weird virus (yes really), and I had to reboot.
Your line of thought probes some of my assumptions that would require lengthier expositions to support, but I'll just summarize here (and may link to something else relevant when I dig it up).
You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities.
The set of programs for a particular problem is infinite, but this is irrelevant. There are an infinite number of programs for sorting a list of numbers. Almost all of them suck for various reasons, and we are left with just a couple of provably best algorithms (serial and parallel).
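The sorting example can be made concrete by counting comparisons: one of the provably near-optimal algorithms (merge sort, O(n log n)) against one of the infinitely many correct-but-bad ones (bubble sort, O(n²)):

```python
import random

# Count comparisons made by a provably near-optimal sort (merge sort)
# versus a naive one (bubble sort) on the same input.
comparisons = 0

def less(a, b):
    global comparisons
    comparisons += 1
    return a < b

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if less(right[j], left[i]):
            out.append(right[j]); j += 1
        else:
            out.append(left[i]); i += 1
    return out + left[i:] + right[j:]

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if less(xs[j + 1], xs[j]):
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

random.seed(0)
data = [random.random() for _ in range(512)]

comparisons = 0
assert merge_sort(data) == sorted(data)
merge_cmp = comparisons

comparisons = 0
assert bubble_sort(data) == sorted(data)
bubble_cmp = comparisons

print(f"merge sort: {merge_cmp} comparisons, bubble sort: {bubble_cmp}")
```

Both programs are correct; one is simply a far worse point in the space of sorting programs, which is the sense in which "infinite possibilities" doesn't help.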
There appears to be a single program underlying our universe—physics. We have reasonable approximations to it at different levels of scale. Our simulation techniques are moving towards a set of best approximations to our physics.
Intelligence itself is a form of simulation of this same physics. Our brain appears to use (in the cortex) a universal data-driven approximation of this universal physics.
So the space of intelligent algorithms is infinite, but there is just a small set of universal intelligence algorithms, derived from our physics, which are important.
And for that matter… Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.
Not really.
Imagine if you took a current CPU back in time 10 years. Engineers then wouldn't be able to build it immediately, but it would accelerate their progress significantly.
The brain in some sense is like an AGI computer from the future. We can’t build it yet, but we can use it to accelerate our technological evolution towards AGI.
Unless we understand exactly how a human brain works, how can we
improve its efficiency? Reverse engineering a system is often harder
than making one from scratch.
Not really.
Yet aeroplanes are not much like birds, hydraulics are not much like muscles, loudspeakers are not much like the human throat, microphones are not much like the human ear—and so on.
Convergent evolution wins sometimes—for example, eyes—but we can see that this probably won’t happen with the brain—since its “design” is so obviously xxxxxd up.
Airplanes exploit one single simple principle (from a vast set of principles) that birds use—aerodynamic lift.
If you want a comparison like that—then we already have it. Computers exploit one single simple principle from the brain, abstract computation (humans were the original computers and are Turing complete), and magnify it greatly.
But there is much more to intelligence than just that one simple principle.
So building an AGI is much closer to building an entire robotic bird.
And that really is the right level of analogy. Look at the complexity of building a complete android—really analyze just the robotic side of things, and there is no one simple magic principle you can exploit to make some simple dumb system which amplifies it to the Nth degree. And building a human or animal level robotic body is immensely complex.
There is not one simple principle—but millions.
And the brain is the most complex part of building a robot.
Reference? For counter-reference, see:
http://www.hutter1.net/ai/uaibook.htm#oneline
That looks a lot like the intellectual equivalent of “lift” to me.
An implementation may not be that simple—but then aeroplanes are not simple either.
The point was not that engineered artefacts are simple, but that they are only rarely the result of reverse engineering biological entities.
I’ll take your point, and I should have said “there is much more to practical intelligence than just one simple principle”—because yes, at the limits I agree that universal intelligence does have a compact description.
AIXI is related to finding a universal TOE (a simple theory of physics), but that doesn't mean it is actually computationally tractable. Creating a practical, efficient simulation involves a large series of principles.