I had a longer reply, but unfortunately my computer was suddenly attacked by some weird virus (yes, really) and I had to reboot.
Your line of thought probes some assumptions of mine that would require lengthier exposition to support, so I’ll just summarize here (and may link to something else relevant when I dig it up).
You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities.
The set of programs for any particular problem is infinite, but this is irrelevant. There are an infinite number of programs for sorting a list of numbers. Nearly all of them are bad for various reasons, and we are left with just a couple of provably optimal algorithms (serial and parallel).
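A minimal sketch of that point (my own illustration, not anything from the thread): two of those infinitely many sorting programs, both correct, but with comparison counts that differ by more than an order of magnitude even on a small input.

```python
import random

def bubble_sort(xs):
    """O(n^2) comparisons: one of infinitely many correct sorts."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def merge_sort(xs):
    """O(n log n) comparisons: near the proven comparison-sort optimum."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

random.seed(0)
data = [random.randint(0, 999) for _ in range(512)]
_, slow = bubble_sort(data)
_, fast = merge_sort(data)
print(slow, fast)  # both produce the same sorted list; counts differ hugely
```

Both programs solve the problem; the space of correct sorters is infinite, but a lower bound on comparisons leaves only a small family worth keeping.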
There appears to be a single program underlying our universe—physics. We have reasonable approximations to it at different levels of scale. Our simulation techniques are moving towards a set of best approximations to our physics.
Intelligence itself is a form of simulation of this same physics. Our brain appears to use (in the cortex) a universal data-driven approximation of this universal physics.
So the space of intelligent algorithms is infinite, but there is just a small set of universal intelligent algorithms, derived from our physics, which are important.
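As a toy illustration of what a “data-driven approximation” of physics means in the smallest possible setting (my own sketch; the linear dynamics here are invented for the example): a learner observes a trajectory generated by hidden dynamics x′ = a·x + b, recovers (a, b) from the data alone, and then predicts with the learned model.

```python
# Hidden "physics" that the learner never sees directly (assumed values
# chosen purely for illustration).
TRUE_A, TRUE_B = 0.8, 1.0

def simulate(steps, x0=0.0):
    """Generate an observed trajectory from the hidden dynamics."""
    xs = [x0]
    for _ in range(steps):
        xs.append(TRUE_A * xs[-1] + TRUE_B)
    return xs

def fit_linear(pairs):
    """Ordinary least squares for x' = a*x + b (closed form, stdlib only)."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

traj = simulate(20)
a, b = fit_linear(list(zip(traj, traj[1:])))  # learn dynamics from data
prediction = a * traj[-1] + b                 # model-based prediction
actual = TRUE_A * traj[-1] + TRUE_B           # what the "physics" does
print(a, b)
```

The learned (a, b) converges on the hidden dynamics purely from observation, which is the sense of “simulation of physics” intended above, scaled down to one dimension.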
And for that matter… Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.
Not really.
Imagine if you took a current CPU back in time ten years. Engineers then wouldn’t be able to build it immediately, but it would accelerate their progress significantly.
The brain in some sense is like an AGI computer from the future. We can’t build it yet, but we can use it to accelerate our technological evolution towards AGI.
Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.
Not really.
Yet aeroplanes are not much like birds, hydraulics are not much like muscles, loudspeakers are not much like the human throat, microphones are not much like the human ear—and so on.
Convergent evolution wins sometimes—for example, eyes—but we can see that this probably won’t happen with the brain—since its “design” is so obviously xxxxxd up.
Airplanes exploit one single simple principle (from a vast set of principles) that birds use—aerodynamic lift.
If you want a comparison like that, then we already have it. Computers exploit one single simple principle from the brain—abstract computation (humans were the original computers, and are Turing-complete)—and magnify it greatly.
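That single principle can be made concrete in a few lines (an illustrative toy of my own, not a claim about any real architecture): a small rule table plus a tape is already abstract symbol manipulation, and scaling exactly this up is what computers magnify.

```python
def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    """Tiny Turing-machine interpreter.

    rules maps (state, symbol) -> (write_symbol, move, new_state),
    where move is "L", "R", or "N". Cells off the tape read as "_".
    """
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        if (state, symbol) not in rules:
            break
        write, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape)

# Example rule table: scan right, flipping bits, halt at the blank end.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}

print(run_tm(flip, "10110"))  # -> "01001"
```

Three rules suffice for this machine; the point is how little mechanism the bare principle of computation requires.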
But there is much more to intelligence than just that one simple principle.
So building an AGI is much closer to building an entire robotic bird.
And that really is the right level of analogy. Look at the complexity of building a complete android: analyze just the robotic side of things, and there is no one simple magic principle you can exploit and amplify to the Nth degree with some simple dumb system. Building a human- or animal-level robotic body is immensely complex.
There is not one simple principle—but millions.
And the brain is the most complex part of building a robot.
Also: brain != mind.
Reference? For counter-reference, see:
http://www.hutter1.net/ai/uaibook.htm#oneline
That looks a lot like the intellectual equivalent of “lift” to me.
An implementation may not be that simple—but then aeroplanes are not simple either.
The point was not that engineered artefacts are simple, but that they are only rarely the result of reverse engineering biological entities.
I’ll take your point—I should have said “there is much more to practical intelligence than just one simple principle”—because yes, at the limit I agree that universal intelligence does have a compact description.
AIXI is akin to a universal TOE—a simple theory of physics—but a compact definition doesn’t mean it is computationally tractable. Building a practical, efficient simulation involves a large set of principles.
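A toy sketch of why a compact definition doesn’t imply tractability (my own illustration, not Hutter’s construction): even a crude Solomonoff-style search that merely enumerates candidate programs as bitstrings faces exponential blowup before it evaluates anything.

```python
def candidates_up_to(length):
    """Number of distinct bitstring programs of length 1..length."""
    return sum(2 ** n for n in range(1, length + 1))  # = 2**(length+1) - 2

# Every extra 10 bits of program length multiplies the search space ~1024x,
# so the obstacle is tractability, not the compactness of the definition.
growth = [candidates_up_to(n) for n in (10, 20, 30, 40)]
print(growth)
```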