Take, for example, the evolution of echolocation: it seems to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features exhibited by biological agents.
Yes, but these are features produced by evolution. Evolution doesn’t work very much the same way as intelligence, and any AI would likely start with much of human knowledge already given.
Yes, but these are features produced by evolution.
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs. But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The only difference then seems to be that intelligence is goal-oriented, can think ahead, and can jump fitness gaps. Yet the critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where, if not in the dramatic improvement of intelligence itself, would the discovery of novel unknown unknowns be required?
A basic argument supporting the risks from superhuman intelligence is that we don’t know what it could possibly come up with. That is why we call it a ‘Singularity’. But why does nobody ask how it knows what it could possibly come up with?
It seems to be an unquestioned assumption that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries. I don’t see that...
These seem like mainly valid points. However,

The only difference then seems to be that intelligence is goal-oriented, can think ahead, and can jump fitness gaps
seems to merit a response of “So, other than that, Mrs. Lincoln, how was the play?” Those are all very large differences. Let me add to the list: Intelligence can engage in direct experimentation. Intelligence can also observe and incorporate solutions that other optimizing agents (intelligent or not) have used in similar situations. All of these seem to be distinctions that make intelligence very different from evolution. It isn’t an accident that the technologies which have been most successful for humans, such as writing, are technologies which augment many of these different advantages that intelligence has over evolution.
It isn’t an accident that the technologies which have been most successful for humans, such as writing, are technologies which augment many of these different advantages that intelligence has over evolution.
I agree. To be clear, my confusion is mainly about the possibility of explosive recursive self-improvement. I have a hard time accepting that it is very likely (e.g., a probability easily larger than 1%) that such a thing is practically and effectively possible, or at least that we will be able to come up with an algorithm capable of quickly surpassing the human skill set without huge amounts of hard-coded intelligence. I am skeptical that we will be able to make quick progress on such a problem, and suspect it will instead be a slow, incremental evolution toward superhuman intelligence.
As I see it, the more abstract a seed AI is (the closer it is to something like AIXI), the more time it will need to reach human-level intelligence, let alone superhuman intelligence. The less abstract a seed AI is, the more work we will have to put into painstakingly hard-coding it before it can help us improve its intelligence even further. And in any case, I don’t think that dramatic quantum leaps in intelligence are a matter of speed improvements or the accumulation of expert systems. It might very well require genuine novelty in the form of the discovery of unknown unknowns.
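For concreteness, this is roughly Hutter’s expectimax equation for AIXI (my transcription, simplified): the agent in cycle $k$ must evaluate a sum over all programs $q$ of length $\ell(q)$ on a universal Turing machine $U$, which is incomputable and brutally expensive even to approximate:

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\,r_k + \cdots + r_m\,\bigr] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$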
What is intelligence? Take a chess computer: it is arguably intelligent, but it is a narrow form of intelligence. What is it that differentiates narrow intelligence from general intelligence? Is it a conglomerate of expertise, some sort of conceptual revolution, or a special kind of expert system that is missing? My point is, why haven’t we seen any of our expert systems come up with true novelty in their field, something no human has thought of before? The only algorithms that have so far been capable of achieving this have been evolutionary in nature, not what we would label artificial intelligence.
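To illustrate what I mean by “narrow”, here is a minimal sketch (illustrative Python, not any particular engine; all names are mine): the core of a classical chess program is exhaustive search inside a fixed, human-specified frame, with no machinery for stepping outside that frame.

```python
# Minimal negamax search: the core of a classical chess engine.
# `evaluate` and `legal_moves` encode hand-coded human knowledge; the
# search only recombines options inside this fixed, given frame.

def negamax(position, depth, evaluate, legal_moves, apply_move):
    """Best score for the side to move, searching `depth` plies deep."""
    if depth == 0:
        return evaluate(position)            # human-designed judgment
    best = float("-inf")                     # no legal moves ~ lost game
    for move in legal_moves(position):
        child = apply_move(position, move)
        best = max(best, -negamax(child, depth - 1,
                                  evaluate, legal_moves, apply_move))
    return best
```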
Intelligence can also observe and incorporate solutions that other optimizing agents (intelligent or not) have used in similar situations.
Evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t shown comparable ingenuity in incorporating successes that are not evident from an individual or even a societal vantage point.
Your point is a good one; I am just saying that the gap between intelligence and evolution isn’t that big here.
Let me add to the list: Intelligence can engage in direct experimentation.
Yes, but evolution makes better use of dumb luck by being blindfolded. This seems like a disadvantage, but it actually allows evolution to discover unknown unknowns that are hidden where no intelligent, rational agent would suspect them, and which such an agent would therefore never find through evidence-based exploration.
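A toy illustration of the point (a sketch with contrived numbers, entirely my own): on a “deceptive” fitness landscape, a greedy, evidence-driven hill climber settles on whatever local peak its evidence points to, while blind uniform sampling occasionally stumbles onto a hidden, far higher optimum.

```python
import random

# Toy "deceptive" landscape on [0, 100]: a broad hill peaking at x = 30,
# plus a narrow, far higher spike near x = 87 that no local gradient
# information ever points toward.
def fitness(x):
    hill = max(10 - abs(x - 30) * 0.2, 0)
    spike = 50 if abs(x - 87) < 0.5 else 0
    return hill + spike

def hill_climb(x, steps=10000, step=0.1):
    """Greedy, evidence-driven search: only ever accept improvements."""
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x, fitness(x)

def blind_search(samples=10000):
    """Blindfolded search: ignore all local evidence, sample uniformly."""
    best = max((random.uniform(0, 100) for _ in range(samples)), key=fitness)
    return best, fitness(best)

print(hill_climb(10.0))   # converges near x = 30, fitness ~ 10
print(blind_search())     # almost surely hits the spike near x = 87
```

The hill climber never accepts a step onto the flat region around the spike, so no amount of local evidence leads it there; the blindfolded search finds it almost surely.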
Yes, but evolution makes better use of dumb luck by being blindfolded. This seems like a disadvantage, but it actually allows evolution to discover unknown unknowns that are hidden where no intelligent, rational agent would suspect them, and which such an agent would therefore never find through evidence-based exploration.
A minor quibble:

Never is a very strong word, and it isn’t obvious that evolution will actually find things that intelligence would not. The timescale that evolution gets to work on is much longer than what intelligence has had so far. If intelligence had as much time to fiddle, it might be able to do everything evolution can (indeed, intelligence can even co-opt evolution by means of genetic algorithms). But this doesn’t impact your main point, insofar as if intelligence were to need those sorts of time scales, then one obviously wouldn’t have an intelligence explosion.
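To make “co-opting evolution” concrete, here is a bare-bones genetic algorithm sketch (illustrative Python; every name and parameter is my own invention): the intelligent designer chooses the fitness function, and blind variation plus selection do the searching.

```python
import random

def genetic_algorithm(fitness, genome_length=20, pop_size=50,
                      generations=200, mutation_rate=0.05):
    """Evolve random bit-string genomes toward higher fitness."""
    pop = [[random.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank by fitness
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_length)
            child = a[:cut] + b[cut:]            # single-point crossover
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]           # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The designer chooses the goal; blind variation and selection do the search.
best = genetic_algorithm(fitness=sum)            # maximize the number of 1s
print(sum(best), best)
```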
I want to expand on my last comment:

Is it clear that the discovery of intelligence by evolution had a larger impact than the discovery of eyes? What evidence do we have that increasing intelligence itself outweighs its cost compared to adding a new pair of sensors?
What I am asking is how we can be sure that it would be instrumentally rational for an AGI to increase its intelligence rather than to use its existing intelligence to pursue its terminal goal. Do we have good evidence that spending resources on increasing intelligence outweighs the cost of being unable to use those resources to pursue its terminal goal directly?
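A deliberately crude toy model of that trade-off (all symbols and assumptions are my own): suppose the AGI accrues goal-progress at rate $r$ over a horizon $T$, and could instead spend time $\tau$ on self-improvement that multiplies that rate by $k$. Self-improvement is instrumentally worthwhile only if

$$k\,r\,(T-\tau) \;>\; r\,T \quad\Longleftrightarrow\quad k \;>\; \frac{T}{T-\tau},$$

and we currently have little evidence about what multiplier $k$ is achievable for a given cost $\tau$.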
My main point regarding the advantage of being “irrational” was that if we all thought like perfectly rational agents, e.g. closer to how Eliezer Yudkowsky thinks, we would have missed out on a lot of discoveries that were made by people pursuing “Rare Disease for Cute Kitten” activities.
How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?
What evidence do we have that intelligent, goal-oriented experimentation yields enormous advantages over evolutionary discovery relative to its cost? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the time needed to discover it?
Evolution acting on intelligent agents has been able to do quite a bit of that for millions of years, though—for example via the topic I am forbidden to mention.