I don’t know if I’m saying anything that hasn’t been said before elsewhere, but the massive differences in intelligence between humans seem like a strong argument for FOOM to me. Humans are basically all the same. We share about 99.9% of our DNA and have the same brain structure, size, etc. And yet some humans have exceptional abilities.
I was just reading about Paul Erdős. He could hold three conversations with mathematicians at the same time, on highly technical subjects. He was constantly having insights into mathematical research left and right, and he produced more papers than any other mathematician.
I don’t think it’s a matter of culture. I don’t think an average person could “learn” to have a higher IQ, let alone to be Erdős. And yet he very likely had essentially the same brain structure as everyone else. Who knows what would be possible if you were allowed to move far outside the space of humans.
But this isn’t the (main) argument Yudkowsky uses. He relies on an intuition that I don’t think was ever stated explicitly or argued for strongly enough, and this one intuition is central to all the points about recursive self-improvement.
It’s that humans kind of suck, at least at engineering and at solving complicated technical problems. We didn’t evolve to be good at them. There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases, of course, but it shows we are far from optimal. Even in the areas where we do well, we have trouble keeping many different things in our heads at once. Much of the time we are very bad at prediction and pattern matching compared to even small machine learning models.
I think this intuition, that “humans kind of suck” and “there are a lot of places where big improvements are possible,” is at the core of the FOOM debate and most of these AI risk debates. If you really believe it, then it seems almost obvious that AI will very rapidly become much smarter than humans. People who don’t share it seem to believe that AI progress will be very slow, perhaps with steep diminishing returns.
“There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases of course, but it shows we are far from perfect.”
To riff on your theme a little bit, maybe one area where genetic algorithms (or other comparably “simplistic” approaches) could shine is in the design of computer algorithms, or some important features thereof.
Well, actually, GAs aren’t very good at designing algorithms, because slightly mutating an algorithm usually either breaks it outright or produces an entirely different algorithm. The fitness landscape over programs isn’t smooth.
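To make that ruggedness concrete, here is a small sketch of my own (not from the original discussion): take a working bubble sort and apply random single-character mutations to its source. Nearly every mutant fails to sort, so a GA gets almost no gradient of partial credit to climb.

```python
import random

# A known-good bubble sort, stored as source text so we can mutate it.
SOURCE = (
    "def bubble(xs):\n"
    "    xs = list(xs)\n"
    "    for i in range(len(xs)):\n"
    "        for j in range(len(xs) - 1):\n"
    "            if xs[j] > xs[j+1]:\n"
    "                xs[j], xs[j+1] = xs[j+1], xs[j]\n"
    "    return xs\n"
)

def mutate(src):
    # Replace one randomly chosen character, GA-mutation style.
    i = random.randrange(len(src))
    return src[:i] + random.choice("abcxyz01+-<>()") + src[i + 1:]

random.seed(0)
trials, broken = 200, 0
for _ in range(trials):
    ns = {}
    try:
        exec(mutate(SOURCE), ns)          # may raise SyntaxError, etc.
        assert ns["bubble"]([3, 1, 2]) == [1, 2, 3]
    except Exception:
        broken += 1                        # mutant crashed or mis-sorted

# The vast majority of single-character mutants no longer sort correctly.
```

This is only an illustration, but it captures why small mutations in program space mostly yield garbage rather than near-neighbors of the original algorithm.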
You can do a bit better if you work with circuits instead, and better still if you make the circuits continuous, so that small mutations cause small changes in the output. You can also optimize such continuous circuits much faster with gradient descent than with GAs.
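Here is a toy sketch of that contrast (my own illustration, with a made-up “circuit” that is just a vector of weights scored against a target): on a smooth landscape, following the gradient beats mutate-and-select.

```python
import random

# Hypothetical continuous "circuit": a weight vector scored by squared
# distance to a target vector. Smooth landscape, so gradients exist.
TARGET = [0.5, -1.2, 3.0]

def loss(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def ga_step(w, scale=0.1):
    # GA-style hill climbing: random Gaussian mutation, keep if better.
    cand = [wi + random.gauss(0, scale) for wi in w]
    return cand if loss(cand) < loss(w) else w

def gd_step(w, lr=0.1):
    # Gradient descent: d/dw_i of (w_i - t_i)^2 is 2 * (w_i - t_i).
    return [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, TARGET)]

random.seed(0)
w_ga = [0.0, 0.0, 0.0]
w_gd = [0.0, 0.0, 0.0]
for _ in range(100):
    w_ga = ga_step(w_ga)
    w_gd = gd_step(w_gd)

# After the same number of steps, gradient descent is essentially at the
# optimum, while the mutation-based search is still wandering nearby.
```

The point is not that GAs never work here, but that when small parameter changes cause small output changes, the derivative gives you the improving direction for free instead of guessing it by sampling.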
And then you have neural networks, which are quite successful.
https://en.wikipedia.org/wiki/Neuroevolution
“Neuroevolution, or neuro-evolution, is a form of machine learning that uses evolutionary algorithms to train artificial neural networks. It is most commonly applied in artificial life, computer games, and evolutionary robotics. A main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network’s performance at a task. For example, the outcome of a game (i.e. whether one player won or lost) can be easily measured without providing labeled examples of desired strategies.”
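A minimal sketch of the idea (my own illustration, not code from the article): evolve the weights of a tiny 2-2-1 network using only a scalar fitness score, with no labeled gradients. Here negative squared error on XOR stands in for something like a game’s win/loss outcome.

```python
import math
import random

random.seed(1)

def net(w, x1, x2):
    # Tiny 2-2-1 network; w holds 9 weights: 2x2 input-to-hidden plus two
    # hidden biases, then 2 hidden-to-output weights plus an output bias.
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fitness(w):
    # All neuroevolution needs is a performance measure for the whole
    # network -- no per-example supervised targets.
    return -sum((net(w, a, b) - y) ** 2 for a, b, y in XOR)

# Simple (1+1) evolution strategy: mutate the champion, keep improvements.
best = [random.uniform(-1, 1) for _ in range(9)]
initial_fitness = fitness(best)
for _ in range(3000):
    child = [wi + random.gauss(0, 0.3) for wi in best]
    if fitness(child) > fitness(best):
        best = child
```

Real neuroevolution systems (e.g. NEAT) also evolve the network topology and maintain populations, but even this stripped-down version shows the key property: the training signal is just “how well did the network do,” exactly as the quoted passage describes.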