There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases, of course, but this shows we are far from perfect.
To riff on your theme a little bit, maybe one area where genetic algorithms (or other comparably “simplistic” approaches) could shine is in the design of computer algorithms, or some important features thereof.
Well, actually, GAs aren’t that good at designing algorithms, because slightly mutating an algorithm usually breaks it or turns it into an entirely different algorithm. So the fitness landscape isn’t that gentle.
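A quick toy illustration of that ruggedness (my own sketch, not anything from the thread): mutate a single character in the source of a small Python function and see how often the result even parses. The `compile()` call only checks syntax, so this actually understates the problem — most mutants that parse still compute something entirely different.

```python
import random

# A tiny, correct program to mutate.
source = "def add(a, b):\n    return a + b\n"

random.seed(0)
trials = 200
valid = 0
for _ in range(trials):
    # Replace one random character with a random printable ASCII character.
    i = random.randrange(len(source))
    mutated = source[:i] + chr(random.randrange(32, 127)) + source[i + 1:]
    try:
        # Syntax check only; semantic correctness would fail far more often.
        compile(mutated, "<mutant>", "exec")
        valid += 1
    except SyntaxError:
        pass

print(f"{valid} of {trials} one-character mutants still parse")
```

Most single-character mutations don’t survive even the parser, and almost none of the survivors are a "slightly different" adder — which is exactly why the fitness landscape over program text is so unforgiving.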
You can do a bit better if you work with circuits instead, and better still if you make the circuits continuous, so that small mutations cause small changes in output. And once everything is continuous and differentiable, you can optimize faster with gradient descent than with GAs.
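To make the contrast concrete, here is a minimal sketch (mine, with an arbitrary one-parameter objective) comparing the two optimizers on a smooth function: gradient descent follows the slope directly, while a simple (1+1)-style mutation search keeps a random perturbation only when it improves fitness.

```python
import random

# Toy smooth objective: f(w) = (w - 3)^2, minimized at w = 3.
def f(w):
    return (w - 3.0) ** 2

def grad_f(w):
    return 2.0 * (w - 3.0)

# Gradient descent: use the derivative to step downhill.
w = 0.0
for _ in range(100):
    w -= 0.1 * grad_f(w)

# Mutation search: propose a Gaussian perturbation, keep it only if it helps.
random.seed(0)
v = 0.0
for _ in range(100):
    candidate = v + random.gauss(0, 0.5)
    if f(candidate) < f(v):
        v = candidate

print(f"gradient descent: w = {w:.4f}")
print(f"mutation search:  v = {v:.4f}")
```

Both methods improve because the landscape is smooth — small parameter changes cause small output changes — but gradient descent converges far faster because the derivative tells it the direction and size of the step for free, instead of discovering it by trial and error.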
And then you have neural networks, which are quite successful.
https://en.wikipedia.org/wiki/Neuroevolution
“Neuroevolution, or neuro-evolution, is a form of machine learning that uses evolutionary algorithms to train artificial neural networks. It is most commonly applied in artificial life, computer games, and evolutionary robotics. A main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network’s performance at a task. For example, the outcome of a game (i.e. whether one player won or lost) can be easily measured without providing labeled examples of desired strategies.”
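The quoted point — that neuroevolution needs only a fitness score, not labeled input-output pairs — can be sketched in a few lines. This is my own toy version (a tiny fixed-topology network evolved on XOR with truncation selection and Gaussian weight mutation), not any particular neuroevolution library or algorithm like NEAT:

```python
import math
import random

# Tiny 2-2-1 tanh network; weights stored as a flat list of 9 floats.
def forward(w, x1, x2):
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR task

# The only training signal: a single scalar score per network.
def fitness(weights):
    return -sum((forward(weights, a, b) - y) ** 2 for a, b, y in CASES)

random.seed(1)
pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(50)]
start = max(pop, key=fitness)

for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection: keep the top fifth
    # Elitism (parents survive) plus 4 mutated offspring per parent.
    pop = parents + [
        [wi + random.gauss(0, 0.1) for wi in p]
        for p in parents for _ in range(4)
    ]

best = max(pop, key=fitness)
print("initial best fitness:", round(fitness(start), 4))
print("final best fitness:  ", round(fitness(best), 4))
```

Note that nothing here ever tells the network what the correct output for a given input is — selection only ever sees the aggregate score, which is exactly the "outcome of a game" setting the Wikipedia passage describes.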