The three known times evolution invented a freely rotating wheel are: ATP synthase, the bacterial flagellum, and an obscure third example discovered recently whose name I forget.
But don’t these calculations establish a lower bound on how complex or adaptive genetic evolution is, rather than an upper bound?
Those are average cases, not lower bounds. (It would be very surprising to see it happen either ten times faster or ten times slower.) Tomorrow we will discuss upper bounds.
Everyone: There’s a lot of hype surrounding genetic algorithms. DO NOT GET YOUR INFORMATION FROM BUSINESS BOOKS PRAISING THE VALUE OF CHAOS. Read AI textbooks instead. Genetic algorithms are okay (human-competitive) at simultaneously optimizing 37 different criteria using some kind of single shape that can be continuously deformed. They’re okay at designing algorithms with clearly defined success criteria that run fast most of the time in 37 lines of code. They suck like a vacuum cleaner at designing anything larger than that—defeated by the same exponential explosion that consumes most AI algorithms. Most genetic algorithms are not biologically realistic—the ones that do straight beam search, straight hill-climbing, typically do as well or better than the ones that try to imitate sexual reproduction. Remember that it took billions of years of evolution before the Cambrian explosion. Our genetic algorithms haven’t gotten to the level of multicellular organisms or sex yet.
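To make the "straight hill-climbing typically does as well or better" point concrete, here is a minimal sketch (every name and parameter invented for this comment, not taken from any real system) comparing a bare hill-climber against a crossover-style GA on the toy OneMax problem of maximizing the number of 1-bits:

```python
import random

random.seed(0)

GENOME_LEN = 40  # toy "genome" length; fitness = number of 1-bits (OneMax)

def fitness(genome):
    return sum(genome)

def hill_climb(steps=2000):
    # Straight hill-climbing: flip one random bit, keep the change if it helps.
    g = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for _ in range(steps):
        i = random.randrange(GENOME_LEN)
        candidate = g[:]
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(g):
            g = candidate
    return fitness(g)

def genetic_algorithm(pop_size=20, generations=100):
    # Crossover-style GA: select the fitter half as parents, recombine, mutate.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.1:            # occasional point mutation
                j = random.randrange(GENOME_LEN)
                child[j] ^= 1
            children.append(child)
        pop = children
    return max(fitness(g) for g in pop)

print(hill_climb(), genetic_algorithm())
```

On a landscape this simple, both approaches solve the problem; the sexual-reproduction machinery buys the GA nothing that bit-flipping didn't already provide.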
Back in my undergrad days, a fellow student of mine implemented a genetic algorithm on a field-programmable gate array with the intention of performing computations. Once he got the thing working at all, it took him half a semester to get it to pass the 7 bits from the 7 input channels to the 7 output channels, in order. He didn’t have time left over to try anything more complicated.
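In software, the bit-routing task is easy to caricature. A hypothetical toy version (everything here is invented for illustration, not the actual FPGA experiment): evolve a wiring permutation until output channel i carries input channel i's bit:

```python
import random

random.seed(1)

N = 7  # seven input channels, seven output channels

def score(wiring):
    # wiring[i] is the input channel connected to output channel i;
    # the target is the identity mapping (bit i comes out on channel i).
    return sum(1 for i in range(N) if wiring[i] == i)

def evolve_wiring(pop_size=30, generations=200):
    # Each candidate is a permutation of the input channels.
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]              # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(N), 2)
            child[i], child[j] = child[j], child[i]   # mutate: swap two wires
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

best = evolve_wiring()
print(best, score(best))
```

The abstract version converges in a few hundred generations; doing the same thing in reconfigurable hardware, where the fitness evaluation itself is noisy and slow, is what ate half a semester.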
Well, genetic algorithms work by making assumptions about the problem space, mainly that better solutions are very likely to be found close to other good solutions. If that assumption is false or only weakly true, then of course they aren’t going to work. Like if beneficial mutations are extremely rare or practically non-existent.
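That locality assumption is easy to demonstrate. A minimal sketch (landscapes and constants made up for this comment): the same hill-climber that reliably solves a smooth landscape, where neighbours of good points are good, gets nowhere on a needle-in-a-haystack landscape, where neighbours carry no information:

```python
import random

random.seed(2)

N_BITS = 32
TARGET = 0xA5A5A5A5  # arbitrary target bit-pattern

def smooth_fitness(x):
    # Smooth landscape: fitness counts matching bits, so better solutions
    # really are found next to good ones.
    return N_BITS - bin(x ^ TARGET).count("1")

def needle_fitness(x):
    # Needle in a haystack: every point scores 0 except the target itself,
    # so the neighbourhood of a good solution tells you nothing.
    return 1 if x == TARGET else 0

def hill_climb(fitness, steps=5000):
    x = random.getrandbits(N_BITS)
    for _ in range(steps):
        neighbour = x ^ (1 << random.randrange(N_BITS))  # flip one bit
        if fitness(neighbour) >= fitness(x):
            x = neighbour
    return fitness(x)

print(hill_climb(smooth_fitness))  # climbs all the way to 32
print(hill_climb(needle_fitness))  # wanders blindly through 2**32 states
```

Same algorithm, same mutation operator; only the structure of the fitness landscape differs, and that difference is everything.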
My point is that it depends entirely on the problem and how it’s represented. Some problems work really well for GAs, and some don’t at all.
So, yeah.