Here is why I am skeptical about the outcome.
Hear me out a little bit. Suppose you can in fact build a ‘brain-like’ model. Except the brain is not one gigantic repeating neural network; it has many distinct regions where nature has made the rules slightly different for a reason. Nature can’t encode very much complexity directly, since there is so little space in genomes, but it obviously encodes quite a bit, or we wouldn’t see complex starter instincts in living beings.
But all we get is 12 OOMs of compute. We don’t know how to code up these thousands of regions, and we don’t yet have a detailed enough scan of a formerly living patient’s brain to emulate them all.
So you have to take shortcuts, cram in some ‘brain-like’ neural network on top of a wobbly structure, and then what? What inputs are you feeding these brains in a box? What sort of environment do they have to develop intelligence in? Why do they need intelligence, and what motivation system is giving them feedback to develop it?
I don’t dispute that more compute will help, that AI is possible, or that, if we had more compute, napkin estimates would show AI was imminently possible and mass crash programs would pop up everywhere. It’s just not as simple as getting more compute. We have a lot of it now, maybe enough already if what we have gets used more efficiently; we still have to develop all the rest of the pieces.
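To be concrete about what I mean by a napkin estimate, here is the sort of arithmetic I have in mind, in Python; every number is an illustrative assumption, not anyone’s considered figure:

```python
import math

# Toy napkin estimate of "brain-scale" compute. Every figure below is
# an illustrative assumption, not a considered estimate.

SYNAPSES = 1e14         # assumed synapse count for one human brain
AVG_SPIKE_HZ = 1.0      # assumed average effective firing rate
FLOP_PER_EVENT = 10.0   # assumed FLOPs to model one synaptic event

brain_flops = SYNAPSES * AVG_SPIKE_HZ * FLOP_PER_EVENT  # ~1e15 FLOP/s

CLUSTER_FLOPS = 1e18    # assumed sustained throughput of a large cluster

headroom_ooms = math.log10(CLUSTER_FLOPS / brain_flops)
print(f"brain-equivalent: {brain_flops:.0e} FLOP/s")
print(f"cluster headroom: ~{headroom_ooms:.0f} OOMs")
```

The particular numbers don’t matter; the point is that arithmetic like this only tells you the hardware could suffice, not that the remaining pieces exist.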
By ‘the rest of the pieces’ I mean functional subsystems: the ones that wake up the rest of the brain, that give rewards, that give starter reflexes. They aren’t simple, and they have to be debugged and made well-defined, or you get trash. Nature developed them over hundreds of millions of years and reused them over and over; that’s why we see so many behavioral similarities with our mammalian cousins.
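A cartoon of what I mean, with every name and rule invented purely for illustration (a sketch, not a proposal): the trainable network only ever sees the world through hand-built reflex and reward machinery, so a bug in that machinery poisons everything it learns.

```python
import random

# Cartoon of 'functional subsystems' wrapped around a generic learner.
# Every name and rule here is made up purely for illustration.

def starter_reflex(obs):
    """Hardwired behavior available before any learning happens."""
    if obs < -0.5:                # e.g. a strong 'pain' signal -> withdraw
        return "withdraw"
    return None                   # no reflex fires; defer to the learner

def reward_system(obs, action):
    """Hand-built feedback signal. A bug here corrupts all learning."""
    return 1.0 if (action == "approach" and obs > 0) else -0.1

def learned_policy(obs, values):
    """Stand-in for the trainable 'brain-like' network."""
    return max(("approach", "withdraw"), key=lambda a: values[(obs > 0, a)])

values = {(s, a): 0.0 for s in (True, False) for a in ("approach", "withdraw")}

for _ in range(1000):
    obs = random.uniform(-1, 1)
    action = starter_reflex(obs) or learned_policy(obs, values)
    r = reward_system(obs, action)
    key = (obs > 0, action)
    values[key] += 0.1 * (r - values[key])   # crude value update

print(values)  # the learner only 'knows' what the reward system taught it
```

Flip the sign in reward_system and the same learner converges on the opposite behavior; that is the sense in which these subsystems have to be debugged, or you get trash.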
Thanks for the pushback! I agree that if we are trying to copy the human brain, it’s not clear that +12 OOMs would be enough; I included my Neuromorph project to illustrate this.
However, (a) this might not be as hard as it sounds, and (b) there are other ways to TAI besides trying to copy the human brain.
Oh, I don’t think we need 12 OOMs. Maybe 2. I don’t think most of the electrical details have any net effect you can’t model in a simpler and equally good way. I was pointing out that the brain is a system; your argument is like saying that if we had the hardware for a game console, we would therefore get the benefit of the most amazing games the hardware can support.
This isn’t true: once we have sufficiently capable hardware, someone will still have to build up algorithms that exhibit intelligence one layer at a time. Well, starting from existing work.
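To gesture at why I think the electrical details can be abstracted away cheaply, compare the per-step cost of a detailed compartmental simulation with a simple point-neuron model; the per-neuron costs below are assumptions picked for illustration, not measurements:

```python
import math

# Rough cost comparison: detailed electrical simulation vs a simplified
# point-neuron abstraction. Per-neuron costs are assumed for illustration.

NEURONS = 8.6e10              # approximate human neuron count

DETAILED_FLOP_PER_STEP = 1e6  # assumed: multi-compartment electrical model
SIMPLE_FLOP_PER_STEP = 1e3    # assumed: point-neuron / ANN-style unit

detailed_total = NEURONS * DETAILED_FLOP_PER_STEP
simple_total = NEURONS * SIMPLE_FLOP_PER_STEP

saved_ooms = math.log10(detailed_total / simple_total)
print(f"detailed: {detailed_total:.0e} FLOP/step")
print(f"simple:   {simple_total:.0e} FLOP/step")
print(f"dropping electrical detail saves ~{saved_ooms:.0f} OOMs per step")
```

Under these made-up numbers the abstraction buys about 3 OOMs, which is the shape of the reason I’d guess 2 extra OOMs rather than 12.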