I definitely don’t think we’ll get AGI by people scrutinizing the human genome and just figuring out what it’s doing, if that’s what you’re implying. I mentioned the limited size of the genome because it’s relevant to the complexity of what you’re trying to figure out, for the usual information-theory reasons (see 1, 2, 3). “Machinery in the cell/womb/etc.” doesn’t undermine that info-theory argument because such machinery is designed by the genome. (I think the epigenome contains much much less design information than the genome, but someone can tell me if I’m wrong.)
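For concreteness, here’s the rough back-of-envelope arithmetic I have in mind (my own illustrative numbers, not taken from the linked posts):

```python
# Crude upper bound on the genome's "design information"
# (illustrative arithmetic only -- see the linked posts for careful versions).
base_pairs = 3.1e9        # approximate length of the human genome
bits_per_base = 2         # 4 possible bases -> log2(4) = 2 bits each
total_bytes = base_pairs * bits_per_base / 8
print(f"~{total_bytes / 1e6:.0f} MB")  # roughly 775 MB, before any compression

# And that ~775 MB bounds everything the genome specifies (the whole body,
# not just the brain), so the spec for the brain's learning algorithm is
# some fraction of it.
```

The exact number doesn’t matter much; the point is that whatever the genome builds, including all the “machinery in the cell/womb/etc.”, is ultimately specified by something of roughly this size.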
…But I don’t think the size of the genome is the strongest argument anyway. A stronger argument IMO (copied from here) is:
Humans can understand how rocket engines work. I just don’t see how some impossibly-complicated-Rube-Goldberg-machine of an algorithm can learn rocket engineering. There was no learning rocket engineering in the ancestral environment. There was nothing like learning rocket engineering in the ancestral environment!! Unless, of course, you take the phrase “like learning rocket engineering” to be so incredibly broad that even learning toolmaking, learning botany, learning animal-tracking, or whatever, are “like learning rocket engineering” in the algorithmically-relevant sense. And, yeah, that’s totally a good perspective to take! They do have things in common! “Patterns tend to recur.” “Things are often composed of other things.” “Patterns tend to be localized in time and space.” You get the idea. If your learning algorithm does not rely on any domain-specific assumptions beyond things like “patterns tend to recur” and “things are often composed of other things” or whatever, then just how impossibly complicated and intricate can the learning algorithm be, really? I just don’t see it.
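To gesture at how little machinery an assumption like “patterns tend to recur” requires, here’s a deliberately minimal toy sketch (my own illustration, with made-up names, not anything from the linked posts). It predicts the next symbol purely by checking whether the current context has occurred before, and that single domain-general assumption applies equally well to animal tracks, botany, or rocket-engine telemetry:

```python
from collections import Counter, defaultdict

class RecurrencePredictor:
    """Toy learner whose only inductive bias is 'patterns tend to recur'."""
    def __init__(self, context_len=3):
        self.context_len = context_len
        self.counts = defaultdict(Counter)  # context -> counts of what followed it

    def update(self, sequence):
        # Record, for every context of length `context_len`, what came next.
        for i in range(self.context_len, len(sequence)):
            context = tuple(sequence[i - self.context_len:i])
            self.counts[context][sequence[i]] += 1

    def predict(self, context):
        # Predict whatever most often followed this context in the past.
        options = self.counts.get(tuple(context[-self.context_len:]))
        return options.most_common(1)[0][0] if options else None

model = RecurrencePredictor()
model.update("the cat sat on the mat, then the cat sat on the rug")
print(model.predict("the cat s"))  # -> 'a', because that pattern recurred
```

Obviously the cortex is doing something vastly more sophisticated than this; the point is only that priors like “patterns recur”, “patterns are local”, and “things compose” are cheap to state, which constrains how baroque the core learning algorithm really needs to be.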
…And an even stronger argument IMO is in [Intro to brain-like-AGI safety] 2. “Learning from scratch” in the brain, especially the section on “cortical uniformity”, and parts of the subsequent post too.

Also, you said “the brain’s algorithm”, but I don’t expect the brain’s algorithm in its entirety to be understood until after superintelligence. For example, there’s something in the brain algorithm that says exactly which muscles to contract in order to vomit. Obviously you can make brain-like AGI without reverse-engineering that particular bit of the brain algorithm. More examples in the “Brain complexity is easy to overstate” section here.
You wrote: “I’m surprised you think that the brain’s algorithm is SO simple that it must be discovered soon and ~all at once.”
RE “soon”, my claim (§1.9) was “probably within 25 years” but not with overwhelming confidence.
RE “~all at once”, see §1.7.1 for a very important nuance on that.