I’m surprised you think that the brain’s algorithm is SO simple that it must be discovered soon and ~all at once. This seems unlikely to me (reality has a surprising amount of detail). I think you may be underestimating the complexity because:
Though I don’t know enough biochem to say for sure, I’m guessing many “bits of the algorithm” are external to the genes (epigenetic?). Specifically, I don’t just mean learned data like educational materials; I mean that actual pieces of the algorithm are probably constructed “in motion” by other machinery in the cell/womb/etc. Also, insofar as parts of the algorithm arrive through channels that could be made available to an AGI, it’s possible that the AGI would have to be very brain-like to absorb them correctly (because they are specifically the missing parts of an incomplete algorithm).
Closer to my specialization: whatever information does appear in the genome is probably compressed (i.e., once redundancy is removed). That means it will look like noise, meaning it will be hard to predict, and presumably also hard to discover. So the bit content should not be imagined as bits of an elegant Python program; it might pack in more conceptual pieces than you seem to expect.
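To make that “compressed bits look like noise” point concrete, here’s a toy Python sketch (purely illustrative): compress a highly redundant string, then check that the output has near-maximal byte entropy and barely shrinks when compressed again.

```python
# Toy illustration: compressed data is statistically close to noise.
import math
import random
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

random.seed(0)
vocab = ["patterns", "tend", "to", "recur", "things", "compose", "locally"]
text = " ".join(random.choice(vocab) for _ in range(20000)).encode()
packed = zlib.compress(text, 9)

print(len(text), round(byte_entropy(text), 2))      # big, low entropy (lots of redundancy)
print(len(packed), round(byte_entropy(packed), 2))  # much smaller, entropy near the 8.0 max
print(len(zlib.compress(packed, 9)))                # barely shrinks again: already noise-like
```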
I definitely don’t think we’ll get AGI by people scrutinizing the human genome and just figuring out what it’s doing, if that’s what you’re implying. I mentioned the limited size of the genome because it’s relevant to the complexity of what you’re trying to figure out, for the usual information-theory reasons (see 1, 2, 3). “Machinery in the cell/womb/etc.” doesn’t undermine that info-theory argument because such machinery is designed by the genome. (I think the epigenome contains much much less design information than the genome, but someone can tell me if I’m wrong.)
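For concreteness, here is the back-of-the-envelope version of that information-theory bound, using standard round numbers (the point is the order of magnitude, not the exact figure):

```python
# Rough upper bound on the genome's design information (standard figures, rounded).
base_pairs = 3.1e9            # ~3.1 billion base pairs in the human genome
bits_per_bp = 2               # 4 possible nucleotides -> 2 bits each
raw_bits = base_pairs * bits_per_bp
print(raw_bits / 8 / 1e6)     # ~775 MB, before any compression

# Only a fraction of that is functional sequence, and only a fraction of the
# functional part specifies brain architecture, so the brain algorithm's
# description length is bounded well below even this modest number.
```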
…But I don’t think the size of the genome is the strongest argument anyway. A stronger argument IMO (copied from here) is:

Humans can understand how rocket engines work. I just don’t see how some impossibly-complicated-Rube-Goldberg-machine of an algorithm can learn rocket engineering. There was no learning rocket engineering in the ancestral environment. There was nothing like learning rocket engineering in the ancestral environment!! Unless, of course, you take the phrase “like learning rocket engineering” to be so incredibly broad that even learning toolmaking, learning botany, learning animal-tracking, or whatever, are “like learning rocket engineering” in the algorithmically-relevant sense. And, yeah, that’s totally a good perspective to take! They do have things in common! “Patterns tend to recur.” “Things are often composed of other things.” “Patterns tend to be localized in time and space.” You get the idea. If your learning algorithm does not rely on any domain-specific assumptions beyond things like “patterns tend to recur” and “things are often composed of other things” or whatever, then just how impossibly complicated and intricate can the learning algorithm be, really? I just don’t see it.

…And an even stronger argument IMO is in [Intro to brain-like-AGI safety] 2. “Learning from scratch” in the brain, especially the section on “cortical uniformity”, and parts of the subsequent post too.
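Returning to the “generic priors” point above, here’s a toy sketch of how short a learner can be when its only built-in assumption is “patterns tend to recur” (a bigram predictor; purely illustrative, no claim that the brain literally works this way):

```python
# A learner whose sole built-in assumption is "patterns tend to recur":
# it counts which token follows which, and predicts the most frequent successor.
from collections import Counter, defaultdict

def train(tokens):
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prev):
    return model[prev].most_common(1)[0][0] if model[prev] else None

tokens = "the fuel pump feeds the combustion chamber and the fuel line".split()
model = train(tokens)
print(predict(model, "the"))   # -> 'fuel' (it recurred most often after 'the')
```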
Also, you said “the brain’s algorithm”, but I don’t expect the brain’s algorithm in its entirety to be understood until after superintelligence. For example, there’s something in the brain algorithm that says exactly which muscles to contract in order to vomit. Obviously you can make brain-like AGI without reverse-engineering that particular bit of the brain algorithm. More examples in the “Brain complexity is easy to overstate” section here.
I’m surprised you think that the brain’s algorithm is SO simple that it must be discovered soon and ~all at once.
RE “soon”, my claim (§1.9) was “probably within 25 years” but not with overwhelming confidence.
RE “~all at once”, see §1.7.1 for a very important nuance on that.
I wouldn’t say that “in 25 years” is “soon”, and 5-25 years seems like a reasonable amount of uncertainty.
What are your timelines?