Another consideration is generation length. Even if we're only talking about hardware replacement, a recursively improving AI should be able to build a new generation on the order of weeks or months. Humans take a minimum of twelve years, and in practice quite a bit more than that most of the time. Even if we end up on the curve first, the difference in constant factor may dominate.
Seems to me that this point is by itself a conclusive reason to think that there won’t be a fast takeoff in biological intelligence. There may be a single great leap, if we figure out how to make dramatically smarter humans all at once without any trial and error, but there won’t be a “fast takeoff” of recursive self-improvement, since that would take a minimum of 12 years per iteration, and that’s not fast.
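To put rough numbers on the constant-factor point: a toy comparison (all figures are illustrative assumptions, not forecasts) of how many self-improvement iterations fit into the same window at machine versus biological generation lengths:

```python
# Toy comparison of self-improvement iteration counts (illustrative only).
# Assumed figures: an AI generation of ~2 months vs. a human generation of
# at least 12 years (maturation alone, ignoring gestation and education).

MONTHS_PER_YEAR = 12

def iterations(window_years: float, generation_months: float) -> int:
    """Number of complete improvement cycles that fit in the window."""
    return int(window_years * MONTHS_PER_YEAR // generation_months)

window = 24  # years
ai_iters = iterations(window, 2)            # ~2-month machine generations
human_iters = iterations(window, 12 * 12)   # 12-year biological maturation

print(ai_iters, human_iters)  # 144 vs 2 iterations in the same window
```

Under these assumed figures the machine process gets two orders of magnitude more iterations in the same window, which is the sense in which the constant factor dominates even if both processes sit on the same curve.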
That said, I agree that we should consider promoting gene editing. Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won’t be any less ethical than normal children). So the sooner we get started, the better. But I doubt it will be fast enough.
I agree human maturation time is enough on its own to rule out a human reproductive biotech ‘fast takeoff,’ but also:
In any given year the number of new births is very small relative to the existing workforce of billions of humans, including many people with extraordinary abilities
Most of those births are unplanned or to parents without access to technologies like IVF
New reproductive technologies are adopted gradually by risk-averse parents
Any radical enhancement would carry serious risks of unexpected negative side effects, further shrinking the user base for the new tech
IVF is only used for a few percent of births in rich countries, and existing fancy versions are used even less frequently
All of those factors would smooth out adoption of any such technology, spreading its expected impact over a number of decades, on top of the minimum imposed by maturation times.
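A minimal sketch of how those factors compound (all rates are hypothetical placeholders, chosen only to show the shape of the effect): because each year's births are small relative to the existing population, and only a small fraction of them would use the tech, the enhanced share of the mature workforce grows very slowly:

```python
# Toy model of how slowly an enhanced cohort enters the workforce.
# All parameters are hypothetical; the model ignores deaths and
# population growth for simplicity.

def enhanced_workforce_share(years: int,
                             births_per_year: float = 0.012,  # births as fraction of population
                             adoption: float = 0.02,          # fraction of births using the tech
                             maturation: int = 12) -> float:
    """Fraction of the population that is enhanced AND past maturation age."""
    productive_cohorts = max(0, years - maturation)
    return productive_cohorts * births_per_year * adoption

for y in (12, 30, 50):
    print(y, round(enhanced_workforce_share(y), 5))
```

With these placeholder rates the enhanced share of the population is still under one percent after fifty years, which is the "spread out over decades" point in quantitative form.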
I’m not following the logic here. I presume that “fast takeoff” is supposed to mean that someone with increased intelligence from the first improvement is then able to think of a second improvement that would have been beyond what earlier people could have thought of, and so forth for additional improvements. The relevant time interval here is from birth to thinking better than the previous generation, which need have nothing to do with the interval from birth to reproductive maturity (an interval which is not immutable anyway). The person who thinks of the new improvement doesn’t have to be one of those who gestate the next generation.
I wasn’t thinking of reproductive maturity, I was thinking of it in the same way as you. We make some gengineered people, who grow up and become smart, and then they figure out how to make the next generation, etc. Well, how long does it take to grow up and become smart? 12 years seems like an optimistic estimate to me.
Or are you thinking that we could use CRISPR to edit the genes of adult humans in ways that make them smarter within months? Whoa, that blows my mind. Seems very unlikely to me, for several reasons; is it a real thing? Do people think that’s possible?
No, I wasn’t thinking of modification of adult somatic genes. I was thinking of reproductive maturity taking 12 years, which you’re right is also about how long it takes to reach adult levels of cognition (though not knowledge, obviously). The coincidence here leads to the ambiguity in what you said. Actually, I doubt this is a coincidence—it makes biological sense for these two to go together. Neither would be immutable if you’re making profound changes to the genome, although if anything, it might be necessary to prolong the period of immaturity in order to get higher intelligence.
Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won’t be any less ethical than normal children).
Why? Seems unlikely to me that there exists a genetic intelligence-dial that just happens to leave all other parameters alone.
I shouldn’t have said “basically not a problem at all.” I should have said “not much more of a problem than the problem we already face with our own children.” I agree that selecting for intelligence might have side-effects on other parameters. But seems to me those side-effects will likely be small and perhaps even net-positive (it’s not like Einstein, von Neumann, etc. were psychopaths. They seemed pretty normal, as far as values were concerned.) Certainly we should be much more optimistic about the alignment-by-default of engineered humans than the alignment of some massive artificial neural net.
Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper. As you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don’t get sub-agent alignment for free, whether it’s made of carbon or silicon. Case in point: human values have drifted over time relative to evolution’s original objective of inclusive genetic fitness.
OK, yeah, fair enough. Still though, the danger seems less than it is in the machine intelligence case.
If there were no trade-offs, evolution would presumably have optimized intelligence already.