I don’t think that in an alternative timeline it was very realistic for us to “rise with the wave” of bio transhumanism. Especially without the acceleration from powerful AI, germline engineering would have been the most likely path to super/transhumans—leaving us behind just like AI would. Still a better timeline, I think.
On the one hand, the current acceleration of biology via AI does suggest to me that biology is much harder to crack than I initially expected. On the other hand, there are still many things to try, both in science and in institutions, that haven’t been tried. So I wouldn’t be surprised if things like curing aging and augmenting adult human intelligence were achievable by a more competent 21st-century civilization through a series of generational moonshots.
I’d also add that we haven’t tried a Manhattan Project on aging. If we viewed ourselves as battling dark gods (which I think is a fair frame!) the way our grandparents were battling the Nazis, I think the motive would be there. And I think we would win. At this point maybe not before our parents die, and possibly not before the eldest among us die, but maybe we can bring them along anyway.
There are other puzzles we haven’t become wise enough to anticipate collectively. Curing aging would leave our institutions in shambles, because they’re largely built on assumptions of mortality and narrow reproductive windows. How do we prepare for a shift like that? I suspect that given the current way we do collective reasoning, the question won’t enter the Overton window until we have the equivalent of a magic pill curing aging and we’re watching the stock market and dating markets crash (or whatever).
One of the reasons I worked on building CFAR was that the species collectively just doesn’t seem to know how to stay focused on a problem that isn’t screaming an immediate threat of death in its face. That struck me as critically upstream of everything, including (but very much not limited to) sorting out AI risk. I think we’re capable of vastly more than we’ve managed so far, even without major advances in tech, simply by getting the memetics right. The scientific revolution was IMO a preview. And thankfully, memetic revolutions can happen extremely quickly. (E.g., Japan industrialized in just one generation.)
For whatever reason, though, the death trance keeps making people act “practical” or “realistic” and give up weirdly early.
Sure—but we, personally, would be dead.
We still probably will be, when AGI is misaligned or misused. But in this timeline we’ve got a chance. Time to fight for it.