I’m surprised to see no one in the comments whose reaction is “KILL IT WITH FIRE”, so I’ll be that guy and make a case why this research should be stopped rather than pursued:
On the one hand, there is obviously enormous untapped potential in this technology. I don’t have issues with the natural order of life or some WW2 eugenics trauma. To my (admittedly unfamiliar with the subject) eyes, you propose a credible way to make everyone healthier, smarter, and happier, at low cost and within a generation, which is hard to argue against.
On the other hand, you spend no time on the context in which this technology will be developed. I imagine there will be significant public backlash and that most advances in making superbabies will come from private labs funded by rich tech optimists, so it seems overwhelmingly likely to me that if this technology does get developed in the next 20 years, it will not benefit everyone.
At this point, we’re talking about the far future, so I need to make a caveat for AI: I have no idea how the new AI world will interact with this, but I can condition on a few of the most likely futures.
Everyone dies: No point talking about superbabies.
Cohabitive singleton: No point. It’ll decide whether it wants superbabies or not.
Controlled ASI: Altman, Musk and a few others become kings of the universe, or it’s tightly controlled by various governments.
In that last scenario, I expect the people having superbabies will be the technological and intellectual elites, leading to further inequality, and not enough improvement at scale to significantly raise global life expectancy or happiness… though I guess the premises are already an irrecoverable catastrophe, so superbabies are not the crux in this case.
Lastly, there is the possibility that AI does not reach superintelligence before we develop superbabies, or that the world will proceed more or less unchanged for us; in that case, I do think superbabies will increase inequality for little gains on the scale of humanity, but I don’t see this scenario as likely enough to be upset about it.
So I guess my intuitive objection was simply wrong, but I don’t mind posting this since you’ll probably meet more people like me.
There’s also the option that even if this technology is initially funded by the wealthy, learning curves will then drive down its cost as they do for every technology, until it becomes affordable for governments to subsidize its availability for everyone.