I don’t have any ontological qualms with the idea of gene editing / opt-in eugenics, but I have a lot of doubt about our ability to use that technology effectively and wisely.
I am moderately in favor of gene treatments that could prevent potential offspring / zygotes / fetuses / people in general from being susceptible to specific diseases or debilitating conditions. If we gain a robust understanding of the long-term effects and there are no red flags, I expect to update to strongly in favor (though it could take a lifetime to gather the necessary data if we can't place extremely high confidence in the theory alone).
In contrast, I think non-medical eugenics is likely to be a net negative, for many of the same reasons already outlined by others.
To repurpose a quote from The Cincinnati Enquirer: The saying “AI X-risk is just one damn cruelty after another,” is a gross overstatement. The damn cruelties overlap.
When I saw the title, I thought, "Oh no. Of course there would be a tradeoff between those two things, if for no other reason than that I hadn't even considered it and I would have hoped there wasn't one." Then as soon as I saw the question in the first header, the rest became obvious.
Thank you so much for writing this post. I’m glad I found it, even if months later. This tradeoff has a lot of implications for policy and outreach/messaging, as well as how I sort and internalize news in those domains.
Without having thought about it enough to give an example: it sounds right to me that in some contexts, appreciating both kinds of risk pushes responses in the same direction (toward more safety overall). But I now have to agree that in at least some important contexts, they push in opposite directions.