Genetic enhancement seems like a safe-ish way of getting a few standard deviations without yet knowing what you are really doing, one that current humanity could actually attempt in practice. And that might help a lot both with the “knowing what you are doing” part and with not doing irreversible things before you know what you are doing. Any change risks misalignment: uplifting to a superintelligence requires ASI-grade alignment theory and technology, and even lifespans for baseline biological humans that run into centuries risk misalignment (since this has never happened before). There’s always cryonics, which enables waiting for future progress, if civilization were at all serious about it.
So when you talk about “merging with AI”, that is very suspicious, because a well-developed uplifting methodology doesn’t obviously look anything like “merging with AI”. You become some kind of more capable mind that’s different from what you were before, without taking irreversible steps towards something you wouldn’t endorse. Without such a methodology, it’s a priori about as bad an idea as building superintelligence in 2029.
I usually think about “reversible merges” for the purpose of intelligence augmentation (not for the purpose of space travel, though).
I tend to think that high-end non-invasive BCIs are powerful enough for that and safer than implants. But yes, there still might be serious risks, both personal and existential.