Given the history of sociopathic humans, it seems to me that unfriendly upload self-modifications are significantly more likely than unfriendly AGI to produce the kind of dystopia that only results from an almost-Friendly takeover.
It also seems likely that even a rogue upload would still be at significant risk of being eaten by a proper AGI. Modifying a human so that they can undergo a full-strength intelligence explosion seems equivalent to the problem of building a singularity-class AGI from scratch, with the additional requirement of understanding a fair amount about the human brain and mind.
The question is whether a human solves that problem or leaves it to an upload (or it never gets solved).
True. And, based on no math whatsoever, I would guess that we’re more likely to get FAI if we make uploads than if we don’t.