Mind uploading is probably quite hard. See here. It’s probably much easier to get AGI from partial understanding of how to do uploads, than to get actual uploads. Even if you have unlimited political capital, such that you can successfully prevent making partial-upload-AGIs, it’s probably just very technically difficult. Intelligence amplification is much more doable because we can copy a bunch of nature’s work by looking at all the existing genetic variants and their associated phenotypes.
We can certainly do intelligence amplification on the way to mind uploading. I would prefer that we do uploading before moving on to building (other) ASI, rather than doing ONLY intelligence amplification.
We almost certainly want to eventually do uploading, if nothing else because that’s probably how you avoid involuntary pre-heat-death death. It might be the best way to do supra-genomic human intelligence amplification (HIA), but I would rather leave that up to the next generation, because it seems both morally fraught and technically difficult. It’s far from clear to me that we ever want to make ASI; why ever do that rather than just have more human/humane personal growth and descendants? (I agree with the urgency of all the mundane horrible stuff that’s always happening; but my guess is we can get out of that stuff with HIA before it’s safe to make ASI. Alignment is harder than curing world hunger and stopping all war, probably (glib genie jokes aside).)