Well, not as aligned as the best case—humans often screw things up for themselves and each other, and emulated humans might just do that but faster. (Wei Dai might call this “human safety problems.”)
But probably, it would be good.
Unfortunately, I don’t think this observation informs strategy much, because afaict scanning brains is a significantly harder technical problem than building de novo AI.
I think the mere fact that it isn’t obvious ems will come before de novo AI is sufficient reason to worry about the case where they don’t—possibly while also directing more capabilities development toward creating ems (whatever that would look like).
Also, would ems actually be powerful and capable enough to reliably stop a world-destroying non-em AGI, or an em about to make some world-destroying mistake because of its human-derived flaws? Or would we need to arm them with additional tools that fall under the umbrella of AGI safety anyway?
The only reason we care about AI safety is that we believe the consequences are potentially existential. If they weren’t, there would be no need for safety.