I prefer the term “Safe AI,” as it is more self-explanatory to an outsider.
I think it’s more accurate, though the term “safe” has a much larger positive valence than is justified, and so is accurate but misleading. Particularly since it smuggles in EY’s presumptions about whom it’s safe for, and so whom we’re supposed to be rooting for: humans or transhumans. Safer is not always better. I’d rather get the concept of stasis or homogeneity in there. Stasis and homogeneity are, if not the values at the core of EY’s scheme, at least its most salient products.
Safe AI sounds like it does what you say as long as it isn’t stupid. Friendly AIs are supposed to do whatever’s best.
For me, a Safe AI is one that is not an existential risk. “Friendly” reminds me of a “friendly user interface,” i.e., something superficial layered over the core function.