Is there any chance that we (a) CAN’T restrict an AI to be friendly per se, but (b) (conditional on that impossibility) CAN restrict it enough to keep it from blowing up in our faces?
First, define “friendly” in enough detail that I know that it’s different from “will not blow up in our faces”.
Ooh, good catch! wheninrome15 may need to define “will not blow up in our faces” in more detail as well.