Could there not be AI value drift in our favor, from a paperclipper AI to a moral realist AI?
Possible, but unlikely to occur by accident. Value-space is vast: for any arbitrary biological species, most value systems don't optimize in favor of that species.