Shouldn’t AI researchers precommit to not building AI capable of this kind of acausal self-creation? That would lower the chances of disaster both causally and acausally.

And please, define how you tell moral heuristics and moral values apart. E.g. which is “don’t change the moral values of humans by wireheading”?