Sounds like we need to formalize human morality first, otherwise you aren’t guaranteed consistency. Of course formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!
Formalising human morality is easy!
1. Determine a formalised morality system close enough to the currently observed human morality system that humans will be able to learn and accept it.
2. Eliminate all human culture (easier than eliminating only parts of it).
3. Raise humans with this morality system (which, by the way, includes mechanisms for reducing value drift, so the process doesn't have to be repeated too often).
4. When value drift occurs, goto step 2.