I also predict that the real Eliezer would say that many of these outputs were basically not problematic in themselves; they just show how hard it is to stop outputs conditional on having decided they are problematic. The model seems to totally not get this.
Meta level: let’s use these failures to understand how hard alignment is, but not accidentally start thinking that alignment==‘not providing information that is readily available on the internet but that we think people shouldn’t use’.
Why do you think this? It seems like humans currently have values and used to have values (I'm not sure when they started having values), but they are probably different values. Certainly people today have different values across cultures, and people who are part of continuous cultures have different values from people in those same cultures 50 years ago.
Is there some reason to think that any specific human values persisted through the human analogue of SLT?