I like your perspective here, but I don’t think it’s a given that human brains are necessarily aligned with human ‘values’. It entirely depends on how we define human values. Let’s suppose that long-term existential risk reduction is a human value (it ranks highly in nearly all moral theories). Because of cognitive limitations and biases, most human brains aren’t aligned with this value.