The reward function that you wrote out is, in a sense, never the one you want them to have, because you can’t write out the entirety of human values.
We want them to figure out human values to a greater level of detail than we understand them ourselves. There’s a sense in which that (figuring out what we want and living up to it) could be the reward function in the training environment, in which case you’d kind of want them to stick with it.
But what would that [life, health and purpose] be for AGI?
Just being concerned with the broader world and its own role in it, I guess. I realize this is a dangerous target to shoot for, and we should probably build more passive assistant systems first (to help us hit that target more reliably when we decide to go for it later on).