As I disclaimed, the frame of the post does rule out the relevance of this point; it's not a response to anything central to the post's interpretation. I'm more complaining about the background implication that rewards are good (this is not about happiness specifically). Just because natural selection put a circuit in my mind doesn't mean I prefer to follow its instructions, either in the ways natural selection intended or in ways it didn't. Human misalignment relative to natural selection doesn't need to involve rewards at all, let alone the pursuit of superstimuli. Rewards probably play some role in the process of figuring out what is right, but there is no robust reason to expect their contribution to point even in the obvious direction.