Thanks for your comment; I think it raises an important point, though I'm not sure I've understood it correctly. Are you saying that by doing random things that make other people happy, I would be messing with their reward function? That I would, for example, reward and thus incentivise random other things the person doesn't really value?
In writing this, I had indeed assumed that, while happiness is probably not the only valuable thing and we wouldn't want to hook everybody up to a happiness machine, the marginal bit of happiness in our world would be positive and quite harmless. But maybe superstimuli are a counterexample to that? I'll have to think about it more.
As I disclaimed, the frame of the post does rule out the relevance of this point; it's not a central response to the post's interpretation. I'm complaining more about the background implication that rewards are good (this is not about happiness specifically). Just because natural selection put a circuit in my mind doesn't mean I prefer to follow its instructions, either in ways natural selection intended or in ways it didn't. Human misalignment relative to natural selection doesn't need to go along with rewards at all, let alone with seeking superstimuli. Rewards probably play some role in the process of figuring out what is right, but there is no robust reason for their contribution to even point in the obvious direction.