and yet we’re likely happier today than we were in the ancestral environment.
Well, if my goal were to be replaced by something different from me that is happier doing whatever it ends up doing than I am doing what I do, that’s relatively simple. But it doesn’t actually seem to be an adequate description of my goal.
Do you think a typical hunter-gatherer would be happy to be transported to the modern world?
Fair question. I’m not sure.
Immediately upon being transported, certainly not.
I suppose I’m weakly confident that they would experience more happiness over the course of their lives than they would have had they not been so transported, mostly by virtue of avoiding or deferring various happiness-reducing conditions (e.g. disease, pain, malnutrition, the deaths of loved ones, and their own death).
This is largely an expression of my belief that there hasn’t in fact been much value drift between then and now, together with my willingness to treat happiness as a rough measure of compliance-with-values (i.e., utility). If I believed there had been a lot of value drift, my confidence would decrease.
If at time T there’s a fully-compliant-with-my-values system controlling my environment, I similarly expect less utility at some later time if that system experiences value drift than if it doesn’t.