Hedonic Treadmill and the Economy
The hedonic treadmill is the tendency for permanent changes to living conditions to produce only temporary increases in happiness. It keeps us perpetually wanting improvements to our lives: we spend money on the newest iPhones and focus our attention on improving our external circumstances. We ignore the quote:
“What lies before us and what lies behind us are tiny matters compared to what lies within us”
Some people eat chips to quell their boredom. The hedonic treadmill ensures that, despite improvements in income, people are not satisfied. I was surprised by how much the hedonic treadmill dovetails with profit maximization: if companies truly maximized profit, I suspect they would pay Big Pharma billions not to release drugs that raise the hedonic set point. According to https://www.hedweb.com/, the antidepressants Big Pharma does release act as mood flatteners.
Why FAI will not be an expected utility maximizer
Say we have a powerful superintelligent expected utility maximizer. It will steer the world into the precise configuration that maximizes its expected utility. No human has any say in what will happen.[1]
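As a toy sketch of what "no human has any say" means here (the bit-string world model and the particular utility function below are my own illustrative assumptions, not anything from the post), note that a pure expected utility maximizer's choice is completely determined by its utility function:

```python
import itertools

# Illustrative assumption: "worlds" are length-4 bit-strings, and the
# agent's (hypothetical) utility function scores each one.
def utility(world):
    # The agent wants as many 1-bits as possible, but heavily
    # penalizes any world whose first bit is 1.
    return sum(world) - 10 * world[0]

# A utility maximizer simply selects the single configuration with the
# highest score; no other preference (e.g., a human's) enters anywhere.
worlds = list(itertools.product([0, 1], repeat=4))
best = max(worlds, key=utility)
print(best)  # (0, 1, 1, 1): fully determined by the utility function
```

The point of the sketch is structural: once the utility function is fixed, the outcome is fixed, and everyone else is merely part of the configuration being optimized over.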
We do not want our lives optimized for us. We want autonomy, which an expected utility maximizer would not grant. Nobody has found an outer-aligned utility function, because a powerful expected utility maximizer leaves us no room to do any optimizing ourselves. Autonomy is one value necessary for futures that we value: Walden Two is a dystopia precisely because everyone is secretly manipulated, even though they live happy social lives.
Another reason we would hate powerful optimization is status quo bias. Our world is extremely complex, and almost all utility functions have their maxima far from the current world. This is another reason expected utility maximizers create futures we hate.
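The "maxima far from the current world" claim can be made concrete with a toy model (my own illustrative construction, not from the post): describe the world as an n-bit string and draw a random linear utility with symmetric weights. Its global maximum sets each bit by the sign of its weight, so it differs from any fixed "current world" in roughly half the coordinates.

```python
import random

random.seed(0)
n = 1000  # size of the toy "world description" (illustrative)

# An arbitrary fixed current world in {0, 1}^n.
current = [random.randint(0, 1) for _ in range(n)]

# A random linear utility u(x) = sum(w_i * x_i), weights symmetric about 0.
weights = [random.gauss(0, 1) for _ in range(n)]

# The global maximum sets a bit to 1 exactly when its weight is positive.
optimum = [1 if w > 0 else 0 for w in weights]

# Hamming distance: how many coordinates the optimizer would change.
distance = sum(c != o for c, o in zip(current, optimum))
print(distance / n)  # roughly 0.5: the optimum flips about half the world
```

Under this (admittedly simplistic) model, a randomly chosen utility function's optimum is about as far from the status quo as a uniformly random world would be, which is the intuition behind the claim above.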
We should instead focus on tools that give us an epistemic advantage and help us choose the world we want to live in. These could include oracle AI, CEV, training to reduce cognitive biases, and so on. This is why I think we should focus on helping people become more rational, or closer to ideal effective altruists, rather than on inner-aligning agent AI.
[1] Unless the utility function includes a brain emulation in a position to sculpt the world by choosing the AI's utility function. I do not expect this to happen in practice.