Do my values bind to objects in reality, like dogs, or do they bind to my mental representations of those objects at the current timestep?
You might say: You value the dog’s happiness over your mental representation of it, since if I gave you a button which made the dog sad, but made you believe the dog was happy, and another button which made the dog happy, but made you believe the dog was sad, you’d press the second button.
I say in response: You’ve shown that I value my current-timestep estimate of the dog’s future happiness over my current-timestep estimate of my own future estimate of the dog’s happiness.
I think we can say that whenever I make any decision, I’m optimising my mental representation of the world after the decision has been made but before it has come into effect.
Maybe this is the same as saying my values bind to objects in reality, or maybe it’s different. I’m not sure.
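To make the distinction concrete, here is a toy sketch in Python (all of the names are made up; it is only meant to separate the two readings of “valuing the dog’s happiness”):

```python
# Toy sketch of the two-button scenario. Each button is described by what my
# *current* world-model predicts will be true after pressing it.
buttons = {
    "make_dog_sad_but_believe_happy": {
        "dog_happy": False, "i_will_believe_dog_happy": True,
    },
    "make_dog_happy_but_believe_sad": {
        "dog_happy": True, "i_will_believe_dog_happy": False,
    },
}

def value_dogs_future_happiness(predicted):
    # My current-timestep estimate of the dog's future happiness.
    return 1.0 if predicted["dog_happy"] else 0.0

def value_my_future_belief(predicted):
    # My current-timestep estimate of my own future estimate.
    return 1.0 if predicted["i_will_believe_dog_happy"] else 0.0

def choose(utility):
    return max(buttons, key=lambda name: utility(buttons[name]))

print(choose(value_dogs_future_happiness))  # make_dog_happy_but_believe_sad
print(choose(value_my_future_belief))       # make_dog_sad_but_believe_happy
```

Either way the utility is computed from predictions I hold at the current timestep; the two readings differ only in which predicted variable the utility reads off.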
> I think we can say that whenever I make any decision, I’m optimising my mental representation of the world after the decision has been made but before it has come into effect.

I said something similar in this post:

> We can frame choosing between life goals as choosing between “My future with life goal A” and “My future with life goal B” (or “My future without a life goal”). (Note how this is relevantly similar to “My future on career path A” and “My future on career path B.”) [...] It’s important to note that choosing a life goal doesn’t necessarily mean that we predict ourselves to have the highest life satisfaction (let alone the most increased moment-to-moment well-being) with that life goal in the future. Instead, it means that we feel the most satisfied about the particular decision (to adopt the life goal) in the present, when we commit to the given plan, thinking about our future.
Regarding your last point:
> Maybe this is the same as saying my values bind to objects in reality, or maybe it’s different. I’m not sure.
I think it’s the same thing – it’s how “caring about objects in reality” is concretely implemented, given the constraint that you need a model (“map”) of reality to steer your actions.
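A rough sketch of the shape I have in mind, with `WorldModel`, `predict`, and `utility` as hypothetical stand-ins rather than any real API:

```python
# Minimal decision loop. This only illustrates the structure of the claim,
# not an actual implementation.
def choose_action(world_model, actions, utility):
    best_action, best_score = None, float("-inf")
    for action in actions:
        # What gets scored is my representation of the world after the
        # decision has been made but before it has come into effect:
        # the model's prediction, not the world itself.
        predicted_state = world_model.predict(action)
        score = utility(predicted_state)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The utility function only ever sees variables in the map, so caring about the real dog has to cash out as caring about the map’s best current estimate of the dog.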