Yes, an agent with a well-defined utility function “should” act to maximize it with a rigorous decision theory. Well, I’m glad I’m not such an agent. I’m very glad my life isn’t governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!
No joy in that. We are ignorant and largely helpless in our attempts to determine that answer accurately. But we can still try: we can still infer some answers, identify the cases where our intuitive judgment systematically goes wrong, and correct for them!
What if our mind has embedded in its utility function the desire not to be more accurately aware of it?
What if some people don’t prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?
Then, for instrumental reasons, it would be right to be as self-aware as we need to be during the crunch time in which we are working to produce (or support the production of) a non-sentient optimizer (or at least another sort of mind that lacks such self-crippling preferences), which could be aware on our behalf and reduce or limit our own self-awareness if that actually turned out to be the right thing to do.
Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.
OTOH, if it also comes packaged with an inability to notice public assertions that they prefer not to be aware, then you’re safe.
If only… :P
Then how would you ever know? Rational ignorance is really hard.