In that post Eliezer just explains in his usual long-winded manner that morality is our brain’s morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn’t. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile?
(I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer’s teachings, back into ordinary common sense. Just so you know.)
To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, examining the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, and that you were mistaken before. No hidden agenda figures into this process; since it doesn't require interacting with anyone, it can in principle be wholly personal: you against the math.
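To be concrete about what such an axiom set looks like (the comment above doesn't name one; this is the standard von Neumann–Morgenstern formalization, offered as one common choice rather than as anything this thread is committed to):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Von Neumann--Morgenstern axioms on a preference relation $\succeq$
% over lotteries $L, M, N$:
\begin{align*}
&\text{Completeness:} && L \succeq M \ \text{or}\ M \succeq L \\
&\text{Transitivity:} && L \succeq M,\ M \succeq N \ \Rightarrow\ L \succeq N \\
&\text{Continuity:}   && L \succeq M \succeq N \ \Rightarrow\
    \exists\, p \in [0,1]:\ pL + (1-p)N \sim M \\
&\text{Independence:} && L \succeq M \ \Rightarrow\
    pL + (1-p)N \succeq pM + (1-p)N
    \ \ \text{for all } p \in (0,1] \text{ and all } N
\end{align*}
% The theorem: these four conditions hold iff there exists a utility
% function $u$, unique up to positive affine transformation, with
\[
  L \succeq M \iff \mathbb{E}_{L}[u] \ \ge\ \mathbb{E}_{M}[u].
\]
\end{document}
```

The conclusions people intuitively disbelieve usually trace back to the independence axiom; the Allais paradox is the classic case where most people's snap judgments violate it.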
Yes, an agent with a well-defined utility function “should” act to maximize it with a rigorous decision theory. Well, I’m glad I’m not such an agent. I’m very glad my life isn’t governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!
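(For reference, here is what "act to maximize it" cashes out to under uncertainty; the notation is mine, not something from this thread:)

```latex
% Expected-utility maximization (compile with the same preamble as the
% block above): given a utility function $u$ over outcomes, beliefs
% $P(s)$ over states $s \in S$, and an outcome function $o(a, s)$, the
% "rigorous" agent picks
\[
  a^{*} \in \operatorname*{arg\,max}_{a \in A} \;
    \sum_{s \in S} P(s)\, u\bigl(o(a, s)\bigr).
\]
% The complaint above: for a human, the analogue of $u$ is not a simple
% scalar like money or offspring count; it is implicitly encoded across
% billions of neurons and is not written down anywhere usable.
```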
No joy in that. We are ignorant and nearly helpless in our attempts to work out that answer accurately. But we can still try: we can still infer some answers, find the cases where our intuitive judgment systematically goes wrong, and make it better!
What if our mind has, embedded in its utility function, the desire not to become more accurately aware of that very function?
What if some people don’t prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?
Then it would be right, for instrumental reasons, to be as self-aware as we need to be during the crunch time when we are working to produce (or support the production of) a non-sentient optimizer, or at least another sort of mind that doesn't have such self-crippling preferences. That mind could be aware on our behalf, and could reduce or limit our own self-awareness if that actually turned out to be the right thing to do.
Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.
OTOH, if it also comes packaged with an inability to notice public assertions that they prefer not to be aware, then you’re safe.
If only… :P
Then how would you ever know? Rational ignorance is really hard.