In fact, it’s worse than that. Utility is still up for grabs even if it does obey the axioms, because we will soon be able to modify our own utility functions! (If we aren’t already: addictive drugs alter your ability to experience non-drug pleasure, and could psychotherapy change my level of narcissism, or my level of empathy?)
Indeed, the entire project of Friendly AI can be taken to be the project of specifying the right utility function for a superintelligent AI. If any utility function that satisfies the axioms would qualify, then a paperclipper would be just fine.
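To make that concrete, here is a toy sketch of my own (the outcome names and utility numbers are made up): two expected-utility maximizers, each automatically consistent with the von Neumann–Morgenstern axioms by construction, that disagree completely about which world is best.

```python
# Toy illustration (hypothetical outcomes and numbers): two agents who
# rank lotteries by expected utility. Any such agent satisfies the VNM
# axioms, no matter what its utility function happens to value.

# Made-up utility assignments; the axioms put no constraint on these numbers.
u_human  = {"humans flourish": 1.0, "status quo": 0.5, "world of paperclips": 0.0}
u_clippy = {"humans flourish": 0.0, "status quo": 0.5, "world of paperclips": 1.0}

def expected_utility(lottery, u):
    """Expected utility of a lottery, given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

def prefers(a, b, u):
    """True if an agent with utility function u weakly prefers lottery a to b."""
    return expected_utility(a, u) >= expected_utility(b, u)

utopia     = {"humans flourish": 1.0}
clip_world = {"world of paperclips": 1.0}

print(prefers(utopia, clip_world, u_human))   # True  -- perfectly consistent
print(prefers(utopia, clip_world, u_clippy))  # False -- equally consistent
```

Both agents pass every consistency test the axioms can pose; the axioms never ask what the numbers are attached to.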
So not only does “the utility function is not up for grabs” fail in this situation (because I’m saying precisely that women who behave this way are denying themselves happiness); I’m not sure it works in any situation. Even if you are sufficiently rational that you really do obey a consistent utility function in everything you do, that could still be a bad utility function (you could be a psychopath, or a paperclipper).