To the extent that maternal instincts are some actual, small, concrete set of things, you are probably making two somewhat opposite mistakes here: imagining something that doesn’t truly run on maternal instinct, and assuming that mothers actually care about their babies (for a certain definition of “care”).
You say that mothers aren’t actually “endlessly selfless, forever attuned to every cry, governed by an unshakable instinct to nurture”, that there are “identities beyond ‘mum’ to be kept alive”, and that there are nights when the instinct disappears. But that’s because you feel exhaustion and care about things other than your children; we don’t need to build those features into an AI. If “maternal instincts” are, or can be translated into, utility functions, we could just set them as the only thing an AI optimises, and those issues would be gone. I’m not sure that the domination you talk about is actually part of maternal instincts, and even less that narcissism is.
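To make that contrast concrete, here is a toy sketch (entirely my own illustration; the state keys, weights, and function names are made up, not anyone’s real proposal) of the difference between a human parent, whose caring competes with exhaustion and other goals, and an agent whose only objective is the formalised instinct:

```python
# Toy illustration (hypothetical names and weights): if "maternal instinct"
# could really be written down as a utility function, an AI could optimise it
# and nothing else, so the failure modes that come from competing human drives
# (exhaustion, other identities) never arise.

def maternal_utility(state):
    # Stand-in for a formalised maternal instinct.
    return state["child_wellbeing"]

def human_parent_utility(state):
    # A human trades the same instinct off against exhaustion and other goals.
    return (state["child_wellbeing"]
            - state["exhaustion"]
            + 0.5 * state["other_projects"])

def choose_action(actions, predict_next_state, utility):
    # Pick whichever action leads to the highest-utility predicted state.
    return max(actions, key=lambda a: utility(predict_next_state(a)))
```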
Then, the case against maternal instincts might be stronger than you think. First, mothers have preferences about their babies that are clearly not caring about them (by which I mean caring about the baby’s values). They might want to be near them, to look at them, to feel their baby hold onto them, and none of these preferences exist because they are what the baby prefers. Second, mothers might not care about what the baby cares about at all. Rather, they might be doing something we could call altruistic empathy: wanting what they think something like themselves would want if it were taking the actions the baby takes (which is a big issue if we want to do the same with AIs). A baby might cry because it is uncomfortable, and its mother might comfort it, but the cry can (in some cases) be a hardcoded mechanism, separate from any consequentialist reasoning derived from the baby’s wants.

If mothers actually cared about their babies (and were able to think through the consequences of that), they wouldn’t expose them to the world. At birth, babies don’t care about, say, dinosaurs, because they haven’t seen any, or about trees, or really about objects of any kind. By showing them reality, mothers change their values (if they have any), transforming them into something a younger version of the baby would not endorse; it’s just that babies are not smart enough to do anything about it. I think something like this has already been discussed elsewhere, under the name of super babies (not the genetically modified ones), but right now I can’t find it. Mothers may have preferences over the preferences of their children, and use their babies as raw material for creating the version of the child they want. With older children and other people, I’m not sure the mechanism for care is the same (although it might be, which would be worrying).
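To make the altruistic-empathy point above more concrete, here is a sketch in my own toy framing (all names are hypothetical): the inference runs through the carer’s model of how *they* would act, not through the baby’s actual values.

```python
# Illustrative sketch of "altruistic empathy" (my terminology, made-up names):
# the carer watches the baby's actions and asks "if *I* were acting like this,
# what would I want?", then satisfies that answer. Because the explaining is
# done by the carer's own behaviour model, the result only tracks the baby's
# real values when the baby happens to be wired like the carer.

def infer_want(observed_actions, candidate_wants, likelihood_under_my_policy):
    # likelihood_under_my_policy(want, actions): how likely *I* would be to
    # take these actions if I had this want. Note that it is the carer's
    # policy, not the baby's, that does the explaining.
    return max(candidate_wants,
               key=lambda w: likelihood_under_my_policy(w, observed_actions))

def respond(observed_actions, candidate_wants, likelihood_under_my_policy, satisfy):
    # Comfort, feed, etc. according to the projected want.
    inferred = infer_want(observed_actions, candidate_wants,
                          likelihood_under_my_policy)
    return satisfy(inferred)
```

The same structure, with humans in place of babies, is the worry I mean about doing this with AIs: the system would be caring about its own projection of us, not about us.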
Also, you might want to read about cooperative inverse reinforcement learning, in case you aren’t aware of it. Utility functions are probably perfectly fine; it’s just that there needs to be a mechanism for updating them from sensory data.
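For what it’s worth, here is a minimal sketch of the “updating from sensory data” part, in the spirit of CIRL-style value learning (illustrative only: real CIRL is a two-player game, and `q_value` here is an assumed, user-supplied model of how good an action is under a given reward parameter):

```python
import math

# Keep a posterior over reward/utility parameters theta and update it each
# time a (noisily rational) human action is observed.

def boltzmann_likelihood(action, state, theta, actions, q_value, beta=2.0):
    # P(action | state, theta) under a Boltzmann-rational model of the human.
    scores = {a: math.exp(beta * q_value(state, a, theta)) for a in actions}
    return scores[action] / sum(scores.values())

def update_belief(belief, state, observed_action, actions, q_value):
    # Bayes rule: posterior(theta) is proportional to
    # prior(theta) * P(observed_action | state, theta).
    unnormalised = {
        theta: p * boltzmann_likelihood(observed_action, state, theta,
                                        actions, q_value)
        for theta, p in belief.items()
    }
    total = sum(unnormalised.values())
    return {theta: p / total for theta, p in unnormalised.items()}
```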