“One consideration is the differences in cognitive abilities between AGIs and humans may make human preferences easily changeable for AGIs. As an intuition pump: consider a system composed of a five year old child and her parents. The child obviously has some preferences, but the parents can usually change these.”
Notably, though, parents of young children have their purchasing preferences and general interaction with the economy massively skewed by “the preferences of their children”. A fun example of this is how cinema attendance patterns for people aged 35-44 are nearly identical to those of children. Children are a massive and lucrative marketing demographic specifically because their preferences are (1) strong, (2) developmentally less set than those of adults, and (3) less malleable by parents than those parents hope. (Points 2 and 3 can hold simultaneously: children like what their peers are into, what’s endorsed by or advertised alongside media they enjoy, and other ‘nonshared environment’ influences.)
I’d make an even stronger claim: the quoted claim is basically false for a lot of things parents would hope to persuade their children away from. The actual reason we have laws against, say, conversion therapy isn’t that we’re worried conversion therapists or parents will succeed in persuading people to no longer be LGBTQ (to take one example), but rather to prevent people from torturing LGBTQ children, or creating environments in which those children reliably kill themselves, in the attempt to convert them.
Similar reasoning applies to anti-grooming and anti-abuse laws: in practice, they prevent people from using abusive tactics to pressure a child into sex or into doing something else an adult wants. The concern isn’t that the child will actually be persuaded, but that people will damage the child’s health in the attempt to persuade them.
One of my core takeaways from the heritability of a lot of traits is that parents have very little control over what a child will actually be like, and most claims that parents can reliably persuade children to act against their genetic dispositions are bullshit.
On the original topic of AIs being able to persuade humans to easily change their preferences: I do agree that if you let technology advance far enough, humans would be persuadable into arbitrary things, so some form of paternalism is likely to be necessary in the long run. But that point, thankfully, is well past the critical period in which we need to handle x-risks from AI; by then, x-risks will either have already played out into full-blown existential catastrophe or we will have reached existential security. So superpersuasion and massive preference-changing mostly don’t matter from my perspective.