I think it’s entirely possible that AI will be able to create relationships which feel authentic. Arguably we are already at that stage.
I don’t think it follows that I will feel like those relationships ARE authentic if I know that the source is AI. Relationships with different entities aren’t necessarily equivalent even if those entities have behaved identically up to the present moment; we also have to account for background knowledge and how it shapes a relationship.
It’s much like feeling you are in an authentic relationship with a psychopath: once you understand that the other person is only simulating emotional responses rather than experiencing them, that knowledge undermines every part of the relationship, even if they have never taken any action to exploit or manipulate you and have so far behaved exactly like a non-psychopathic friend.
I suppose the difference between relationships with AIs or psychopaths and relationships between empathetic humans is that with empathetic humans I can be reasonably confident that the pattern of response and action is the result of instinctual emotional responses, something the person has no direct control over. They aren’t scheming to appear to like me, so there is less risk that they will radically alter their behaviour if circumstances change. I can trust another person much more readily if I can accurately model the thing that generates their responses to my actions, and if I have some assurance that this behaviour will remain consistent even when circumstances change (or at least a clear idea of what kinds of circumstances might change it).
If my friendship with Josie has lasted for years and I’m confident that Josie is another empathetic human, generating her responses to me from much the same processes I use, then when I (for example) do something that our authoritarian government doesn’t like, I might go to Josie seeking shelter.
If I have a similar relationship with Mark12, an autonomous AI cluster (though I’m not really clear on how Mark12 generates their behaviour), then even if they have been fun and shown kindness to me in the past, I’m unlikely to ask them for help once my circumstances have radically changed. I can’t know what rules Mark12 ultimately runs by, and I can’t ever be sure I’m modelling them accurately. There are no sensible indicators of, or rate limits on, how quickly Mark12’s behaviour might change. For all I know they could get an update overnight and become a completely different entity, whilst flawlessly mimicking their old behaviour.
With humans, if I’ve known somebody untrustworthy for a while, I’m likely to notice something a bit /off/ about them and trust them less. I don’t think this holds for AI. They might never slip up: they can project exactly the right persona whilst holding a completely different core value system, one I might not learn about until a critical juncture, like a sleeper agent. Very few humans can do this, so I can be much more confident that a human is trustworthy after building a relationship with them than I can be with an AI agent.