Would you say that emotional attachment to non-AI is safe? It seems most of these apply to attachments to (some) humans, and to (many) organizations like school, corporation or nation.
I think most of these are risks of emotional attachment, not risks of AI. AI may make nations/brands/teams/politics MORE effective at manipulating emotions, which is a serious risk.
I agree. We have problems with emotional attachment to humans all the time, but humans are more or less predictable, not too powerful, and usually not especially skilled at manipulation.
Fair enough. I think we disagree on how effective humans have already become at emotional manipulation, especially via media, but we probably agree that AI makes the problem worse.
I’m not sure whether we agree that the problem is AI assisting humans with bad motives in such manipulations, or whether it’s attachment TO the AI which is problematic. I mean, some of each, but the former scares me a lot more.