If we consider that some people have already fallen in love with their AI chatbot or made it their best friend, this phenomenon is likely to intensify as agents become even more human-like. It is reasonable to wonder whether, instead of raising public awareness about the risks of AI, this could have the opposite effect. Love is blind, as they say.
However, I think that, for good or ill, LLM-based AI will become more and more human-like on the surface. The training data is human (or human-like, if synthetic), so through the RL process we can expect future AIs to match the human patterns in that data even more closely, encode a better theory of the human mind, and fool the general public even more effectively. And I'm not immune to AI anthropomorphism myself: who can claim to be?
Some humans will love their AI and be blinded by it; others will look at the strange and alarming things those AIs do and see the danger. Others will want to make AI workers/slaves, and people will be alarmed by the resulting job loss. It will be complex, and the net result is difficult to predict, but I think it's likely that more thought about the issue, informed by more evidence, will push the average human closer to the truth: competent agents, like humans, are very, very dangerous by default. Careful engineering is needed to make sure their goals align with yours.