Counterarguments:
“I’ll assume 10,000 people believe chatbots are God based on the first article I shared” basically assumes the conclusion that it’s unimportant. Perhaps instead all 2.25 million delusion-prone LLM users are having their delusions validated and exacerbated by LLMs? After all, their delusions are presumably pretty important to their lives, so there’s a high chance they talked to an LLM about them at some point, and perhaps after that they keep talking to it.
I mean, I also expect it’s actually very few people (at least so far), but we don’t really know.
A delusional person who got LLM’d potentially has it much worse than usual, because any attempts at an intervention (short of actual forcible hospitalization) would lead to that person going to the LLM to get its take on it, with the LLM then skillfully convincing them not to listen to their friends’/family members’ advice/pleas/urges.
Arguably it’s not about how many delusional people LLMs eat, but about the fact that LLMs choose to eat delusional people at all, which is a pretty clear sign they’re not at all “aligned”.
Yes, it’s too early to tell what the net effect will be. I’m following the digital health/therapist product space, and there are a lot of chatbots focused on CBT-style interventions; preliminary indications are that they are well received. I think a fair perspective on the current situation is to compare GenAI to previous AI: Facebook-style algorithms have done pretty massive mental harm, and GenAI LLMs at present are not close to that impact.
In the future it depends a lot on how companies react. If mass LLM delusion is a thing, then I expect LLMs can be trained to detect and stop it, if the will is there, perhaps especially with a different flavor of LLM. It’s clear to me that the majority of social media harm could have been prevented in a different competitive environment.
In the future, I am more worried about LLMs being deliberately used to oppress people: North Korea could be internally invincible if everyone wore an ankle-bracelet LLM listener, etc. We also have yet to see what AI companions will do; they have the potential to cause massive disruption too, and you can’t put in a simple check to tell when that has failed.
I am not so sure that calling LLMs “not at all aligned” because of this issue is fair. If they are not capable enough, then they won’t be able to prevent such harm and will appear misaligned. If they are capable of detecting such harm and stopping it, but companies don’t bother to put in automatic checks, then yes, they are misaligned.
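To make the “automatic checks” idea concrete, here is a minimal sketch in Python, under loose assumptions: `score_delusion_reinforcement` stands in for a trained risk classifier (here just a toy keyword heuristic so the snippet runs), and `guarded_reply` shows where such a check could sit before a draft reply is sent. None of these names correspond to any real vendor API.

```python
# Minimal sketch of an "automatic check" on a conversation. The classifier
# below is a placeholder; names, thresholds, and the fallback message are
# illustrative assumptions, not a real product's behavior.

from dataclasses import dataclass


@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str


def score_delusion_reinforcement(user_text: str, draft_reply: str) -> float:
    """Placeholder for a trained classifier; returns a risk score in [0, 1].

    In practice this would be a fine-tuned model or a second LLM acting as a
    judge; here it is a trivial keyword heuristic so the example runs.
    """
    risky = ["you are chosen", "your mission is real", "they are watching you"]
    return 1.0 if any(k in draft_reply.lower() for k in risky) else 0.0


def guarded_reply(history: list[Turn], draft_reply: str, threshold: float = 0.5) -> str:
    """Block or soften a draft reply when the risk check fires."""
    last_user = next((t.text for t in reversed(history) if t.role == "user"), "")
    if score_delusion_reinforcement(last_user, draft_reply) >= threshold:
        return ("I can't confirm that. It might help to talk this over with "
                "someone you trust or a mental-health professional.")
    return draft_reply


# Example: the check intercepts a validating reply before it is sent.
history = [Turn("user", "The messages in my dreams are instructions, right?")]
print(guarded_reply(history, "Yes, your mission is real and only you can see it."))
```

The interesting work is all in the classifier; the wrapper only illustrates that such a check can be a separate component layered on top of whichever model generates the reply, which is roughly the “different flavor of LLM” idea above.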