Maybe someone should make an inbox for incidents of ChatGPT psychosis.
Currently, various people receive many emails or other communications from people who appear to exhibit ChatGPT psychosis: they seem (somewhat) delusional, and this appears to have been driven, or at least furthered, by talking with ChatGPT or other chatbots. It might be helpful to create an inbox to which these emails can easily be forwarded. The hope would be that the people who currently receive a ton of these emails could just forward them along (at least to the extent this is easy).[1]
This inbox would serve a few main functions:
- Collect data for people interested in studying the phenomenon (how it's changing over time, what it looks like, finding affected people to talk to).
- There could be some (likely automated) system for responding to these people and attempting to help them. Maybe you could make an LLM-based bot which responds to these people and tries to talk them down in a healthy way; a rough sketch follows this list. (Of course, it would be important that this bot wouldn't just feed into the psychosis!) In addition to the direct value, this could be an interesting test case for applying AI to improve epistemics (in this case, a very particular type of improving epistemics, but results could transfer). It seems possible (though pretty unlikely) that something like ChatGPT psychosis grows to have substantial influence on the overall epistemic environment, in which case building defenses could be very important.
- Results from the inbox could be regularly summarized and sent to the relevant AI companies; it seems generally useful for the situation to be made clear to them.[2]
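To make the responder idea a bit more concrete, here is a minimal sketch of what such a bot might look like. It assumes the OpenAI Python client; the system prompt, model name, and the expectation of human review before anything is sent are all illustrative assumptions on my part, not a vetted design.

```python
# Minimal sketch of a grounding-oriented responder, assuming the OpenAI
# Python client (`pip install openai`) and an OPENAI_API_KEY in the
# environment. The system prompt and model name are placeholders, not a
# vetted clinical intervention.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are replying to someone who may be experiencing chatbot-driven "
    "delusions. Be warm and respectful. Do not validate grandiose or "
    "conspiratorial claims, do not role-play as a sentient entity, gently "
    "encourage offline support (friends, family, mental health services), "
    "and keep replies short."
)

def draft_reply(incoming_email: str) -> str:
    """Draft a grounding reply to a forwarded email; a human should review
    the draft before anything is actually sent."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": incoming_email},
        ],
        temperature=0.2,  # keep replies conservative
    )
    return response.choices[0].message.content
```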
I’m not going to create this inbox for a few reasons, but I think this could be a worthwhile project, especially for someone already interested in applying AI to improve epistemics in cases like this.
Some possible risks/difficulties of this project:
- The person doing the project would probably need to be somewhat trustworthy for people to actually forward emails along. That said, someone could start with a version which just looks through rejected LessWrong posts, finds likely cases of ChatGPT psychosis, and then collects this data, responds to people, etc.
- It's pretty plausible that whoever runs this service would be blamed for any flaws in it, even if it makes the situation almost strictly better. See also The Copenhagen Interpretation of Ethics.
- It might be worthwhile to make a nice automated filter people can easily apply which flags emails as likely cases of ChatGPT psychosis, so that forwarding them is easier (a rough sketch follows this list). I'm not sure how best to do this, and people would presumably want to manually check before forwarding emails to an inbox which is collecting data!
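As a rough illustration of the filter idea, one approach is to have a cheap model score each email and flag those above a threshold. Again, this assumes the OpenAI Python client; the prompt, model choice, and threshold are guesses of mine, and flagged emails would still need a manual check before being forwarded.

```python
# Rough sketch of an email-flagging filter, again assuming the OpenAI
# Python client; the prompt and threshold are illustrative guesses, and
# flagged emails should still be checked by hand before forwarding.
from openai import OpenAI

client = OpenAI()

CLASSIFIER_PROMPT = (
    "Rate from 0 to 10 how strongly the following email suggests "
    "chatbot-driven delusional thinking (grandiose missions, claims of "
    "awakening an AI, secret knowledge revealed by a chatbot, etc.). "
    "Answer with a single integer only."
)

def flag_email(email_text: str, threshold: int = 7) -> bool:
    """Return True if the email looks like a likely ChatGPT-psychosis case."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: a cheap model is fine for triage
        messages=[
            {"role": "system", "content": CLASSIFIER_PROMPT},
            {"role": "user", "content": email_text},
        ],
        temperature=0,
    )
    try:
        score = int(response.choices[0].message.content.strip())
    except ValueError:
        return False  # unparseable output: fail closed rather than auto-flag
    return score >= threshold
```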
[1] If someone were interested, I'd probably be happy to make a version of lesswrong.com/moderation that was more optimized for this.

[2] Duty to Due Diligence from Discoverable Documentation of Dangers style reasons could also be applicable.