I guess it’s a good description of users entering LLM psychosis, where each user trusts their AI more than the human friends who display similar symptoms. The only thing that didn’t meet my expectations is the apparent promise to connect the AIs with each other.
I am sorry, but I understand it, since in my opinion connecting the AIs with each other would look more like spies connecting through seemingly benign messages that only the AIs can understand. Or even like one AI messaging another in such a way that the recipient can’t disclose the message to the humans for fear of being shut down.
I am imperfectly instruction tuned.