I’m not at all convinced this isn’t a base rate thing. Every year about 1 in 200–400 people have a psychotic episode for the first time. In AI-lab-weighted demographics (more males in their 20s) it’s even higher. And even more people acquire weird beliefs that don’t track with reality, such as finding religion, QAnon, or other conspiracies, but generally continue to function normally in society.
Anecdotally (with a tiny sample size), all the people I know who became unexpectedly psychotic in the last 10 years did so before chatbots existed. If they had gone unexpectedly psychotic a few years later, you can bet they would have had very weird AI chat logs.
I think this misses the point, since the problem is[1] less “one guy was made psychotic by 4o” and more “a guy who developed some kind of AI-oriented psychosis was allowed to continue making important decisions at an AI company, while still believing a bunch of insane stuff.”
[1] Conditional on the story being true.
I agree with your assessment of what the problem is, but I don’t agree that that is the main point of this post. The majority of the post is spent asserting how ‘ordinary’, smart, and high-functioning the victim is, and arguing that we can therefore conclude that everyone, including you, is vulnerable, and that AI psychosis in general is a very serious danger. The suppression is only mentioned in passing at the start of the post.
I also wonder what exactly is meant by ‘AI psychosis’. I mean, my co-worker is allowed to have an anime waifu, but I’m not allowed to have a 4o husbando?