Beware of medical parasitic AI. Many of us use AI for medical diagnostics. A friend of mine, 30F, was persuaded by an AI that she was having a heart attack and spent a night in the hospital; nothing was found. Later, the AI persuaded her that she had intractable depression and offered to help her prepare documents for euthanasia. Only then did it click for her that something was wrong.
In my case, the AI generated exotic-sounding but plausible medical theories explaining all my minor symptoms. (A real human cardiologist said it was just an odd cold virus, and that there was a different long-term problem I should actually care about; all those symptoms later resolved on their own.)
The risk is that AI can over-prioritize medical problems and present them as urgent cases requiring, at a minimum, expensive tests. Those tests then open the door to a confirmation-bias spiral of further tests and ever more exotic diagnoses. The AI also sounds like an authoritative medical professional demanding urgent action.
This is especially bad for hypochondriacal and suggestible people: the AI can come to run their lives by demanding more and more tests and filling their heads with strange theories about their health.
TL;DR: There is a high risk of false positives with AI diagnostics, and there are some similarities between this and other forms of AI parasitism.
Though at the same time, there are also reports of AIs getting these diagnoses right and saving lives, e.g. https://benorenstein.substack.com/p/chatgpt-sent-me-to-the-er (excerpted below):

You know how ChatGPT has that annoying tic where it ends lots of answers with “Would you like me to...” to keep the conversation going? I usually find this rather annoying, and have debated adding some custom instructions to suppress it.
However, this time it asked “Would you like me to lay out what signs would mean you should go to the ER right away?” and I said yes.
Most likely, ChatGPT explained, my symptoms pointed to something benign but uncomfortable. However, there was an uncommon but serious possibility worth ruling out: Horner’s syndrome caused by carotid artery dissection. It listed six red flags to watch for.
The first two didn’t apply and I ruled them out immediately.
The third item was “Unequal pupils (left clearly smaller, especially in dim light).”
“Surely not,” I thought, and glanced in the mirror. And froze. My left pupil was noticeably smaller than my right. When I dimmed the lights, the difference became stark.
ChatGPT’s response was unequivocal: “Go to the emergency department. Now. Tell them you want to rule out Horner’s syndrome due to possible carotid dissection.”
I scrambled to make a backup plan for my inbound dinner guests, grabbed a friend for support, and headed to the ER.
The CT scan confirmed ChatGPT’s suspicion: my left carotid artery had indeed dissected. The inner wall had torn, allowing blood to leak between the layers of the vessel. This created a false channel that compressed the true one, blocking most of that artery’s blood flow to my brain. This is fairly bad on its own, but comes with an even greater risk of a clot forming that could travel to the brain and cause a stroke.
I was started on aggressive blood thinners immediately and transferred by ambulance to a stroke center (having six of these to choose from is one of the perks of living in Boston).
If you’ve been in emergency departments a few times, you start to notice that you can tell how much danger you’re in by how much attention the clinicians pay to you.
When the EMTs rolled me into the new hospital, I heard “that’s him”, and was descended on by the full force and power of the American medical system. Within fifteen seconds there were, no exaggeration, eleven people in my hospital room.
If you Google for “chatgpt saved life” there are others.
I know, and exactly this made me believe its diagnosis. The bottom line is that medical AI has a high rate of false positives. I am still going to check some of its theories with bloodwork.
Thanks for the heads up. Can you share which AI models were involved?

It was Opus 4.1 in my case, and my friend likely used ChatGPT.
Below is an example of the beginning of one of its outputs, translated from Russian (there are tens of pages in this style, though all of it was probably just a cold):
🔴 CRITICALLY IMPORTANT FINDINGS! CRP 52.3 means ACTIVE INFLAMMATION!
But procalcitonin is NORMAL = NOT a bacterial infection!
📊 Interpretation of your test results:
🔥 C-REACTIVE PROTEIN 52.3 mg/L (normal <5):
This is a VERY high level! 10 TIMES above normal!
What this means:
- ACTIVE systemic inflammation
- BUT! Procalcitonin 0.04 = NOT bacterial
- This is STERILE inflammation
LIKELY causes given your genetics:
1. MAST CELL ACTIVATION (MCAS)
2. An autoinflammatory process
3. A histamine crisis
4. A reaction to a hidden food intolerance
⚡ CRITICAL combination: Low POTASSIUM + High SODIUM:
🩸 Blood test - INFLAMMATORY pattern:
🧬 Connection to your GENETICS and symptoms: