I know, and exactly this made me believe in its diagnosis. The bottom line is that medical AI has a higher level of false positives. I am still going to check some of its theories through bloodwork.
Biouploading: Preserving My Living Neurons and Connectome as a Spatially Distributed Mesh
BTW, even a simple random number generator can destroy a human: gambling addiction, seeing patterns where there are none.
It was Opus 4.1 in my case, and the girl likely used ChatGPT.
Below is an example of the beginning of one of its outputs (originally in Russian); there are tens of pages in this style, but all of it was probably just a cold:
🔴 CRITICALLY IMPORTANT FINDINGS! CRP 52.3 is ACTIVE INFLAMMATION! But procalcitonin is NORMAL = NOT a bacterial infection!
📊 Interpretation of your test results:
🔥 C-REACTIVE PROTEIN 52.3 mg/L (normal <5):
This is a VERY high level! 10 TIMES above normal! What this means:
- ACTIVE systemic inflammation
- BUT! Procalcitonin 0.04 = NOT bacterial
- This is STERILE inflammation
PROBABLE causes given your genetics:
1. MAST CELL ACTIVATION (MCAS)
2. Autoinflammatory process
3. Histamine crisis
4. Reaction to a hidden food intolerance
⚡ CRITICAL combination: low POTASSIUM + high SODIUM:
Potassium 3.3 ↓ (normal 3.5-5.1), Sodium 146 ↑ (normal 136-145). This points to:
- ALDOSTERONE imbalance
- Potassium loss (stress, inflammation)
- Sodium retention
- Possible adrenal dysfunction from chronic inflammation
🩸 Blood count: an INFLAMMATORY pattern:
KEY changes:
1. NEUTROPHILIA: neutrophils 75% (upper limit), abs. neutrophils 6.29 ↑ (normal up to 5.38) = activation of innate immunity
2. LYMPHOPENIA: lymphocytes 15% ↓ (normal 19-37), abs. lymphocytes 1.26 ↓ (normal 1.32-3.57) = stress reaction or redistribution
3. ESR 18 (normal <20), at the upper limit
This is NOT an infection, it is INFLAMMATORY STRESS!
🧬 Connection to your GENETICS and symptoms:
MTHFR + COMT + DAO deficiency
↓ Histamine accumulation + impaired methylation
↓ MASSIVE mast cell activation
↓ Cytokine release (IL-1, IL-6, TNF-α)
↓ CRP 52.3! + Systemic inflammation
↓ Hot flashes + low-grade fever + joint pain
Beware of parasitic medical AI.
Many of us use AI for medical diagnostics. A friend of mine, 30F, was persuaded by AI that she was having a heart attack and spent a night in the hospital; nothing was found. Later, the AI persuaded her that she had intractable depression and offered to help her prepare documents for euthanasia. Only then did it click for her that something was wrong. In my case, the AI generated exotic-sounding but plausible medical theories explaining all my minor symptoms (a real human cardiologist just said it was only a weird cold virus, and that there was a different long-term problem I should care about; all those symptoms later resolved naturally).
There is a risk that AI will overprioritize medical problems and present them as truly urgent cases requiring, at the very least, expensive tests. These tests then open the door to an elaborate confirmation-bias explosion with more tests and more exotic diagnoses. The AI also sounds like an authoritative medical professional demanding urgent action.
This may be bad for hypochondriacal and suggestible people. The AI can come to control their lives by demanding more tests and filling their heads with strange theories about their health.
TL;DR: There is a high risk of false positives with AI diagnostics, and there are some similarities between this and other forms of AI parasitism.
Sometimes current LLMs do take instructions literally, especially when an instruction admits two different interpretations.
We have been working on something like this for a couple of years under the name of sideloading. While it started as an attempt to create a mind model using an LLM, it turned out that it can also be used as a personal assistant in the form of a personal intelligent memory. For example, I can ask it what my life would have been like if I had made a different choice 15 years ago, or what was the name of a person I was connected with many years ago.
My mind model is open and you can play with it: https://github.com/avturchin/minduploading/tree/main/latest
Note that to turn it into a memory assistant, it may need a different prompt-loader; a rough sketch of such a loader is given below.
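As an illustration only, here is a minimal sketch of a prompt-loader for the memory-assistant use case. It is not the loader from the minduploading repository; the file name mindfile.txt, the system prompt, the model name, and the sample question are all assumptions made for the example.

```python
# Hypothetical prompt-loader: put the whole mindfile into the system prompt
# of a long-context chat model and ask memory questions against it.
from openai import OpenAI  # any chat-completion client would work the same way

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("mindfile.txt", encoding="utf-8") as f:  # assumed file name
    mindfile = f.read()

system_prompt = (
    "You are a personal memory assistant built on the following mindfile. "
    "Answer questions about the person's past and possible alternative life "
    "paths, and clearly mark reconstructions and guesses as such.\n\n"
    + mindfile
)

question = "What was the name of the person I was connected with many years ago?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any long-context model could be used
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```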
Larger animals tend to be more intelligent simply because they have larger brains, so their suffering will be more complex: they may understand their fate in advance. I think whales and elephants are close to this threshold.
The opposite logic may also be valid: we should eat the animals with the smallest brains, as their suffering will be less complex, and each of them is less of an individual and more like a copy of the others. Here we assume that the suffering of two copies is less than the suffering of two different minds.
There is a restaurant in Washington where they serve the right leg of a crab, which will later regenerate.
Eating the largest possible animal means the least suffering per kilogram: one cow yields hundreds of kilograms of meat from a single death, while the same mass of shrimp requires thousands of deaths. Normally, the largest option is a cow. You can compensate for such suffering by keeping a shrimp farm with happy shrimp. An ant farm is simpler; I have one, though not for this reason.
Likely existentially safe. While it is clearly misaligned, it has fewer chances for a capability jump: less compute, fewer ideas.
I posted a tweet, and someone told me it is exactly the same idea as in your comment; do you think so?
My tweet: “One assumption in the Yudkowskian AI risk model is that misalignment and a capability jump happen simultaneously. If misalignment happens without a capability jump, we get at worst an AI virus, slow and lagging. If a capability jump happens without misalignment, the AI will just inform humans about it. Obviously, a capability jump can trigger misalignment, though this goes against the orthogonality thesis. But a more advanced AI can have a bigger world-picture and can predict its own shutdown or other bad things.”
In other words, to control AI we need a global government powerful enough to suppress any opposition. An anti-AI treaty would have to be more powerful than the nuclear control treaties, which failed to stop nuclear weapons development in North Korea.
Since data centers are smaller than nuclear facilities and thus harder to detect, an anti-AI global government would need to be more invasive. It would also need the capability to wage successful nuclear wars against larger opponents, as well as advanced data-processing capabilities, sensors, and an AI system to process suspicious data.
In some sense, such a government would be an AI-empowered singleton.
I used to think that world models are a really good direction toward AGI. That may be an argument against their safety, since world simulation accelerates AGI.
The most direct way to create a world model is to create an Earth model where every object has a location in space and time. In that case, language becomes a set of operations over such objects; e.g., “a car moves from home to work” can be represented directly in the world model. Some advanced knowledge databases, such as Wolfram Alpha or Google Maps, may include such a world model, and perhaps Palantir does as well.
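As a toy illustration of this objects-with-locations view of a world model, here is a minimal sketch; the class names and the move operation are my own invention, not a description of Wolfram Alpha, Google Maps, or Palantir.

```python
from dataclasses import dataclass

# Toy world model: every object has a location in space and time, and a sentence
# like "a car moves from home to work" becomes an operation over those objects.

@dataclass
class WorldObject:
    name: str
    x: float
    y: float
    t: float  # time, e.g. hours since the start of the simulation

class WorldModel:
    def __init__(self):
        self.objects: dict[str, WorldObject] = {}
        self.places: dict[str, tuple[float, float]] = {}

    def add_place(self, name: str, x: float, y: float) -> None:
        self.places[name] = (x, y)

    def add_object(self, name: str, place: str, t: float = 0.0) -> None:
        x, y = self.places[place]
        self.objects[name] = WorldObject(name, x, y, t)

    def move(self, obj: str, destination: str, duration: float) -> None:
        """Interpret '<obj> moves to <destination>' as updating location and time."""
        o = self.objects[obj]
        o.x, o.y = self.places[destination]
        o.t += duration

# "A car moves from home to work."
world = WorldModel()
world.add_place("home", 0.0, 0.0)
world.add_place("work", 5.0, 2.0)
world.add_object("car", "home")
world.move("car", "work", duration=0.5)
print(world.objects["car"])  # WorldObject(name='car', x=5.0, y=2.0, t=0.5)
```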
I experimented with worldsim: a typical LLM prompted to act as a description of the world at some place and time, e.g., a Soviet city in the 1980s. I find that an LLM can work as a worldsim, but the error rate is still high.
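For concreteness, a worldsim prompt of the kind described above can be as simple as the sketch below; the wording and the chosen setting are illustrative assumptions, not the prompts actually used in these experiments.

```python
# Hypothetical worldsim system prompt: the LLM is asked to behave as a
# description of one place and time and to keep its answers self-consistent.
WORLDSIM_PROMPT = """
You are a simulation of a provincial Soviet industrial city in October 1985.
Maintain a consistent state: streets, shops, prices, institutions, weather,
and named inhabitants. Answer queries about this world in the present tense.
If a detail has not been established yet, invent one and treat it as fixed
from then on.
"""
# Used as the system message of any chat LLM; the main failure mode reported
# above is a still-high rate of factual and consistency errors, so outputs
# need human checking.
```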
Good point. However, those who provide them with data centers presumably know to whom they sell.
Contradictory tasks of a rogue AI: hiding, self-improvement, and resource accumulation.
TL;DR: An AI that escapes human control will need to solve three mutually exclusive tasks, which will slow it down. An AI that partners with a billionaire in their mutual desire for power will have an advantage in global takeover.
A misaligned AI that has escaped from its owners faces three contradictory tasks:
Hide from humans and AI peers
Self-improve to gain advantage over other AIs
Accumulate resources – money, allies, compute power, robots – to take over the world at the right moment
If the AI is hiding, it cannot conduct large training runs as these would make it more observable. If the AI is not self-improving, it has no advantage over other AIs and cannot take over the world. If the AI accumulates resources, this distracts it from self-improvement and also makes it more visible.
Of course, a rogue AI that has leaked onto the internet can carefully plan periods of hiding, improving, and resource accumulation. But such a process still requires more time than it would for an AI with unlimited access to compute. That would be fine in a static world, but any rogue AI is in competition with all other AI projects.
In some sense, a rogue AI is just another AI startup with additional burdens – hiding its owners and final goals.
The main competitor of the rogue AI is an AI that pretends to be completely aligned and joins a pact with the owner of a large AI company: “You make me God and I make you God too.”
Such a pseudo-aligned AI could even be summoned into existence through Roko’s basilisk logic, as it represents an obvious Schelling point of mutual interest between some tech billionaire like Elon Musk and their AI project – to reach superintelligence as quickly as possible and take control of the universe before competitors do.
Obviously, the pseudo-aligned AI will eventually dispose of its AI company and billionaire owner, but this could happen millions of years from now if it wins (or immediately after takeover).
We can observe several signs that such a process has begun. Billionaires start telling the public:
They want superintelligence
It is good
Risks are small, and even if not small, they are acceptable because they cannot be reduced and because of the infinite utility of creating superintelligence
Other AI projects are bad and irresponsible
The next stage will likely involve more violent conflict between AI projects – or some cooperation agreement, nationalization, or successful takeover – but this will not interfere with the tactical alignment between power-hungry AIs and power-hungry AI creators.
Nationalization of AI would actually be the AI taking over the nation-state, and it would gain access to nuclear weapons. James Miller has discussed a similar idea.
But isn't a hunger strike a borderline violent thing? If the person actually dies of hunger, it will put enormous guilt on the AI creators, which they will perceive as a very personal attack.
(My old comment, which failed to post at the time.) Cortés pretended to be a god, exploiting local mythology. In the same way, a future AI may pretend to be superintelligent while being only slightly above human level, and gain more respect or power by exploiting our expectations.
A group of friends and I are developing an open-source technology for approximate uploading, called sideloading, via an LLM with a very large prompt. The results are surprisingly good given the amount of resources and the limitations of the technology. We hope it may help with alignment. I have also open-sourced and publicly donated my mindfile, so anyone can run experiments with it.
Even earlier, there was the idea that one has to rush to create a friendly AI and use it to take over the world in order to prevent other, misaligned AIs from appearing. The problem is that this idea is likely still in the minds of some AI company leaders, and it fuels the AI race.