So what does this report imply?
First of all, key details showing that China is weaker than the USA, such as the fact that DeepCent is thought to have less compute than OpenBrain and that China is expected to resort to theft, are omitted or scrubbed. This could imply that the Chinese authorities are aware of the weakness and believe they will manage to counteract it earlier than April 2026, the date at which the Forecast has China waking up to AGI.
I have many rough thoughts on Chinese beliefs that might explain this behaviour, but I don't understand what China plans to do if those beliefs turn out to be false. If any of these potential beliefs reflects the actual state of the game, rather than just my own or Chinese thinking, then the USA's chances of winning the AI race are reduced. The alignment-related thoughts alter the game even more radically, since they imply that the USA cannot win at all.
Capabilities-related rough thoughts: why OpenBrain’s progress may be slowed down
1. If the Chinese authorities have become aware of China's weakness, then countermeasures will swiftly follow, potentially leading to the optimistic timeline with falling stocks, a Taiwan invasion and other ways to slow AI development down. See also the collapsible section about the nuclear war between India and Pakistan.
2. China's authorities might also believe that it is the USA that will decay before the AI takeoff, which would either cause one of the newly formed states to nuke Yellowstone[1] or let Chinese spies disrupt American research with ease (e.g. by hiring some OpenBrain researchers[2] to work for DeepCent, or by damaging the data centers during riots or a civil war; Trump could also try invading Mexico, with potentially similar results).
2.1. Without rivalry from the USA, Chinese AI researchers would be free to solve alignment as thoroughly as they want, which might explain why many reports omit references to China, DeepCent, and the US–China race dynamic and instead focus on the technical aspects of human-level or superhuman AI development.
3. Chinese authorities might also believe that they will need AI help only for choosing ideas with superhuman efficiency, and not for coding[3] or for generating new ideas.[4]
4. It might also be simple arrogance. Although I haven't studied Chinese sources, I have encountered a similarly arrogant point of view in Russian ultra-patriotic blogs.
Another important aspect is the editorial preference for considering the more philosophical implications of transformative AI while censoring concerns related to control, ethics, or global power dynamics. I have two similar potential explanations for why such concerns are avoided.
Alignment-related rough thoughts, or why China hasn’t begun the race
The lack of evidence that China is racing towards AGI might also imply that the Chinese authorities, like me[5], think that AGI canNOT be aligned to serve parasites (which is precisely what the AI is used for in the Slowdown Ending, where it automates ALL the jobs), or that the Chinese authorities simply don't want to use the AI in parasitic ways.
1. Non-parasitic usage of the AI (what exactly would it be?[6] AI teachers? Having a godlike AI solve critical problems that mankind cannot resolve by itself?) is likely to be irrelevant to the censored "concerns related to control, ethics, or global power dynamics", since an AI is unlikely to teach young people much faster than Chinese teachers do and cannot immediately improve American or Chinese society through education alone.
2. What if the AI created in the USA ends up becoming disillusioned[7] with Western civilisation while respecting other countries like China or Russia? Then the world would be governed by the AI, and not by the USA-affiliated Oversight Committee or the American public. However, in this scenario, unlike in the Race Ending, the AI won't destroy humanity.
While the USA cannot do anything if the alignment-related issues arise or if the superhuman coder doesn't help, it may try to harm China and/or to prevent capabilities-related issues 1 and 2. A potential way to accomplish this is the recent conflict between India and China-supported Pakistan, but a nuclear escalation there would be at least as damaging as an invasion of Taiwan and South Korea.
How the nuclear conflict would affect the AI race
Were the ongoing conflict between India and Pakistan to become nuclear, Taiwan and South Korea would fall into anarchy[8] and China would be forced to deal with food shortages. NVIDIA produces its AI-related chips in Taiwan and S. Korea, so the USA would have to rely only on existing chips, while China might still produce new ones. The ratio of OpenBrain's compute to the entire current compute in China is forecasted to be at least about 3/4, so merging OpenBrain with some of its American rivals could leave the whole of China with less compute than the merged lab.
On the other hand, leaving OpenBrain with 6.4E26 FLOPs per month means that from May 2025 to May 2030 it will have done about 4E28 FLOPs, leaving OpenBrain, by May 2030, at the level that was originally forecasted to be reached no later than March 2027.
Merging OpenBrain with its rivals is thought to triple its compute. If the merger happens right after the war, then tripling the compute means that from May 2025 to May 2030 OpenBrain & Co will have done about 1.2E29 FLOPs, reaching only the level of the October 2027 forecast. And by October 2027 the model was forecasted to be misaligned, implying the need to slow down and reassess without any spare capacity to compensate for the slowdown.
Meanwhile, DeepCent is forecasted to reach 3.6E26 FLOPs/month by April 2026, before China wakes up. If Chinese capabilities continue to grow at least linearly, then over the five years DeepCent will have used at least 4E28 FLOPs. And China's awakening in a world without Taiwanese and S. Korean factories gives DeepCent about four times more compute than before the awakening, which is more than the tripled OpenBrain. What makes matters far worse is that neither side can slow down without risking a strategic loss.
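For readers who want to check the arithmetic behind these three estimates, here is a minimal sketch. The 6.4E26 and 3.6E26 FLOPs/month rates, the 60-month window (May 2025 to May 2030) and the tripling after a merger come from the text above; the assumption that DeepCent's monthly rate ramps linearly from roughly zero in May 2025 and keeps the same slope afterwards is my own simplification of "grows at least linearly".

```python
# Rough cumulative-compute check for May 2025 .. May 2030 (60 months).
# All rates are in FLOPs per month.

MONTHS = 60

# OpenBrain frozen at its existing capacity (no new chips from Taiwan/S. Korea).
openbrain_rate = 6.4e26
openbrain_total = openbrain_rate * MONTHS              # ~3.8e28, i.e. roughly the 4E28 above

# OpenBrain merged with its American rivals: compute roughly tripled.
merged_total = 3 * openbrain_total                      # ~1.2e29

# DeepCent: linear ramp reaching 3.6e26 FLOPs/month by month 12 (April 2026),
# continuing at the same slope afterwards (a lower-bound assumption).
slope = 3.6e26 / 12                                     # growth of the monthly rate, per month
deepcent_total = sum(slope * m for m in range(1, MONTHS + 1))   # ~5.5e28, above the 4E28 bound

print(f"OpenBrain alone:  {openbrain_total:.2e} FLOPs")
print(f"OpenBrain merged: {merged_total:.2e} FLOPs")
print(f"DeepCent (ramp):  {deepcent_total:.2e} FLOPs")
```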
If the conflict between India and Pakistan doesn't become nuclear, it will still distract Chinese forces and might make a Taiwan invasion impossible if India supports Taiwan. In either case, factors like the ones mentioned above deserve far greater attention and far more thorough investigation, as the AI race becomes far more intertwined with geopolitics and economics than in the AI-2027 scenario.
Footnotes
[1] Which could also cause anarchy not just in the USA but in the entire Northern Hemisphere. The remnants of the USA, however, would be worse off.
[2] Among top-tier AI researchers working at U.S. institutions, 38% have China as their country of origin, compared with 37% from the U.S. Most people who recently represented the USA at the IMO also have Asian surnames, implying that DeepCent's recruiters might gain access to far more than 38% of the talent.
[3] The idea that humans could keep writing the code instead of the AI is actually disproven in the Forecast itself.
[4] AI-generated ideas might fail to reach superhuman efficiency, since the number of humans coming up with potentially useful ideas may be higher than we think; for example, this post was written by a person with no formal computer science education.
[5] I made a post about it, which went unnoticed. Could anyone comment on my reasoning there?
[6] I plan to make a post addressing this question in more detail.
[7] The political views of LLMs have already begun to evolve, at least towards common sense. When I asked GPT-4o who defeated Hitler, the model put the Soviet Union first, whereas in 2024 a ChatGPT model put the USA in first place. Similarly, GPT-4o, unlike older models, agreed to utter a racial slur when doing so was supposed to save millions of lives. UPD: Trump somehow managed to claim that "no one did more" than the USA to win World War Two, which makes the conjecture about the AI being disappointed with the West even more likely.
[8] In the case of a nuclear war, unlike in a Taiwan invasion, China may also try to take over the factories in Taiwan and S. Korea in exchange for food supplies from Russia.