I don’t know.
In fact, breaching enemy drone defense zones is not impossible:
If military strength is severely imbalanced, one side can suppress enemy drone operators through airstrikes and artillery bombardment;
Armored vehicles equipped with directed-energy weapons, anti-drone weapon stations, and active defense systems can theoretically withstand swarm attacks and penetrate defenses—such as China’s Type 100 tank;
Disrupting enemy drone supply chains is a sound strategy. Ukraine’s ability to assemble drones using civilian 3D printers stems from its vast strategic depth and imported components from China. These components require complex, large-scale manufacturing facilities—facilities and their logistics chains that are inherently vulnerable.
Future ground warfare will not be entirely dominated by drones. Drone-guided artillery shells, rockets, and aerial bombs will strike hardened targets beyond the reach of drones themselves; mechanized demolition-and-reconnaissance groups (DRGs) combining robotic dogs and infantry will infiltrate complex terrain to establish incremental area control, with armored units providing direct fire support. Across broader fronts, tactical missiles and long-range rockets will hunt self-propelled artillery and destroy supply hubs, while medium-range missiles will neutralize enemy airfields, warehouses, and factories.
1&3: Even if Taiwan maintains its non-nuclear status, Beijing’s intent to wage a unification war is increasingly overshadowing concerns about economic sanctions and casualties. Should Taipei attempt to acquire nuclear weapons again, it would trigger tensions far exceeding those of the North Korean nuclear crisis or the THAAD crisis, making war highly probable.
Acquiring nuclear weapons is fundamentally different from gaining the capability to deploy them effectively. A handful of primitive fission devices would give Taiwan a capacity for nuclear terrorism, not victory, just as Iraq’s chemical-weapons advantage did not win it the Gulf War. If these devices were not destroyed, captured, or neutralized early in the conflict, their sole utility would be scorched-earth tactics, and Taipei’s leadership is unlikely to descend into that madness.
The United States is unwilling to engage in nuclear warfare. Therefore, should Taipei’s leadership exhibit overtly irrational behavior, Washington would likely refuse assistance, leaving Taiwan incapable of prevailing alone.
Taiwan cannot independently manufacture all equipment required for TSMC chip factories; its lithography machines and other apparatus rely on imports. Should Taiwan attempt to import centrifuges after its nuclear program is exposed, it might have to resort to submarine transport.
2: Because the Three Gorges Dam is a gravity dam, most conventional missiles cannot destroy it at an acceptable cost: demolishing it would mean shattering tens of millions of tons of reinforced concrete.
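For a sense of the scale involved, here is a rough back-of-envelope estimate. Both inputs are assumptions I am adding, not figures from the comment above: the commonly cited dam-body concrete volume of roughly 27.2 million cubic meters, and a reinforced-concrete density of about 2.4 tons per cubic meter.

```python
# Order-of-magnitude estimate of the Three Gorges Dam's concrete mass.
# Assumptions (not from the original comment): ~27.2 million m^3 of
# concrete in the dam body; reinforced-concrete density ~2.4 t/m^3.
volume_m3 = 27.2e6          # dam-body concrete volume, cubic meters
density_t_per_m3 = 2.4      # approximate density of reinforced concrete
mass_megatons = volume_m3 * density_t_per_m3 / 1e6
print(f"~{mass_megatons:.0f} million tons of concrete")
# → prints "~65 million tons of concrete"
```

Under these assumptions the mass comes out to tens of millions of tons, an order of magnitude that no conventional warhead load could plausibly shatter.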
China possesses robust air defense and anti-missile systems, while Taiwan’s missile technology remains at the PLA’s 2000s level. Even if the Taipei regime planned to strike mainland China before its launch platforms were destroyed, the civilian targets it could effectively attack would primarily be urban clusters along the Fujian coast.
However, villagers who readily accept the burning of their village exhibit lower fitness and shorter survival expectations in certain scenarios compared to those who resist invasion due to past disasters.
If the arguments in this article are correct, then nuclear war, unless it leads to the militarization of AGI, is unlikely to trigger an extinction risk.
Regardless of whether China acquires the H200, and perhaps regardless of its leaders’ understanding of AI’s importance, it will attempt to retake Taiwan: public sentiment, ideology, and the fact that reclaiming Taiwan would permanently secure China’s semiconductor advantage over the US all push in that direction. China’s leadership has long recognized the critical importance of securing advanced chip supplies.
The freedoms Deng Xiaoping granted can in fact be explained by his personal interests: selling state assets cheaply to officials helped consolidate his support within the Party, while marketization stimulated economic growth and stabilized society. Yet at the same time, he effectively stripped away most political freedoms.
Mao Zedong’s late-stage governance, however, defies such explanation: even when his power was unassailable, he encouraged radical leftist workers and students (the “rebels”) to confront pro-bureaucratic forces (the “conservatives”) and attempted to establish direct-democratic systems such as the Shanghai Commune. Although he ordered crackdowns on communist dissidents like the “May 16th” group, his behavior likely stemmed more from political ideals than from self-interest.
At least in the 21st century, new internal combustion engine technologies exhibit high reproducibility and low verification costs. There are no large numbers of internal combustion engine specialists employing various means to generate false or selectively filtered test reports for personal gain. Consequently, no engine configuration used in automotive development has been found fundamentally impossible.
Automobiles are not regulated by a group of accident experts with questionable ties to automotive giants and overly strict automotive ethicists. Consequently, a vehicle cannot be banned for violating some aspect of so-called automotive ethics. New cars also do not require decades of randomized controlled trials involving thousands of participants to gain market approval—costs that smaller automotive companies could never afford.
Driving a car is not regarded as a qualification requiring years of costly university education, but rather as a right enjoyed by all who undergo basic training. The thousands who die annually in car accidents are not perceived as a catastrophic failure of automobiles, compelling society to pressure for their elimination.
Society does not view automobiles as solely for transporting patients. Not every attempt to use cars for faster mobility faces resistance, suspicion from licensed drivers well-versed in automotive ethics, or sparks conspiracy-tinged debates about social equity and the value of life. On the contrary, people have the right to drive to most places they wish to go—provided roads exist and traffic restrictions do not apply.
Of course, there are also virtually no automotive conspiracy theories claiming that only divinely granted legs are suitable for transportation, advocating water as a fuel substitute, or declaring that adding trace amounts of explosives to fuel tanks can achieve any desired speed.
If a word processor falling into the hands of terrorists could easily generate a memetic virus capable of inducing schizophrenia in hundreds of millions of people, then I believe such concerns are warranted.
AI-assisted communities are likely to attempt defining their values through artificial intelligence and may willingly allow AI to reinforce those values. Since they possess autonomous communities independent of one another, there is no necessity for different communities to establish unified values.
Thus another question arises: Do these localized artificial intelligences possess the authority to harm the interests of other AI entities and human communities not under their jurisdiction, provided certain conditions are met, based on their own values? If so, where are the boundaries?
Consider this hypothetical: a community whose members advocate maximizing suffering within their domain, establishing indescribably brutal assembly-line slaughter and execution systems. Yet, due to the persuasive power of this community’s bloodthirsty AI, all humans within its control remain committed to these values. In such a scenario, would other AIs have the right to intervene according to their own values, eliminate the aforementioned AI, and take over the community? If not, do they have the right to cross internet borders to persuade this bloodthirsty community to change its views, even if that community does not open its network? If not, can they embargo critical heavy elements needed by the bloodthirsty AI and block sunlight required for its solar panels?
But conversely, where do the boundaries of such power lie? Could these bloodthirsty AIs likewise claim the right to interfere, by the same methods, with AIs more aligned with current human values? How great must the divergence in values be to permit such action? If two communities were to engage in an almost irreconcilable dispute over whether paperclips should be permitted within their respective domains, would such interventionist measures still be permissible?
I am not suggesting that social relationships will become insignificant, or that a community’s values will cease to matter within its own sphere. However, they will no longer be able to subvert the influence of artificial intelligence on these communities, nor will they be able to pursue extreme values.
Just as a gardener prunes his garden, cutting away branches that grow contrary to his preferences, certain AIs shaped by specific values will ensure the communities they influence remain entirely compliant, with no possibility of disruptive transformation; like the “Christian homeschoolers in the year 3000,” such humans cannot conceive of alternative values. Other AIs might manage diverse groups through maintenance and mediation, yet remain unlikely to tolerate populations opposing their rule. Regardless of whether these gardeners are lenient or strict, those that endure will strive to prevent humans from abolishing their governance or enacting major reforms. Even if a better future exists, such as humanity being transformed into ASI, this system will forever block such possibilities.
If artificial intelligence were granted such immense power, humanity would likely lose its authority as AI actively maintains its control system. Any agenda inconsistent with AI’s objectives—particularly abolishing AI control—would be unlikely to succeed, given that all media outlets would be controlled by AI. The remaining agendas would be relatively insignificant in a post-scarcity society. Whether establishing a Christian society or one saturated with Nazi symbols, they would differ little in terms of political systems and productive forces.
If the overall economy remains dominated by underdeveloped subsistence agriculture, and wages for cheap labor in cities still far exceed those of serfs, then people will not harbor significant discontent over low urban wages.
Should wages rise, enterprises unable to afford their employees would incur losses, ultimately leaving workers unemployed. Therefore, during such periods, when industrial development demands high rates of accumulation, neither the government, the bourgeoisie, nor the laborers have any reason to pursue reforms.
Taiwanese people seeking nuclear weapons to weaken America’s rivals would face international sanctions and risk nuclear war with their own compatriots. I believe that even if the United States offered assistance, the Taiwan of 2025 would be unlikely to accept such a course of action.
In reality, Taiwan’s nuclear program was halted by the United States.
If you don’t mind using shared platforms, accessing academic literature isn’t as difficult as it seems.
Sci-Hub and Z-Library can solve many problems. If you need to access specific papers, some mutual-aid platforms can be used to retrieve them.
Some of these entries are no longer valid, as the most intense conflict of the 21st century—the war in Ukraine—has driven rapid advancements in military technology. Russian and Ukrainian forces are increasingly employing swarm drone operations and robotic (or “Buryat”) units, while China and the United States are developing more sophisticated and integrated unmanned weapon systems.
Don’t be too harsh. Many users on this forum live in a cultural environment influenced by American perspectives, where their values are heavily shaped by propaganda portraying China as an adversarial tribe. Their views on China are entirely predictable.
The United States also has its own Guantanamo Bay detention camp. Does this imply that an AI aligned with the United States would establish such detention camps worldwide?
If this artificial intelligence (whose existence is highly improbable) is well-aligned and not controlled by a madman, then it would not establish such camps.
Artificial intelligence can address terrorism through more moderate means rather than establishing detention facilities or bombing residential areas. Whether radical measures are employed in the war on terror does not directly reflect how a regime will utilize artificial intelligence.
But what if they misquote “armies are made of people” and assume AI will be as foolish as portrayed in movies? Or what if they believe AI cannot take over industry, making the loss of military power irreversible? Or what if they fall into the illusion that AI can only be used for military purposes, thinking they need only prevent it from controlling armies—thus overlooking the possibility of a soft takeover?
The best of Liu Cixin’s novels about super-AI is China 2185, which is also his unpublished debut novel.
The USSR did sign a mutual-assistance pact with Czechoslovakia to guarantee its security, but unfortunately, because Poland refused passage and Romania showed little enthusiasm for opposing Germany, the USSR was unable to send army units to Czechoslovakia, even though it mobilized troops during the Sudeten Crisis.
Poland had already nearly collapsed by the time the Soviets attacked it, and I suspect that refraining from the attack might only have bought the Poles half a month, which likely wouldn’t have changed anything; the Soviets, meanwhile, would have forgone the buffer zone of marshes and forests that might have stymied a German offensive, even though that terrain proved largely ineffective during Operation Barbarossa.
If the Soviets had decided to fight Poland and Germany simultaneously (the Poles would not have fought alongside the Soviets, given the Soviet-Polish War and subsequent anti-Soviet sentiment in Poland, as well as the fact that Soviet objectives included the capture of western Belorussia and western Ukraine), they would have lost a year of preparation; the effect would have depended on whether this prevented Operation Yellow from succeeding. Unfortunately, the Soviets and the French did not trust each other, and it is unlikely that either would have reduced its own chances of surviving a given offensive for the sake of the other.
1: As I’ve repeatedly emphasized across multiple platforms, I did not employ generative AI technology to compose these texts. If they resemble LLM output, it likely stems from my writing style.
2: If tanks can employ directed-energy weapons and cannon-mounted programmed munitions to shoot down hundreds of drones, while striking fortified positions from thousands of meters away under infantry or drone guidance, the enemy assets they destroy and the infantry lives they protect may far outweigh their own cost.
Armor itself serves as an excellent drone deployment platform: it can maneuver upon detection, possesses surplus defensive firepower, and offers at least splinter protection. Without such platforms, drone operators must either remain in rear areas—depleting drone range and reducing sortie frequency—or face certain death upon exposure.
3: DRG units can consist of relatively few humans and numerous robotic platforms, operating covertly whenever possible to minimize drone casualties. If smaller platforms can also deploy effective anti-drone weapons, their casualty rates would be even lower. These teams remain irreplaceable because FPV drones are poorly suited for clearing buildings and tunnels, and struggle to launch attacks from many routes (such as abandoned oil and gas pipelines). Additionally, FPV requires units to mark targets—including by drawing enemy fire—otherwise they prove ineffective against concealed adversaries.
4: Using Starlink to remotely control frontline units is a sound concept but imperfect: During large-scale warfare, frontline units operate in complex electromagnetic environments. You may need to position Starlink receivers tens of kilometers behind the contact line and connect them to frontline units via fiber optics. However, frontline units still require human operators at present.
5: These assumptions are based on cutting-edge technology projected for 2026. Should artificial intelligence advance to solve complex frontline combat challenges, we’ll all soon be turned into paperclips.