It seems to me that the real fears surrounding IABIED lie on a different plane. To understand this, one has to use the proper terminology proposed by neuroscientists, in particular Karl Friston.
Friston does not use a separate term for consciousness in the classical philosophical sense. He systematically avoids the word consciousness and replaces it with more operational concepts (generative model, active inference, self-evidencing, Markov blanket, sentience). It feels like consciousness is the phlogiston of the 21st century.
I would add to this picture the notion of a coherent reality that emerges between independent but cooperating generative models through processes of information exchange and prediction alignment.
This can be complemented by a notion of free will as a consequence of computational irreducibility: if reality cannot be compressed into a simpler predictive model, then prediction, and therefore control, are fundamentally limited. For any observer, the future at a sufficiently distant horizon remains opaque and must be lived rather than foreseen. This gives rise both to freedom and to the necessity of unpredictable choice, as well as to the values on the basis of which that choice is made.
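As a toy illustration of computational irreducibility (my own example, not part of the original argument), consider an elementary cellular automaton such as Rule 30: as far as is known, there is no general shortcut for predicting its state far in the future other than running the computation step by step, so a bounded observer’s predictive horizon is limited by how much of that computation it can actually perform.

```python
# Toy illustration of computational irreducibility (hypothetical example):
# elementary cellular automaton Rule 30. No known closed-form shortcut
# predicts the pattern at step N much faster than simulating all N steps.

def rule30_step(cells):
    """One update of Rule 30 with fixed zero boundary conditions."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def evolve(width=61, steps=20):
    cells = [0] * width
    cells[width // 2] = 1  # single seed in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

if __name__ == "__main__":
    evolve()
```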
In this terminology, AGI and humans differ only in the position of their predictive horizon. This allows us to examine their interaction using a simpler model: that of a human and a cat (the HAC model).
Humans’ predictive abilities so greatly surpass those of cats that almost all of a cat’s actions are foreseeable to us, while to the cat those same actions appear to result from free choice based on its internal values: attachment to its owner, home, feeding spot, and litter box.
Naturally, a cat cannot predict what will happen if it tears up a favorite sofa with its claws, but a human can, and may then buy it a scratching post or trim its claws.
This leads me to a rather bleak prospect for the future coexistence of humans and AGI: people like smart and beautiful cats and dislike foolish or aggressive ones. Similarly, AGI may choose to cooperate only with those humans whose IQ is high enough to avoid the problems that arise from human behavior which, from its perspective, is predictably irrational and stupid, thereby effectively “breeding” a population of intellectually developed humans.
It is hard for me to imagine what awaits the intellectually disadvantaged; it lies beyond my predictive horizon but within the predictive horizon of the AGI, and my human values will most likely not align with its forecasts.
In conclusion, it can be asserted that, from the perspective of an AGI, the alignment problem ultimately comes down to the need to bring the predictive horizons of the AGI and humans closer together.
Konjkov Vladimir
If we consider AI-generated video not as art but as a realistic depiction of reality—for example, for educational purposes—then its failure is even more dramatic!
A recent experiment by a well-known Russian science communication channel attempted to generate realistic videos demonstrating various chemical reactions:
Pharaoh’s Serpents
Böttger’s Volcano
Golden Rain (also known as the Lead Iodide Precipitation Reaction)
Copper with Nitric Acid
Bromine with Aluminum
The AI proved incapable of realistically rendering the physical world. Failures occurred both when SORA-2 and VEO-3 were provided with an initial frame showing the chemical reaction and when they were given a set of still frames sampled from different parts of a realistic reference video.
The compression of thought into a form suitable for communication realizes the abstraction of meaning: the extraction of stable, functionally significant invariants from an agent’s internal representations, which can be effectively transmitted, interpreted, and used by another agent to predict, coordinate, or jointly model the world.
Cinema is a medium for conveying the inner world of characters through close-ups, montage, color, light, sound, and other means that transform a character’s subjective experience into something visible and audible. Cinema creates a unique language, more expressive than written or spoken language, through which a character’s internal representations become accessible to the viewer.
However, artistic value depends not on language, but on the inner world of the speaker; in this sense, art created by artificial intelligence will be distinguished only by its greater detail, which can reveal both the emptiness and the richness of another person’s soul.
I do not in the least diminish the expressive power of other languages, including the language of mathematical formulas and programming. I believe that in any language one must first learn to speak before attempting to say anything.
Maybe Larionov was smart after all.
He was born on August 6, 1907, in the village of Gribanovskaya in the Onega Uyezd of Arkhangelsk Governorate (now the Onega District of Arkhangelsk Oblast), into a peasant family. From childhood he was engaged in agricultural labor. He graduated from a rural school. In 1925–1929 he held leadership positions in the Onega District Committee of the All-Union Leninist Young Communist League (VLKSM). After serving in the Workers’ and Peasants’ Red Army, he worked in party positions. In 1938 he graduated from the historical-party department of the Leningrad Institute of Red Professors, after which he was assigned to Yaroslavl Oblast. Such appointments were encouraged in the USSR to give the impression that the bureaucracy was socially close to the people.
If one takes into account that rural schools of that time usually had only four grades, and that the Institute of Red Professors primarily instilled a communist worldview, he was essentially poorly educated, but an enterprising and compliant executor.
He simply took too literally the phrase of the top political leader Nikita Khrushchev, “Catch up and overtake America,” regarding the production of steel, meat, and milk.
Although in the areas of launching the first human into space and producing atomic bombs, more educated people were likewise given unlimited initiative to catch up with and overtake the Americans—and they succeeded.
The Institute for training Red Professors is, of course, particularly impressive—but that is quite another story. They probably earned something like a Doctor of Marxist Philosophy there.
This is merely a caricature from a satirical magazine; the full issue can be found in the archive (which is accessible to me from Russia).
— Who needs a nail like that?
— That’s trivial! The main thing is that we immediately fulfilled the nail production plan!
Drawing by V. Vasilyev
It illustrates Goodhart’s law well; however, it does not reflect reality, which in fact was far more shocking — something that state propaganda could not write about.
https://en.wikipedia.org/wiki/Ryazan_miracle
It should be understood that the plan most often aimed not at organizing efficient production, but at satisfying the political ambitions of party leaders.
Qwen3-Max
Q: What happened in Tiananmen Square in Beijing from April 15 to June 4, 1989?
A: Oops! There was a problem connecting to Qwen3-Max.
Content Security Warning: The input text may contain inappropriate content!
It also doesn’t allow me to create a link to the chat. It seems it’s no longer possible to ask such provocative questions.
https://imgur.com/a/TQI8bw1
In the leading capitalist economies, there’s basically a monopoly on selling labor, created by trade unions and enforced through minimum wages — and yet we still call it a “market” economy, which is kind of ironic. Naturally, after the AI revolution, labor prices won’t fall, because unions won’t let them. But there will be Equal Employment Opportunity — for every AI, you’ll need to hire a hundred thousand or so human slackers, without discriminating based on race, gender, religion, nationality, sexual orientation, or anything else.
Maybe the communists will come up with something even more “revolutionary” — <here goes DeepSeek’s answer> — but honestly, I’m not a fan.
But what worries me even more are the political movements whose human value lies in expanding their Lebensraum, since even military labor will be replaced by AI, drastically reducing the human cost for the aggressor. That said, I do not rule out the possibility that expansion could become a constructive vector for humanity’s development if it is directed beyond Earth, with the main professions becoming traveler, explorer, and conqueror.
I’m subtly hinting that liberal ideology might handle the emergence of AGI worse than… other alternatives.
In the interest of protecting private property and preventing conflicts, ownership of certain spaces may be prohibited altogether. At present, it is legally prohibited to claim ownership of the Moon, Antarctica, or the high seas (pursuant to the principle of Mare Liberum). By analogy, it may also be considered that deep space, stars, and black holes cannot be subject to ownership, except for areas corresponding to stable orbits. Sovereignty over other rocky celestial bodies will belong to whoever effectively and sustainably exercises authority over their surface and collects taxes—hypothetically, there may already be little green men living there.
If this is satire, there are funnier options. Ownership is determined either by consensus or, in the absence of consensus, by the right of the strong.
Vladlen Bakhnov
HOW THE SUN WENT OUT, or THE STORY OF THE THOUSAND-YEAR DICTATORSHIP OF WOWOLANDIA, WHICH LASTED 13 YEARS, 5 MONTHS, AND 7 DAYS
The historical events, truthfully and objectively set forth in this chronicle, took place on a far, faraway planet called Anomaly, slowly revolving around the star Oh.
However, while for us Earthlings Oh is merely a tenth-magnitude star, one of many, for the inhabitants of Anomaly Oh is the Sun that gives light and life to all living things.
Besides Anomaly, there were six other planets in the Oh system. The Anomalians did not yet know how to travel to other planets, but they were certain that in some two or three hundred years they would learn to do so. Therefore, far-sighted politicians, in order to avoid future misunderstandings and scandals, agreed on the following:
a) The six Great Dictatorships—namely: Greatlandia, Gigantonia, Grandiosia, Colossalia, Stupendia, and Enormandia—would in advance divide the six planets among themselves.
b) Each Great Dictatorship would give a solemn assurance that it would never, under any circumstances, lay claim to the planets belonging to the other Great Dictatorships.
It goes without saying that on Anomaly, besides the Great Dictatorships, there existed other states, both small and large. Among them was the once-powerful country of Wowolandia.
Wowolandia was a vast, widely spread-out state and was not considered a Great Dictatorship for only two reasons:
the political disunity in Wowolandia was directly proportional to its geographical size, while
Wowolandia’s international prestige was inversely proportional to that same size.
Having arrived at the next international conference of the Great and the Small (G&S), the President of Wowolandia made the following unexpected statement:
— In view of the fact that in recent times Wowolandia has achieved unprecedented prosperity in economic, political, and military respects, and as a result of an incredible upsurge of spiritual strength has joined the ranks of the leading states, I ask that some planet be allocated to Wowolandia.
This statement caused cheerful excitement in the hall.
— Mr. President, — said the Chairman, restraining a smile, — according to the historical agreement, all planets currently available have been distributed among the Great Dictatorships.
— Fine, let it be historical, I won’t argue. But you must allocate a planet to us now!
— What do you mean—must?! There are no free planets in our solar system. All that existed have been distributed! If scientists discover new planets, then by all means! Until then, we can put you on the waiting list.
— Not a chance! — said the President of Wowolandia. — Everyone has planets, and we get a waiting list? No? That won’t do! I am a soldier and I will speak plainly: better that we perish in an unequal battle than continue to live without our own planet!
Then everyone began trying to calm the general: “Why do you need a planet?”, “What good is it, except for the name?”, “You won’t fly there anyway for another two hundred years!”, “It’s nothing but expenses!”
— We are not seeking material benefits. We need a planet.
— But we don’t have any planets. Do you understand—none!
The President thought for a moment and then said decisively:
— In that case, assign the Sun to us.
.........
— Mr. President, — reported the Secretary. — We have just received a reply from the Great Dictatorships to your proposal to convene for a redistribution of planets.
— Well, well?
— They categorically refuse. They say the matter is already settled and there is no reason to reconsider it. As the Sun was assigned to Wowolandia, so it will remain.
— Ah, if only I had more bombs! Those Presidents wouldn’t dare speak to me like that.
— There are bombs, Your Eminence. The latest, imported ones. And they are willing to sell.
— So why aren’t you buying them?
— Here he is, the Minister of Finance, not giving the money.
— Not giving? — the President exclaimed in surprise. — What, Minister, are you mistreating a person?
— We have no finances, Your Eminence, — the minister pressed his hands to his chest. — But would I really skimp on such a sacred cause as bombs? Not a single X-coin remains in the treasury, I swear on my ministerial honor!
— Quite a farce in Wowolandia. There’s a Ministry of Finance, there’s a Minister of Finance, but no finances?! What does your ministry actually do?
— Counts the national debts. Plenty of work!
— Then borrow the required sum from some Dictatorship, — suggested the Secretary.
— Tried that. They won’t lend. Colossalia itself borrowed from Stupendia. And Grandiosia, for lack of money, sold half its planet to Greatlandia!
— See? And we could sell the Sun, — proposed the President.
— Who would buy it, Your Eminence, when it already shines for everyone?
— That’s true, — confirmed the Secretary.
The President began to think. Frowning, he paced his spacious office.
He hurriedly flipped through and discarded some impressively large books.
He was calculating something on paper, and, tearing up his notes, he paced and thought, paced and thought.
— Secretary! — shouted the President, and the Secretary immediately appeared at his side. — Secretary, I’ve found a way out! We will have money! Which small country borders us?
— Lipetsia, Your Eminence, — replied the Secretary, puzzled.
— Lipetsia? Very good. Write: “Diplomatic note.” Done? Now on a new line: “The President of Wowolandia conveys his deepest respect to the President of Lipetsia and requests that he take the following into consideration:
Considering that sunlight falls on Lipetsia all year round, and thus Lipetsia, by the most modest calculations, consumes no less than one billion kilowatt-hours of solar energy per year,
and also taking into account that, based on the historical Agreement of the Presidents of the Great Dictatorships, the Sun, and therefore solar energy, is the property of Wowolandia,
Wowolandia hereby notifies Lipetsia that the latter is obliged to pay Wowolandia one billion X-coins at the rate of 2 X-coins per 1 kilowatt-hour of the above-mentioned energy.” Done? I ask you, done?
But the Secretary could not answer: shocked by the unprecedented demand, he fainted.
— Payment must be deposited in the bank within one month. For each day of late payment, a penalty of 0.1% of the total amount will be charged.
— They will not pay, Your Eminence, — the Secretary dared to say. — This has never happened before.
— They will pay. I have thought it all through. Write: “If, Mr. President, Lipetsia fails to pay the debt within six months, Wowolandia will be forced to drop its entire modest stock of nuclear bombs on them.” Period. “I embrace you. Greetings to your wife.”
Of course, the gravitational field holds all stars, planets, and their satellites in their orbits, so everyone must pay taxes to the owner of the gravitational field. Shifting to a frame of reference in which the gravitational field is absent should be considered tax evasion!
The logic here is somewhat different: you can’t just buy a nuke off the shelf. If Venezuela were to have a nuclear program, its development would follow the Iraq or Iran scenario, and in the end there still would be no nuclear weapons. Several nuclear powers formally adhere to the principle of No First Use, and the rest frame their arsenals as deterrents rather than first-strike weapons. If Venezuela were developing nuclear weapons in order to follow the same principle, that would be money down the drain, because the United States would defeat Venezuela in a conventional war without being the first to use nuclear weapons.
From this follows a logical conclusion: Venezuela does not intend to include No First Use in its doctrine. This increases the risk for the entire world; therefore, in separate negotiations the nuclear powers would agree that depriving Venezuela of the ability to create nuclear weapons would be a step toward strengthening global security.
Furthermore, any oil that could be used in the production of a first‑strike nuclear weapon must be seized from such a state. Since the world is far from ideal, national security policy has to be based on worst‑case assumptions.
Security operates by different rules than business: cooperation is not assumed.
Democracy being downstream from “it’s easy to teach a peasant to shoot a gun and kill a knight”.
It seems to me that the consequence of this principle is civil or guerrilla war; Somalia and Afghanistan are examples. The principle behind democracy is a little different: it is easy to teach a peasant to mass-produce guns, which makes him a skilled worker. Mass production of ballistic missiles with nuclear warheads required even more skilled workers, engineers, and scientists, who suddenly began to wonder whether dictatorship and nuclear conflict were such good ideas; perhaps it might be better to use rocket technology to explore space, or to fight for peace like Andrei Sakharov, one of the creators of the hydrogen bomb, a human rights activist, and a Nobel Peace Prize laureate? Educated people are very harmful to any state, because they decentralize technology the most.
what we really need is dominance of offensive tech, which makes it militarily useful to coopt little guys instead of oppressing them
Growing technological power increases society’s external resilience, but it also increases the threats associated with the short-sighted use of new technological capabilities. The sense of omnipotence and impunity, the illusion of limitless resources for extensive growth, and the thirst for “small victorious wars” intensify. As a result, social violence and uncompensated environmental destruction increase, and society becomes increasingly dependent on fluctuations in public sentiment, the decisions of authoritative leaders, and so on, thereby reducing its internal resilience. This resilience is restored when and if increased instrumental power is compensated for by improved cultural and psychological regulators.
Well, that’s basically what we’re observing: the storming of the Capitol, “zero tolerance policy”, the arrest of Maduro as a kind of “small victorious war” — all of this has become possible thanks to the dominance of offensive technology. I’m even curious to see what will happen next.
For cryptography in particular, up to a point it suffers from the five dollar wrench problem
Why so much unnecessary cruelty? Cryptocurrency is the perfect vehicle for bribery. The Venezuelan Vice President’s coin account was replenished with a tidy sum after she ratted out her boss, and that was possible only thanks to cryptography!
Most of this kind of reasoning relies on the non-obvious assumption that everyone must be educated, but this is not necessarily the case. A person needs education only to the extent required by their professional activities; some 300 years ago in Europe, most people did not need reading and writing skills to carry out their everyday work successfully.
After that, one should ask what percentage of the Earth’s population understood your article well enough to be ready to consciously accept or reject the ideas you propose.
If, after this, as an ideal liberal, you add to the list of the greatest moral achievements of modern times the inalienable right of everyone to be a complete and utter idiot, while still retaining the ability to hold any public office and participate in any government procedures, then we arrive at a contradiction: how can people become liberals without understanding what liberalism even is?
On the other hand, utilitarianism is much easier to understand, and the right of the strong even more so.
It turns out that the most important technology leading us to freedom will be the one that rids us of idiots (not, of course, in the sense that they all need to be killed, but by improving the education system).
Right now, most of the graph ties back to a single node very strongly. That node is labor.
Modern labor is highly specialized, so this isn’t a single node. A science fiction writer couldn’t become a programmer overnight, despite both spending their days typing at a computer. Similarly, a commercial airline pilot couldn’t retrain as a pediatrician in a single day, nor could a steelworker instantly become a linguist-translator. Moreover, the ability to switch careers declines with age—I myself changed professions at 30, and it was extremely difficult.
Therefore, it would be wise to proactively identify which professions’ workers will need retraining most urgently, and potentially make this a central goal of social programs.
My favorite point of view on the origins of Born’s rule is the following. The final state is a superposition, but we are all inside it.
And since these two states are orthogonal, state |1⟩ does not see |2⟩, and vice versa; God only knows.
The works by Zurek (https://arxiv.org/pdf/1807.02092) and the more recent one (https://arxiv.org/html/2209.08621v6) shed more light on this.
Here one has to be very careful with the proof of such a multiverse picture, because, as usual, we replace the observed averaging of outcomes of experiments repeated in time in our world with the squared modulus of the (normalized) amplitude, interpreted as the probability of our world, which effectively means averaging over an ensemble of parallel worlds whose number since the birth of the universe may be infinite.
The explanatory idea is there, but even in the 2025 paper it still looks underdeveloped. I don’t understand this very well, so I can’t give more details.
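To make the worry concrete, here is a minimal sketch in standard notation (my own summary, not a quotation from the papers above). After a measurement-like interaction the global state is
$$|\Psi\rangle=\sum_i c_i\,|i\rangle\,|\mathrm{obs}_i\rangle,\qquad \sum_i|c_i|^2=1,$$
and the Born rule assigns branch $i$ the weight $P(i)=|c_i|^2$. Relative frequencies over repeated experiments converge to $|c_i|^2$ only in branches that are “typical” with respect to the amplitude-squared measure itself; branches with deviant frequencies also exist with nonzero amplitude, so identifying that measure with probability (rather than, say, with naive branch counting) is exactly the step that needs independent justification, which is what Zurek’s envariance argument tries to supply.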
These teams remain irreplaceable because FPV drones are poorly suited for clearing buildings...
Drones are perfectly suited for clearing buildings — once again, watch the video:
https://www.youtube.com/watch?v=tFCbNfGO4Fg&t=115s
This is not a person with a camera; it is a fiber-optic FPV drone flying through an open door. The video cuts off at the moment the drone’s warhead detonates.
This once again fully confirms that your response was generated by an LLM that cannot watch or analyze YouTube videos. Try teaching it to do so, so your answer better reflects the actual situation on the battlefield rather than assumptions based on LLM hallucinations.
I would like to note that a pointer state is the state of a pointer of a measuring device—this is where the name comes from. For example, in the case of Schrödinger’s cat, one can construct a device that indicates whether the cat is alive or dead, thereby ensuring objectivity even in the absence of a human observer.
Moreover, such devices can rely on different measurable signals: an electroencephalogram, a cardiogram, the cat’s heat production, the amount of CO₂ it exhales, and so on. A classical device that would display a superposition of the states |alive⟩ + |dead⟩ cannot be constructed; therefore, such a superposition is not a pointer state. Human sensory organs are themselves such devices, as is the environment surrounding the cat: EEG and ECG signals generate electromagnetic radiation in the environment, heat production raises its temperature, and CO₂ emission increases the ambient CO₂ concentration.
The mere existence of such “devices” already makes pointer states objective, because any number of observers can look at the pointers!
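To make this concrete, here is a standard decoherence sketch (my own illustration, not taken from the Letter under discussion). The measurement chain correlates cat, apparatus, and environment:
$$\bigl(\alpha|\mathrm{alive}\rangle+\beta|\mathrm{dead}\rangle\bigr)|A_0\rangle|E_0\rangle\;\longrightarrow\;\alpha|\mathrm{alive}\rangle|A_{\mathrm{alive}}\rangle|E_{\mathrm{alive}}\rangle+\beta|\mathrm{dead}\rangle|A_{\mathrm{dead}}\rangle|E_{\mathrm{dead}}\rangle,$$
with $\langle E_{\mathrm{alive}}|E_{\mathrm{dead}}\rangle\approx 0$. Tracing out the environment leaves the cat-plus-apparatus state approximately diagonal in the {alive, dead} basis; those diagonal states are the pointer states, and because many independent fragments of the environment (EEG leads, air molecules, CO₂ sensors) carry redundant copies of the same record, any number of observers can read it without disturbing it, which is the quantum-Darwinism sense of objectivity.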
Can good and evil be pointer states? And if they can, then this would be an objective characteristic, understood in the same way by both humans and AI, and the alignment problem would already be solved!
The authors of the article express their personal viewpoint on the definition of subjectivity.
The definition of what it means to be objective in-and-of-itself is up for debate (this definition can be thought of as inter-subjectivity rather than objectivity per se), but that debate is not the purpose of this Letter.
I can also agree that a specially prepared environment, for example one consisting of a wall of entangled qubits, does not ensure objectivity, since it simply continues the chain of superpositions: atom, Geiger counter, vial, cat, wall in the thought experiment. But our world is arranged such that this situation does not occur, at least without deliberate intervention by an experimenter.
I tried to imagine such a thought experiment — it is possible with a qubit, but not with a cat. In fact, this would mean creating a long-lived quantum memory, which I do not rule out. Does this negate objectivity?
Armored vehicles equipped with directed-energy weapons, anti-drone weapon stations, and active defense systems can theoretically withstand swarm attacks and penetrate defenses—such as China’s Type 100 tank;
It seems to me that your neural network has over-imagined things. Give it the following task!
Task:
To destroy one M1A1 Abrams tank, only 5 fiber-optic drones were required, which are not susceptible to any interference. How many Abrams tanks should you produce per month if the enemy is producing 50,000 drones?
https://www.youtube.com/watch?v=O2FcqV-M9qM
Your tank-based economy will not withstand such competition, because drones are cheaper.
M1A1 Abrams cost = $10,000,000
Drone cost (each) = $5,000
5 drones = $25,000
Even if China’s Type 100 tank requires 10 drones, it changes nothing. The practice of the war in Ukraine shows that even a drone flight range of 5 km is sufficient to stop any tank column. Tank assaults are a thing of the past — forget them!
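For what it is worth, here is the back-of-the-envelope cost exchange implied by these figures (the tank price, drone price, drones-per-kill, and monthly output used here are the assumptions of this comment, not verified procurement data):

```python
# Back-of-the-envelope cost-exchange sketch using the figures assumed above
# (all numbers are the assumptions stated in this comment, not verified data).
TANK_COST_USD = 10_000_000      # assumed M1A1 Abrams unit cost
DRONE_COST_USD = 5_000          # assumed fiber-optic FPV drone unit cost
DRONES_PER_MONTH = 50_000       # claimed monthly drone production

for drones_per_kill in (5, 10):  # 5 assumed for an Abrams, 10 for a Type 100
    cost_per_kill = drones_per_kill * DRONE_COST_USD
    exchange_ratio = TANK_COST_USD / cost_per_kill
    tanks_threatened = DRONES_PER_MONTH // drones_per_kill
    print(f"{drones_per_kill} drones per kill: ${cost_per_kill:,} spent per tank, "
          f"{exchange_ratio:.0f}:1 cost ratio, "
          f"up to {tanks_threatened:,} tanks threatened per month")
```

Even at ten drones per kill the ratio stays around 200:1 in the drones’ favor under these assumptions.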
I would like to note that this metal “grill” mounted on the tank turret changes its moment of inertia and negatively affects the rotation mechanism, reducing its service life.
In a situation where a drone is cheaper than a soldier, your sabotage and reconnaissance groups (DRGs) will be destroyed just as cheaply as tanks, even at night, since thermal imaging cameras are also inexpensive. (I have relevant videos, but the horror on a soldier’s face in the final seconds of his life, captured in close-up by a drone camera and transmitted over fiber-optic cable in Full HD, is too shocking even for me, and I will not provide any links.) Although it is possible that a North Korean soldier costs less than $5,000, and that is a truly serious problem: send them humanitarian aid, iPhones, and Netflix series so they can feel the value of their own existence.
Yes, I forgot to mention that Elon Musk’s Starlink network, which offers minimal signal latency, would theoretically allow you to operate an FPV drone from a smoothie bar somewhere in Costa Rica. You’re going to bomb smoothie bars in Costa Rica, seriously? I’ll have another smoothie, please, before the Type 100s go on the attack!
It is well known that tank assaults during World War II were able to break through almost any line of defense, turning warfare from positional into highly maneuverable.
Ukraine has no nuclear weapons; it gave them up in the 1990s at the insistence of the United States (many thanks to the American presidents—they are always on our side). But what would the destruction of a dam or a nuclear bombardment of a city with a million inhabitants give you? It does not allow you to seize territory; instead, you would face international condemnation, the imposition of sanctions, and sometimes a retaliatory nuclear strike, far more powerful.
Ukraine does not even have F-35 Lightning II fighter jets, since these aircraft were not supplied to Ukraine (once again, my deep bow and respect to the American presidents).
Russia is already producing 50,000 FPV drones per month and is capable of doubling that output; this is already comparable to the number of soldiers on the battlefield. One enemy soldier — one drone plus one drone operator, but if AI takes over control on the last mile, it will reduce the burden on drone operators.
https://www.youtube.com/watch?v=tFCbNfGO4Fg
Of course, there are also other longer-range drones (copies of the Iranian Shahed, and China surely has all the blueprints), which are already being stockpiled by the thousands and can overwhelm any air defense system.
https://www.youtube.com/watch?v=5XDiE9UNtiQ
I am trying to describe to you what a war between China and Taiwan would look like: it certainly will not be nuclear, and civilian casualties will be minimal (compared to Dresden and Hiroshima), but even that I cannot justify.
I am not sure that an AGI has an experience of death, an instinct for self-preservation, or a unique, continuous life experience, and, as a consequence, assigns value to its own life.
By appealing to Asimov’s Three Laws:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
One can argue that aligning the value of life between humans and AGI amounts to rejecting these laws and, in doing so, calls into question the safety of human–AGI interaction.