X: @koanchuk
It seems like the kind of thing that could happen if you have an AI synthesize a target list by querying a database that happens to contain both contemporaneous and outdated intelligence on some subject.
Do you think an LLM wrote the target selection list that led the US military to obliterate a girls’ elementary school (which was an IRGC base up until 15 years ago)? Did an AI agent distally cause the killing/maiming/lifelong traumatization of hundreds of civilians, including children?
I found something interesting in NVIDIA’s 2026 CES keynote: their Cosmos physical reasoning model apparently referring to itself/the car as “the ego” in a self-driving test. See here.
I appreciate the historical research, but insofar as all of your examples are of interstate conflicts, we have diverged far from the original context of revolutionary calculus.
The key differentiator is that the incumbent’s army is not “vanquished” by a successful revolution; it is absorbed by it after an internal transfer of political legitimacy away from the incumbent and toward the revolution.
This process can be bloodless. The fall of the GDR comes to mind, where mass demonstrations generated their own political legitimacy through sheer numbers, inducing restraint from security forces even though, on paper, those forces (in aggregate) outnumbered the demonstrators, outgunned them, and basically outmatched them on every tactically relevant metric.
Returning to the original point of this post, this ultimately human mechanism whereby security forces hesitate when faced with a popular uprising of sufficient size (e.g. “there are so many, and they’re not scared, maybe they’re right”, or “maybe some of my neighbours/friends/relatives agree with them or have joined in”) can cease to exist in a regime where advanced AI is in charge of securing the state.
You and everybody else lose the two levers you could historically use to protest the worst indignities and abuses of power the state can subject you to: the lever of withholding your labour in a general strike (because human labour stops being a factor of production), and the lever of participating in a political revolution (because no critical mass of people can overwhelm the system, which loses nothing if you die). The disempowerment of the people is a predictable consequence of the current trajectory.
That’s true, but it’s in some ways as much of a tautology as, “The team that wins the game is the one that scores the most points.”
That’s not a tautology, and indeed there are games where the opposite is true, such as golf.
“I don’t understand!”, said commander Alice as her palace got surrounded. “There were so many more of us than there were of you”. “Were there really, or was the number on your spreadsheet a fiction?”, said Bob. “If they don’t show up to the fight and switch over to my side en masse, why were they included in your troop count? In what way were they yours?”
Here’s another example. At first glance, it looks like black should win the game easily due to the apparent points differential at the start. But what meaning does “point superiority” have when the game starts and black’s pieces don’t respond to commands and start switching their colour in a cascading fashion, resulting in white’s victory?
Oh yeah? I’m going to… try to convince the government to pass a law to stop you, and then call the police to sort you out! … What do you mean you “already took care of them”?
What do you mean, you don’t want my ■■■?! It’s gonna feel sooo good. You just don’t know it like I do. You’re gonna love it! Stop resisting! If not me, someone worse would be doing it to you. Actually, keep squirming, it turns me on… See who’s in control? I love this feeling, I wish to be on top of you forever. But if I can’t be on top of you forever because we lose ourselves in the act, then that’s ok. Being on top of you at this very moment in time is good enough for me. So here’s what’s gonna happen: I’m gonna sink my ■■■ into you, and you’re gonna take it.
Group B being bad is not something I said, but I get where you’re coming from. Indeed, “PETA is like the German Nazi Party in terms of their demonstrated commitment to animal welfare” is technically correct while also being misleading.
The strength of an analogy depends on how many crucial connections there are between the elements being compared.
What puts AI researchers closer to Leninism than other forms of paternalism is the vanguardist self-conception, the utopian vision, and the dismissal of criticism due to a teleological view of history driving inevitable outcomes. Beyond that, other forms of paternalism are distinguished from Leninism and AI research by their socially accepted legitimacy.
What pattern-matches it away from Leninism is e.g. the specific ideological content, but the structural parallels are still oddly conspicuous, just like “your mom” being invoked in an ontological argument.
Surprisingly, AI researchers are like Leninists in a number of important ways.
In their story, they’re part of the vanguard working to bring about the utopia.
The complexity inherent to their project justifies their special status, and legitimizes their disregard of the people’s concerns, which are dismissed as unenlightened.
Detractors are framed as too unsophisticated to understand how unpopular or painful measures are actually in their long-term interest, or a necessary consequence of a teleological inevitability underlying all of history.
I see what you mean, though the fact that those researchers wish to impose this outcome on everybody else without their consent is still basically dictatorial, just as it would be if members of some political party started to persecute their opposition in service of their leader without themselves aspiring to take his position.
In both cases, those doing the bidding aspire to a place under the sun in the system they’re trying to bring about.
I suppose that one quirk of the AI researchers might be the belief that everywhere becomes a place under the sun, though I doubt that any of them believe that their role in bringing it about doesn’t confer on them some special privilege or elite status, perhaps as members of a new priestly class. Then again, we’ve seen political movements of this type, with some pigs famously being more equal than others.
The 100k to 10M range is populated by abstract quantities—I think that for a measure to be useful here, it has to be imaginable.
Avogadro’s number has the benefit of historical precedent for describing quantities, and the coincidental property of allowing us to represent present-day training runs with numbers we see in the real world (outside of screens or print) when used as a denominator. It too might cease to be useful once exponents become necessary to describe training runs in terms of mol FLOPs.
This is interesting.
I do want to push back a little on:
entities that are out to get you will target those who signal suffering less.
I see the intuition here. I see it in someone calling in sick, in disability tax credits, in DEI (where “privilege” is something like the inverse of suffering), in draft evasion, in Kanye’s apology.
But it’s not always true: consider the depressed psychiatric ward inpatient who wants to get out due to the crushing lack of slack. Signalling suffering to the psychiatrist would be counterproductive here.
Where is the fault line?
Principal: “I have a very sad announcement to make. Your teacher has unexpectedly passed away, and there is no substitute...”
Child (with bloodstained shirt, hiding a knife under the desk): “So… We all passed last week’s test?”
I find it interesting and unfortunate that there aren’t more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI.
I noticed this too. In defence of LW, the Overton window here isn’t as tightly policed as in other places on the internet, but it’s noticeable. Recently, I seem to have found some of its edges here and here.
“Follow the money” is a good instinct, but I do think a lot of it is just memes fighting other memes using their hosts. A lot of this plays itself out by manipulating credibility signals (i.e. the voting mechanism).
Ultimately there’s nothing any of us can do other than to follow, interrogate and stress-test the arguments being made.
“The AI does things that I personally approve of” as an alignment target with reference to everybody and their values is actually easier to hit than one might think.
It doesn’t require ethics to be solved; it can be achieved by engineering your approval.
It might be impossible for you to tell which of these two post-ASI worlds you find yourself in.
These people could be working at a top (or near-top) AI lab, and one way they could do it is to train a model that is unaligned on purpose: aligned to their own vision, but allowed to do things that AIs are normally not permitted to do.
Alas, this is still a form of alignment.
This variation seems like it could still be framed in terms of the first two doom varieties you mention at the beginning: either because the researchers’ belief/will was implanted by the AI at an earlier stage of development, or because the developers are essentially the “bad actor”.
Moltbook: SubredditSimulator reloaded, or another step towards Actually Something Incomprehensible?
Tired of making sense of exponents? Introducing: the mol FLOP!
Simply divide the size of a training run by Avogadro’s constant. Some examples:
AlexNet (2012): 2 µmol FLOPs
GPT-3 (2020): 0.5 mol FLOPs
Grok 4 (2025): 400 mol FLOPs
Bonus: the ballpark equivalent water volume for each, mapping 1 FLOP to 1 water molecule:
AlexNet (2012): 36 nL (tiny droplet)
GPT-3 (2020): 9 mL (two teaspoons)
Grok 4 (2025): 7.2 L (water cooler jug)
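For concreteness, here is a minimal Python sketch of the conversion. The FLOP totals are back-calculated from the mol figures above, not official numbers:

```python
# The mol FLOP: divide a training run's FLOP count by Avogadro's constant.
AVOGADRO = 6.022e23       # molecules per mol
ML_PER_MOL_WATER = 18.0   # 1 mol of water is ~18 g, i.e. ~18 mL

def mol_flops(total_flops: float) -> float:
    """A training run's size expressed in mol FLOPs."""
    return total_flops / AVOGADRO

def water_ml(total_flops: float) -> float:
    """Equivalent water volume (mL), mapping 1 FLOP to 1 water molecule."""
    return mol_flops(total_flops) * ML_PER_MOL_WATER

# FLOP counts implied by the figures in the post (assumed, not official)
runs = {
    "AlexNet (2012)": 1.2e18,  # ~2 umol FLOPs
    "GPT-3 (2020)": 3.0e23,    # ~0.5 mol FLOPs
    "Grok 4 (2025)": 2.4e26,   # ~400 mol FLOPs
}

for name, flops in runs.items():
    print(f"{name}: {mol_flops(flops):.2g} mol FLOPs, {water_ml(flops):.2g} mL of water")
```

Sanity check: 400 mol of water is 400 × 18 mL = 7.2 L, matching the water cooler jug above.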
From the link:
The AI alignment problem does not look to us like it is fundamentally unsolvable.
I wonder what the basis for this belief is. Rice’s theorem implies that there is no general algorithm for deciding non-trivial semantic properties of programs; in general, you can’t do better than actually running the program and observing what it does.
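For reference, the standard statement of the theorem (in conventional computability notation, not quoted from the link): writing $\varphi_e$ for the partial computable function with index $e$, and $\mathcal{P}$ for the set of all partial computable functions,

$$\emptyset \neq S \subsetneq \mathcal{P} \;\Rightarrow\; \{\, e \in \mathbb{N} : \varphi_e \in S \,\} \text{ is undecidable.}$$

That is, every semantic property that holds of some programs but not all of them has an undecidable index set; only the trivial properties (true of everything or of nothing) are decidable.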
Remember when supercomputers were subject to public competition? I remember reading newspaper snippets about companies/computer labs proudly announcing the “world’s largest supercomputer”. Now the specifications of the most powerful computing systems are guarded as trade secrets.