Putin hates people he considers traitors, and his definition of such is expansive. There would be an individual s-risk for those who sought to dethrone him. However, he does appear to want immortality for himself, and I suspect this extends to the population at large. Someone like Elon Musk, though much higher in openness than Putin, is also personally vindictive, and, unlike Putin, is ideologically opposed to life extension. So I am not sure that Putin as AI overlord would be a highly subpar outcome relative to Musk as AI overlord, or even subpar at all. (Certainly I would consider Putin far superior to movements or people who fundamentally reject modernity, such as Islamists or Far Right trads and Nazis. And likewise obviously far inferior to the frontier AI labs, EA/LW, and conventional liberal democratic factions like the Dems and Eurocrats.) Obviously Putin would immediately move to fulfill his particular world optimization visions, so no more Ukraine (or Belarus, or probably independent Baltics), but then again, any post-AGI environment, be it under Putin or anyone else, will quickly become so weird that I don’t know if pre-AGI geopolitical obsessions will remain relevant for long. Indeed, it seems unlikely that any AI overlord would long remain consumed by questions of who owns what clay in the face of the cosmic possibilities unlocked to them.
akarlin
[Crosspost] Anthropic Shadow Geopolitics
Frontier LLM performance on offline IQ tests is improving at perhaps 1 S.D. per year, and may recently have become even faster. These tests are a good measure of human general intelligence. One more such jump and there will be PhD-tier assistants for $20/month. At that point, I expect any lingering problems with invoking autonomy to be quickly fixed as human AI research acquires a vast multiplier through these assistants; a few months later, AI research becomes fully automated.
The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere—including humans—at an adequate or superlative level of comfort or fulfillment, or to help them ascend themselves, whether out of ethical considerations, for research purposes, or for simulation/karma-type considerations.
In a multipolar scenario of gazillions of AIs at Malthusian subsistence levels, none of that matters in the default case. Individual AIs can be as ethical or empathic as they come, even much more so than any human. But keeping the biosphere around would be a luxury, and any that try to do so will be outcompeted by more unsentimental, economical ones. A farm that can feed a dozen people, or an acre of rainforest that can support x species, can support a trillion AIs if converted to high-efficiency solar panels.
The second scenario is near-certain doom, so at a bare minimum we should get a good inkling of whether the AI world is more likely to be unipolar or oligopolistic, or massively multipolar, before proceeding. So a pause is indeed needed, and the most credible way of effecting it is a hardware cap and subsequent back-pedaling on compute power. (Roko has good ideas on how to go about that and should develop them here and on his Substack.) Granted, if anthropic reasoning is valid, geopolitics might well soon do the job for us. 🚀💥
It’s not at all insane IMO. If AGI is “dangerous” x timelines are “short” x anthropic reasoning is valid...
… Then WW3 will probably happen “soon” (2020s).
https://twitter.com/powerfultakes/status/1713451023610634348
I’ll develop this into a post soonish.
It’s ultimately a question of probabilities, isn’t it? If the risk is ~1%, we mostly all agree Yudkowsky’s proposals are deranged. If 50%+, we all become Butlerian Jihadists.
My point is that I and people like me need to be convinced it’s closer to 50% than to 1%, or failing that, we at least need to be “bribed” in a really big way.
I’m somewhat more pessimistic than you on civilizational prospects without AI. As you point out, bioethicists and various ideologues have some chance of tabooing technological eugenics. (I don’t understand your point about assortative mating; yes, there’s more of it, but does it now cancel out regression to the mean?) Meanwhile, in a post-Malthusian economy such as ours, selection for natalism will be ultra-competitive. The combination of these factors would logically result in centuries of technological stagnation and a population explosion that brings the world population back up to the limits of the industrial world economy, at which point Malthusian constraints reassert themselves, probably in quite a grisly way (pandemics, dearth, etc.), until Clarkian selection for thrift and intelligence reasserts itself. It will also, needless to say, be a few centuries in which other forms of existential risk remain in play.
PS. Somewhat of an aside, but I don’t think it’s a great idea to throw terms like “grifter” around, especially when the most globally famous EA representative is a crypto crook (who literally stole some of my money; a small % of my portfolio, but nonetheless, no e/acc person has stolen anything from me).
I disagree with AI doomers, not in the sense that I consider it a non-issue, but in that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of bad outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives amongst other things), what may well be a delay in life extension timelines by years if not decades, resulting in 100Ms-1Bs of avoidable deaths (this is not just my supposition, but that of Aubrey de Grey as well, who has recently commented on Twitter that AI is already bringing LEV timelines forwards), and even outright technological stagnation (nobody has yet canceled secular dysgenic trends in genomic IQ). I leave unmentioned the extreme geopolitical risks from “GPU imperialism”.
While I am quite irrelevant, this is not a marginal viewpoint—it’s probably pretty mainstream within e/acc, for instance—and one that has to be countered if Yudkowsky’s extreme and far-reaching proposals are to have any chance of reaching public and international acceptance. The “bribe” I require is several OOMs more money invested into radical life extension research (personally I have no more wish to die of a heart attack than to get turned into paperclips) and into the genomics of IQ and other non-AI ways of augmenting collective global IQ, such as neural augmentation and animal uplift (to prevent long-term idiocracy scenarios). I will be willing to support restrictive AI regimes under these conditions, even if against my better judgment; but if there are no such concessions, it will have to be open and overt opposition.
OK, I will refrain from continuing this thread beyond this reply, but I would like to more fully expound on this idea before I go. I think such comparisons are useful and important because the list of plausible candidates for AI overlordship is actually quite small, so their personalities and politics can be meaningfully discussed in this context. This list includes the handful of frontier labs and their CEOs; Xi/Xi’s successor/”the CPC”; Musk; Trump; Vance, Rubio, Newsom, and the half dozen other Americans who might plausibly be President in 2028-32; the “US”… and beyond that, it rapidly diffuses out into much larger collectives such as “The Internet”, “humanity”, “the noosphere”—or banal extinction. (Incidentally, though, I do agree that Putin is barely worth talking about, because Russia’s chances of being first to AGI are ~0%.) People who attain outsized political and business success tend to be much more “odd” than the population average; it is genuinely difficult to explain much of what is happening in both domestic politics and geopolitics without accounting for their psychological quirks; and I think it is very plausible that the impact of these individual personality factors would, if anything, be magnified to new extremes were they to be given the opportunity to emanate their CEV across all of humanity.