Is there a statistically significant difference in how Democrats, Independents and Republicans rank different risks from AI?
Petropolitan
Does the company present the statistical uncertainty, or do you have to calculate it yourself?
the remaining third is split exactly in half on whether preventing AI x-risk feels like a Democratic or Republican issue
I expect this to change soon: there’s a very large difference between the parties regarding trust in experts in general and in academia specifically (and we know academia and industry hold different opinions on AI risks).
And do you think you could poll on the other AI risks you identified? I expect there to be a party difference there.
Also, maybe you could poll respondents for their political affiliation before asking the questions
The most common reason people stop counting as participating in the labor force is that they grow old and living off savings, passive income, a pension and/or social benefits becomes better than continuing to work, which we call retirement. With the global graying of the population, 50% of formerly working people will eventually become permanently unemployable in this sense even without AI progress.
Also, note that Finland has a ~10% unemployment rate and they are quite OK because of the social safety net. If AI were heavily taxed and those funds were used to support the population suffering job losses (implausible for the US indeed, but plausible for Europe), then even in the absence of “strong AGI” people might choose not to work in order to receive welfare without actually being unemployable.
(Yeah I do care but Toby has not left a single comment here)
I realized I’m not sure how you define “50% of people permanently unemployable”. Surely it isn’t about the global population? Is it about the global labor force (which is ~45% of the global population) or about developed countries only?
As of 2019, about a quarter of global labor force worked in primary agricultural production (mostly smallholder farmers who might only be impacted by AI indirectly, such as natural gas going to data centers instead of fertilizer plants) and half as much were employed in “off-farm segments of agrifood systems”. Surely people need to eat and those jobs are here to stay.
Please define specifically: 50% of which people in particular, and what does “permanently unemployable” mean exactly? (For example, what about a laid-off white-collar worker who can return to their parents’ village and get a job at a local shop or school?)
the pace of conceptual work on AI algorithms is like >100x faster
In such a case I expect these AI researchers to pick all the low- and medium-hanging fruit at the then-current compute level/hardware technology, after which algorithmic progress saturates until new-gen chips are produced in quantity. Check this: https://www.lesswrong.com/posts/sGNFtWbXiLJg2hLzK
Why can’t Lockheed and Raytheon simply make way more of them?
The problem is not technological, it’s political and economic. We know how to scale the production (it’s really 20th-century tech); Congress just doesn’t allocate the funds. Half a billion dollars for a new plant is not really that large a figure for a country that spends over a trillion dollars on defense annually, but the priorities are not there (or maybe I should have said “the lobbyists are not there,” but I don’t want to go deep into politics).
E.g., Raytheon claims to have the capacity to produce 600 Tomahawks per year but has orders for only ~100 (for reference, the ~170 Tomahawks expended on Iran so far cost ~$600M). I guess (but have not checked) it’s more or less similar with the other munitions you list.
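The figures above imply a rough unit cost and an idle-capacity value that are easy to check. A back-of-the-envelope sketch (the ~$600M, ~170-missile, 600/year and ~100/year numbers are from the comment above; treat the results as order-of-magnitude only):

```python
# Implied unit cost from the figures quoted in the comment above
total_cost = 600e6      # ~$600M spent on Tomahawks against Iran
missiles_fired = 170    # ~170 Tomahawks expended
unit_cost = total_cost / missiles_fired
print(f"~${unit_cost / 1e6:.1f}M per missile")

# Value of the claimed spare production capacity (600/yr capacity vs ~100/yr orders)
spare_capacity = 600 - 100
print(f"~${spare_capacity * unit_cost / 1e9:.1f}B/year of unused capacity")
```

At ~$3.5M per missile, the unused capacity is worth on the order of $1.8B a year, which puts the “half a billion dollars for a new plant” figure in context.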
Except for the period between Vietnam and Yugoslavia, the US historically went into wars without significant stockpiles of munitions, and either ended the war before they ran out or used its industrial capability to ramp up production in the process.
Well, there are plenty of long takes on X which are obviously based on the authors’ own ideas but are LLM-generated (even before one runs them through a detector) and still get pretty popular, with the audience not smelling an LLM. Do you count that as good or bad writing? I honestly don’t enjoy reading them for some reason, even when I agree the underlying ideas make sense; on the other hand, these authors reached a wider audience than they presumably would have without an LLM.
“dynamite” (no relation)
Really? I have always thought your nickname is a pun on this word!
As long as there are only a few nuclear states, the absence of nuclear wars doesn’t seem unusual or unexpected, but if the non-proliferation paradigm were to fall apart and multiple new states got bombs within a decade or two, the situation would likely worsen significantly.
If a company mines crypto at scale and gets caught, what would be the punishment, if any?
A Manifold market: https://manifold.markets/MaxHarms/did-alibabas-rome-ai-try-to-break-f
Note that cryptocurrency mining is prohibited in China, although I was unable to find legal details (presumably it’s punishable by fines proportional to scale).
See also https://www.astralcodexten.com/p/sakana-strawberry-and-scary-ai from 2024
entity, person or corporation, listed as owning the property with the tax
Why can’t the land be owned by tax-exempt organizations such as churches, charities or universities and then rented to rich people? It seems to me your suggestion is as loophole-prone as the other ones proposed in the past.
I agree that the models served to civilian customers over an API can’t realistically be secured from state adversaries, but if we are speaking about advanced AI R&D in the future, as in AI 2027, then it looks feasible to conduct it on protected servers. Maybe I misunderstood the author’s opinion.
US investors
I think the essay could have been significantly shorter if you concentrated on this issue alone. US VC investment reached $340B in 2025 (about 60% of the global capacity) while it was only $58B in Europe according to Crunchbase, and the visible part of the Chinese VC market is even smaller.
Lots of ink has been spilled on the reasons why, but suffice it to say, it’s nowhere near enough to train at scale in the second half of 2026, and European taxpayers don’t want state-funded AI programs either.
I believe these things are connected: if the server and the software system in general are secure enough to handle lots of classified information on a regular basis, they are secure enough to store the weights as well.
First of all, if the share of Ls in the deck is higher than usual, you can always consult the table about what to do before the turn.
If you are a liberal president and you drew two Ls and an F, it’s better to pass LF at the beginning of the game, and in rare situations later in the game when you urgently need to find a liberal player. In this case the information you and the team get about your chancellor is likely worth more than the risk of a fascist policy being adopted. If the chancellor chooses to discard an L, which is actually usually optimal for a regular fascist, the liberal team will assume that there was probably one liberal and one fascist in your government and will have an easier time finding the liberals and fascists among the rest. If the chancellor chooses to camouflage as a liberal, you will become one liberal policy closer to a win and can probably uncover him later.
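The “consult the table” point can be made quantitative: the composition of a 3-card draw follows a hypergeometric distribution over the current deck. A minimal sketch, assuming the standard Secret Hitler starting deck of 6 Liberal and 11 Fascist policies (the deck counts are from the official rules; the helper function and its name are mine):

```python
from math import comb

def draw_probs(liberals: int, fascists: int, draw: int = 3) -> dict[int, float]:
    """Probability of each possible number of Liberal policies in a
    `draw`-card hand from a deck of `liberals` L and `fascists` F cards
    (hypergeometric distribution)."""
    total = liberals + fascists
    return {
        k: comb(liberals, k) * comb(fascists, draw - k) / comb(total, draw)
        for k in range(draw + 1)
        if k <= liberals and draw - k <= fascists
    }

# Fresh deck: 6 L, 11 F
for k, p in draw_probs(6, 11).items():
    print(f"{k} L in hand: {p:.1%}")
```

From a fresh deck, drawing two or more Ls happens a bit over a quarter of the time, so the LF-pass situation described above is not rare; as the deck composition shifts, recomputing with the updated counts is exactly what “consulting the table” amounts to.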
After thinking about the recent viral Citrini thought experiment and a bit more research I think I was able to sharpen my thesis a bit!
Transaction costs were divided into three broad categories by Dahlman in 1979:
search and information costs are driven by AI agents to essentially zero, that’s pretty obvious;
bargaining and decision costs depend on the nature of the transaction itself: if everything is easy to specify (say, in a contract), capable agents can negotiate and decide very cheaply; otherwise economic actors need trust and relationship continuity rather than spot-market efficiency (in which case, BTW, I expect RLVR to be practically doomed to fail);
policing and enforcement costs are likely going to increase, and if one combines crypto with agents prone to prompt injection, possibly even dominate the transaction costs entirely.
Even though most of the online intermediaries we use (delivery apps, ticket aggregators, Airbnb etc.) mostly center around search and information (as that was the easiest to automate with 2010s tech), eliminating search costs doesn’t in fact immediately favor a free market over intermediaries. Instead, future intermediaries will to a large degree fight fraud and money laundering.
robustness to state-backed hacking programs was unachievable
How do you reconcile that with the fact that Claude has recently been used by the US Government to process classified information? Presumably they have a special version on special servers for that, but still, this looks like some degree of robustness that might be achieved with a model not served to a wide audience.
I don’t think these conflicts show what you think they show.
In the former case, drones and riflemen fight together on both sides, with both sides capable of innovating and copying innovations. If anything, the conflict shows that thanks to drones, infantry grunts are as important as ever and expensive armor (although not obsolete and still necessary) is relatively less important than a generation ago.
In the latter case, the “uninformed” demonstrated that they can saturate their neighbors’ air defenses with cheap drone technology and even occasionally shoot down ~$100M jets with SAMs built from COTS chips (and even occasionally model turbojets). If they retain the HEU and their (illegal) toll booth in the strait after the war, the world will see them as victors regardless of the damage and casualties they suffered.
I believe that in a decade or two the main conclusion historians will draw from these two wars is that nothing but nukes (not expensive weapons, not cheap ones, not small armies, not large ones, and certainly not foreign security guarantees) can defend a country’s sovereignty if it’s in a geopolitical flashpoint, and even if a country isn’t in one now, a flashpoint might appear from nowhere within a decade on a border with an autocratic state.
Coupled with the steadily falling technological bar to nuclear proliferation, this seems to imply that the current nonproliferation paradigm will eventually break down and future generations will live in a much less safe world, with far more nuclear-armed states and a much heightened risk of accidental nuclear war.