Senator Bernie Sanders is planning to introduce legislation that would ban the construction of new AI data centers. You can find his video announcement here, and here is the transcript:
Thanks very much for joining me. I will soon be introducing legislation calling for a moratorium on the construction of new data centers.
Now, as a result, I’ve been called a luddite, anti-innovation, anti-progress, pro-Chinese, among many other things. So why am I doing that? Why am I calling for a moratorium on the construction of new data centers?
Bottom line: We are at the beginning of the most profound technological revolution in world history. That’s the truth. This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job displacement. It will threaten our democratic institutions. It will impact our emotional well-being, and what it even means to be a human being. It will impact how we educate and raise our kids. It will impact the nature of warfare, something we are seeing right now in Iran.
Further, and frighteningly, some very knowledgeable people fear that what was once seen as science fiction could soon become a reality—and that is that superintelligent AI could become smarter than human beings, could become independent of human control, and pose an existential threat to the entire human race. In other words, human beings could actually lose control over the planet.
And in the midst of all of this transformative change, what I have to tell you is that the United States Congress hasn’t a clue, not a clue, as to how to respond to these revolutionary technologies and protect the American people. And not only do they not have a clue, they’re busy raising money all day long from the AI industry and its super PACs, which is a whole other problem.
As many of you know, the AI revolution is being pushed by the wealthiest people in our country, including Elon Musk, Jeff Bezos, Larry Ellison, Mark Zuckerberg, Peter Thiel, and others. All of these people are multi-billionaires who, if they are successful at AI, will become even richer and more powerful than they are today.
What I want to do now is not tell you my fears regarding AI and robotics. I want you to actually hear from them, the billionaires who are pushing these technologies. Listen carefully to what they are saying.
Elon Musk, wealthiest person alive, stated that quote, “AI and robots will replace all jobs.” All jobs. “Working will be optional.” End of quote.
Dario Amodei, the CEO of Anthropic, predicted that quote, “AI could displace half of all entry-level white collar jobs in the next 1 to 5 years.” And that quote, “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” End quote. That’s Amodei.
According to Demis Hassabis, the head of Google’s DeepMind—this is Google’s DeepMind—the AI revolution will be 10 times bigger than the industrial revolution, and 10 times faster. All right, you got that? That means it will have a 100 times greater impact on society than the industrial revolution had.
Jeff Bezos, the fourth richest person in the world, has been pushing his staff for years to think big and envision what it would take for Amazon, which he owns, to fully automate its operations and replace at least 600,000 warehouse workers with robots. 600,000 jobs gone. Robots doing the work.
Bill Gates, also one of the wealthiest people on Earth, predicted that humans, quote, “won’t be needed for most things,” end quote, such as manufacturing products, delivering packages, or growing food over the next decade, due to artificial intelligence.
Mustafa Suleyman, the CEO of Microsoft AI, said most white-collar work quote, “will be fully automated by an AI within the next 12 to 18 months” end quote.
Jim Farley, the CEO of Ford, predicted that AI will eliminate quote, “nearly half, literally half, of all white-collar jobs in the US” end quote, within the next decade.
I want you to hear this one. Larry Ellison—also one of the richest people on Earth, and a major investor in AI—said that there will be an artificial intelligence-powered surveillance state where, quote, “citizens will be on their best behavior because we’re constantly recording and reporting everything that is going on.” End quote.
Dr. Geoffrey Hinton, considered to be the “godfather of AI,” believes there is a quote “10% to 20% chance for AI to wipe us out.” End quote.
Mark Zuckerberg, the fifth richest person in the world, is building a data center in the state of Louisiana—a data center that is the size of Manhattan, and will use three times the quantity of electricity that the entire city of New Orleans uses every year.
All right. Now, for many years now, leading experts have called for regulation and reasonable pauses to the development of artificial intelligence, to ensure the safety—the very safety—of humanity. Let’s go back to our good friend Elon Musk. He said back in 2018, quote—this is Elon Musk—“Mark my words, AI is far more dangerous than nukes. So why do we have no regulatory oversight? This is insane.” End quote, Elon Musk.
In March of 2023, over 1,000 business leaders in the big tech industry, prominent scientists, AI researchers, and academics co-signed an open letter entitled, quote, “Pause Giant AI Experiments” end quote, stating,
“We must ask ourselves: should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4; this pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” End of quote.
That is what some of the leaders in the AI industry have said. And clearly where we are right now, is that there has not been any pause. There has been massive amounts of competition between one company and the other, between the United States and China. So: bottom line is that, in my view, to protect our workers from losing their jobs, to protect human beings from attacks on their mental health, to protect our kids, to protect the safety of human life: yeah, we need a moratorium on data centers. We need to take a deep breath. We need to make sure that AI and robotics work for all of us, not just a handful of billionaires. Thanks very much.
It’s the kind of action that, when universalized, does indeed end the AGI death race! It is, in an important sense, a proposal to end the AGI death race.
If everyone stopped building datacenters you really have made a lot of progress towards stopping the death race (and of course, the algorithm that produces banning datacenter construction would probably not stop there).
I think this is a common misconception. I’m pretty sure algorithmic progress will eventually reach a point where what currently takes a datacenter will be possible on a single machine for a slightly longer training period. If that same algorithm runs on a datacenter it would produce something superhuman, but cutting down to only single-gpu training would then not be enough to completely stop. Algo progress is a slow slog of “grad student descent”, so it likely takes quite a bit longer, and maybe it takes enough longer to figure out alignment. But it doesn’t stop the death race, it just slows it down. Actual stopping would require shredding all silicon big enough to even run the fully trained AI, which doesn’t seem to be in the cards. I’m not saying datacenter construction is good or should continue, or that this won’t buy time, but I think people are wishful-thinking about how much time it buys.
Agree qualitatively (and possible quantitatively). However, there’s a quite large knock-on effect, which is a strong bundle of signals of “AGI is bad, don’t make AGI”. These signals move in various directions between different entities, carrying various messages, but they generally push against AGI. (E.g. signaling legitimacy of the Stop position; the US signaling to other states; society signaling to would-be capabilities researchers; Congress self-signaling “we’re trying to ban this whole thing and will continue to add patches to ban dangerous stuff”; etc.)
I mean, sure, eventually. The key question is how much of algorithmic progress is downstream of hardware scaling. My sense is around 50% of it, maybe a bit more, so that if you cut scaling, progress now happens at around 1/4th of the speed, which is of course huge and makes things a lot better.
Thinking this through step by step in the framework of the AI Futures Model:
First, I’ll check what the model says, then I’ll reconstruct the reasoning behind why it predicts that.
By default, with Daniel’s parameters, Automated Coder (AC) happens in 2030 and ASI happens 1.33 years later, in 2031.
If I stop experiment and training compute growth at the start of 2027, then the model predicts Automated Coder in 2039 rather than 2030. So 4x slower in calendar time (exactly matching habryka’s guess). It also looks to have well over a 5-year takeoff from AC to ASI, as opposed to the default of 1.33 years.
However, this is highly sensitive to the timing of the compute growth pause, because it’s a shock to the flow rather than the stock. e.g. if I instead stop growth at the start of 2029 as in this worksheet, then AC happens in Mar 2031, taking ~2.2 years instead of ~1.2, so slowing things down by <2x. It does still slow down takeoff from AC to ASI to 4 years, so by ~3x (and this is probably at least a slight underestimate because we don’t model hardware R&D automation).
Now I’ll reconstruct why this is the case using simplifications to the model (I actually did these calculations before plugging the time series into our model).
Currently, experiment compute is growing at around 3x/year, and human labor around 2x/year. Conditional on no AGI, we’re projecting experiment compute growth to slow to around 2x/year by 2030, and human labor growth to slow to around 1.5x/year.
Figuring out the effect of removing experiment growth is a bit complicated for various reasons.
On the margin, informed by interviews/surveys, we model a ~2.5x gain in “research effort” from 10x more experiment compute. If applied instantaneously, that would mean a 2.5x slowdown in algorithmic progress.
We estimate that a 100x in human parallel coding labor gives a ~2x in research effort on the margin. We don’t model quantity of research labor, but I expect probably the gains would be relatively small as quality matters a lot more than quantity; let’s shade up to 3x.
So naively, based on our model parameters and a simplified version of our model (a Cobb-Douglas used to locally approximate a CES), by default the growth in research effort per year from experiment compute is 2.5^log10(2 to 3) = ~1.3-1.55x, and from human labor is sqrt(3)^log10(1.5 to 2) = ~1.1-1.2x. Meaning that roughly log(1.4)/(log(1.4)+log(1.15)) = ~70% of research effort growth is coming from experiment compute.
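For concreteness, here is a short sketch reproducing the arithmetic above (assuming the logs are base 10, which the quoted numbers imply, and that 100x coding labor giving the shaded-up ~3x in research effort corresponds to sqrt(3) per 10x):

```python
import math

def effort_gain(per_10x, annual_growth):
    """Research-effort growth per year from an input growing at
    `annual_growth` x/year, given a `per_10x` effort gain per 10x of input."""
    return per_10x ** math.log10(annual_growth)

# 10x experiment compute -> ~2.5x research effort; compute grows 2-3x/year.
exp_lo = effort_gain(2.5, 2)   # ~1.32x/year
exp_hi = effort_gain(2.5, 3)   # ~1.55x/year

# 100x parallel coding labor -> ~3x effort (shaded up), i.e. sqrt(3) per 10x;
# labor grows 1.5-2x/year.
lab_lo = effort_gain(math.sqrt(3), 1.5)  # ~1.10x/year
lab_hi = effort_gain(math.sqrt(3), 2)    # ~1.18x/year

# Share of research-effort growth coming from experiment compute, using
# rough midpoints of ~1.4x (compute) and ~1.15x (labor).
share = math.log(1.4) / (math.log(1.4) + math.log(1.15))  # ~0.70
```

The ranges match the ~1.3-1.55x, ~1.1-1.2x, and ~70% figures above.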
So to simplify from now on, let’s think about what happens if research effort growth takes a one-time shock down to a constant 30% of what it otherwise would have been.
What does this actually mean in terms of the effect on algorithmic/software progress? It means the shock in research effort growth will eventually cause the software growth rate to be 30% of what it would have been otherwise, but it steadily decreases toward 30% over time (intuitively, the immediate effect is 0 because you have to actually wait for the “missing new experiment compute” to take effect).
The above graph seems to give decent intuition for why the averaged-over-the-relevant-period slowdown in software progress from no more experiment compute might be about 2x (with the 2027.0 growth stoppage), and therefore the overall slowdown from no more compute might be about 4x. It looks like the average of the first 12 years might be close to 0.5.
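One toy way to get the same intuition without the graph: treat the relative software growth rate as starting at 1 (the missing compute hasn’t bitten yet) and decaying exponentially toward 0.3. The 3.5-year decay timescale here is purely my own illustrative assumption, not a parameter taken from the AI Futures Model:

```python
import math

PAUSED_SHARE = 0.3   # research-effort growth drops to 30% of baseline
TAU = 3.5            # assumed decay timescale in years (illustrative only)

def relative_software_growth(t):
    """Software-progress growth rate relative to baseline: starts at 1
    and decays toward PAUSED_SHARE as the missing compute takes effect."""
    return PAUSED_SHARE + (1 - PAUSED_SHARE) * math.exp(-t / TAU)

# Average relative growth over the first 12 years (numerical integration).
dt = 0.01
avg = sum(relative_software_growth(i * dt) for i in range(int(12 / dt))) * dt / 12
# With this timescale the average comes out near 0.5, i.e. a ~2x slowdown
# in software progress, matching the eyeballed figure above.
```

Under this assumption the 12-year average lands close to 0.5, consistent with the ~2x software slowdown and hence the ~4x overall slowdown described above.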
All of the above is ignoring automation for simplicity. Taking automation into account would mean that more of the gains come from labor, so the slowdown is smaller. But on the other hand, as you get more coding labor you get more bottlenecked on experiment compute (which the Cobb-Douglas doesn’t take into account); in the AIFM you’d eventually get hard-bottlenecked, with a maximum of 15x research effort gain from coding labor increases alone. It looks like these factors and other deviations from the model might roughly cancel out in the case we’re considering.
But it doesn’t stop the death race, it just slows it down.
My current default assumption is that, yes, someone will build something that obsoletes human intellectual and physical labor. In those futures, the best-case scenario is “Maybe the AIs like keeping humans as pets (and won’t breed us the way we breed pugs).”[1] And the other alternative futures go sharply downhill from there.
So I think of delay in terms of survivor curves, like someone with terminal cancer. How much time can we buy the human race? Can the children alive today get to enjoy a decent lifetime for however long we all have? So I heavily favor delay, in much the same way that I’d favor cancer remission.
A global AI halt might even buy us quite a bit of time.
If I had to bet on a specific model as liking humans and being a responsible “pet owner”, then I currently suspect we might have the best odds with a descendant of Claude. I do actually think that “enculturation” and building morally thoughtful models that like humans gives us non-zero chance of a more acceptable outcome. But I would still prefer humans to control their own destiny.
“Eventually”, sure, but I don’t think that’s operative here. If we had the ASI recipe and could study it safely for ten years, we’d find a way to implement it in a single datacenter. But discovering it in a single datacenter is much harder. There is actually something missing from current LLMs, a part of intelligence they just don’t have, and the only thing that seems to mitigate that issue is model size. So without ever-increasing model size and analysis of their training dynamics, I think any attempts to get the missing piece are throwing darts with the lights off. (To be fair, I have pretty unusual timelines compared to most of LW, so maybe what’s convincing to me shouldn’t be to you.)
I agree with the general concern, but it’d be clearly a move in the right direction on that front?
With this kind of proposal I’m more worried that it could lead to a unilateral slowdown just after having animated China to be much more aggressive on AI.
I agree with the general concern, but it’d be clearly a move in the right direction on that front?
agreed, I’m not saying don’t do it. I’m replying to habryka saying it proposes an end to the race.
With this kind of proposal I’m more worried that it could lead to a unilateral slowdown just after having animated China to be much more aggressive on AI.
Doesn’t the USA have enough datacenters to stay ahead of China in the race to fully human-replacing AI for quite a while, even with a unilateral hardware pause? I’m not sure of this claim, but my current impression is that the compute ratio is pretty dramatic.
I expect there’s already enough silicon in place to produce overwhelmingly superhuman AI that still has idiot-savant going on; completely stopping CPU and GPU production worldwide seems to me like it’d be a timeline-extending move, but not by more than a few years. Somewhere between 1.3x and 3x.
Doesn’t the USA have enough datacenters to stay ahead of china in the race to fully human-replacing AI for quite a while, even with a unilateral hardware pause?
As of June 2025, the US had 5x as much compute as China. I’d expect the gap has grown, with substantially more American than Chinese data centers coming online in the past ~9 months.
I don’t see how. If AGI/ASI is powerful, then the existing deployed compute will suddenly become more powerful when AGI happens. If that isn’t X-risk, then a few more data centers won’t change things. This only matters in long timelines where more data centers are required to get AGI. I don’t think that is the case. I think copying the mammalian neocortex etc. will get us there, and that doesn’t need more compute.
This is cool! I’m sad he spends so much of his time criticising the good part (AI doing tonnes of productive labour). I say this not because I want to demand every ally agree with me on every point, but because I want to early disavow beliefs that political expediency might want me to endorse.
It seems to me a meaningfully open question whether automating all human labor will end up net benefiting humans, even assuming we survive; of course it might, but I think much more dystopian outcomes also seem plausible. Markets tend to benefit humans because the price signals we send tend to correlate with our relative needs, and hence with our welfare; I think it is not obvious that this correlation will persist once humans become unable to generate economic value.
While I’m glad that these things are starting to be seriously discussed in the open by those in positions of power, this sounds like a blanket ban on all new datacenters? This seems too broad to me. The ideal moratorium would be on large GPU clusters, or large AI specific infrastructure, right?
On another note, I’m somewhat concerned that banning large compute infrastructure will concentrate pressure on algorithmic progress if there is no “low effort”[1] outlet of just scaling the compute. Maybe regulating the rate at which large scale compute infrastructure is constructed will still allow researchers to be “intellectually lazy”[2] by offloading progress onto compute scaling instead of innovating on algorithms.
I don’t actually think the researchers are intellectually lazy or that scaling is low effort, these just seem like the most succinct terms to express a lack of pressure to adapt in a specific direction.
I don’t think people are currently being intellectually lazy. They might be rationally spending effort in ways that produce less insight per compute but faster insight per month than they would if they had less compute. I do think that despite the way compute limitation would make people try harder, things are still worse than they would be with less compute. But not as much worse as it naively seems, because of what you’ve mentioned here.
It seems the pro-Trump Polymarket whale may have had a real edge after all. Wall Street Journal reports (paywalled link, screenshot) that he’s a former professional trader, who commissioned his own polls from a major polling firm using an alternate methodology—the neighbor method, i.e. asking respondents who they expect their neighbors will vote for—he thought would be less biased by preference falsification.
I didn’t bet against him, though I strongly considered it; feeling glad this morning that I didn’t.
I don’t remember anyone proposing “maybe this trader has an edge”, even though incentivising such people to trade is the mechanism by which prediction markets work. Certainly I didn’t, and in retrospect it feels like a failure not to have had ‘the multi-million dollar trader might be smart money’ as a hypothesis at all.
Why do you focus on this particular guy? Tens of thousands of traders were cumulatively betting billions of dollars in this market. All of these traders faced the same incentives.
Note that it is not enough to assume that willingness to bet more money makes a trader worth paying more attention to. You need the stronger assumption that willingness to bet n times more than each of n traders makes the single trader worth paying more attention to than all the other traders combined. I haven’t thought much about this, but the assumption seems false to me.
Because I saw a few posts discussing his trades, vs none for anyone else’s, which in turn is presumably because he moved the market by ten percentage points or so. I’m not arguing that this “should” make him so salient, but given that he was salient I stand by my sense of failure.
Mmh, if there is no reason to take that particular trader seriously, but just the mere fact that his trades were salient, I don’t see why one should experience any sense of failure whatsoever for not having paid more attention to him at the time.
Still, my main point was about the reasons for taking that particular trader seriously, not the sense of failure for not having done so, and it seems like there is no substantive disagreement there.
Knowing now that he had an edge, I feel like his execution strategy was suspect. The Polymarket price went from 66c while his orders were going in back down to 57c over the 5 days before the election. He could have extracted a bit more money from the market if he had forecast the volume correctly and traded against it proportionally.
On one hand, I feel a bit skeptical that some dude outperformed approximately every other pollster and analyst by having a correct inside-view belief about how existing pollster were messing up, especially given that he won’t share the surveys. On the other hand, this sort of result is straightforwardly predicted by Inadequate Equilibria, where an entire industry had the affordance to be arbitrarily deficient in what most people would think was their primary value-add, because they had no incentive to accuracy (skin in the game), and as soon as someone with an edge could make outsized returns on it (via real-money prediction markets), they outperformed all the experts.
On net I think I’m still <50% that he had a correct belief about the size of Trump’s advantage that was justified by the evidence he had available to him, but even being directionally-correct would have been sufficient to get outsized returns a lot of the time, so at that point I’m quibbling with his bet sizing rather than the direction of the bet.
Norvid on Twitter made the apt point that we will need to see the actual private data before we can really judge. Not unusual for lucky people to backrationalize their luck as a sure win.
I can proudly say that though I disparaged the guy in private, I not once put my money where my mouth was, which means outside observers can infer that all along I secretly agreed with his analysis of the situation.
Arguments criticizing the FDA often seem to weirdly ignore the “F.” For all I know food safety regulations are radically overzealous too, but if so I’ve never noticed (or heard a case for) this causing notable harm.
Overall, my experience as a food consumer seems decent—food is cheap, and essentially never harms me in ways I expect regulators could feasibly prevent (e.g., by giving me food poisoning, heavy metal poisoning, etc). I think there may be harmful contaminants in food we haven’t discovered yet, but if so I mostly don’t blame the FDA for that lack of knowledge, and insofar as I do it seems an argument they’re being under-zealous.
Criticizing FDA food regulations is a niche; it is hard to criticize ‘the unseen’, especially when it’s mostly about pleasure and the FDA is crying: ‘we’re saving lives! Won’t someone think of the children? How can you disagree, just to stuff your face? Shouldn’t you be on a diet anyway?’
But if you go looking, you’ll find tons of it: pasteurized cheese and milk being a major flashpoint, as apparently the original unpasteurized versions are a lot tastier. (I’m reminded of things like beef tallow for fries or Chipotle—how do you know how good McDonald’s french fries used to taste before an overzealous crusader destroyed them if you weren’t there 30+ years ago? And are you really going to stand up and argue ‘I think that we should let people eat fries made with cow fat, because I am probably a lardass who loves fries and weighs 300 pounds, rather than listen to The Science™’?) There’s also the recent backfiring of overzealous allergy regulations, which threatens to cut off a large fraction of the entire American food supply to people with sesame & peanut allergies, due solely to the FDA. (Naturally, of course, the companies get the blame.) Similarly, I read food industry people noting that the effect of the ever-increasing burden of FDA regulations is a constant collapse of diversity, as everyone converges on a handful of safe ingredients and having to outsource to centralized food processors who can certify FDA compliance; but how would you ever see this browsing your local Walmart and looking at the colorful labels at the front? (Normal people do not spend much time reading the ingredients label and wondering why everything seems to be made out of the same handful of ingredients, starting with corn syrup.)
Those are great examples, thanks; I can totally believe there exist many such problems.
Still, I do really appreciate ~never having to worry that food from grocery stores or restaurants will acutely poison me; and similarly, not having to worry that much that pharmaceuticals are adulterated/contaminated. So overall I think I currently feel net grateful about the FDA’s purity standards, and net hateful just about their efficacy standards?
The ‘Food’ and the ‘Drug’ parts behave very differently. By default food products are allowed. There may be purity requirements or restaurant regulations but you don’t need to run studies or get approvals to serve an edible product or a new combination. By default drugs are banned.
I think the FDA is under-zealous about heavy metals and other contaminants, but overall it does a decent job of regulating food. The ‘drug’ side, however, is a nightmare. The two situations are de facto handled in very, very different ways, so it’s not obvious why an argument would cover both of them.
Have you ever visited a country without zealous food safety regulations? I think it’s one of those things where it’s hard to realize what the alternative looks like (plentiful, cheap, and delicious street food available wherever people gather, so that you no longer have to plan around making sure you either bring food or go somewhere with restaurants, and it is viable for individuals to exist without needing a kitchen of their own).
What countries are you imagining? I know some countries have more street food, but from what I anecdotally hear most also have far more food poisoning/contamination issues. I’m not sure what the optimal tradeoff here looks like, and I could easily believe it’s closer to the norms in e.g. Southeast Asia than the U.S. But it at least feels much less obvious to me than that drug regulations are overzealous.
(Also note that much regulation of things like food trucks is done by cities/states, not the FDA).
Mexico and Chile are the most salient examples to me. But also I’ve only ever gotten food poisoning once in my life despite frequent risky food behavior.
Strong agree that the magnitude of the overzealousness is much higher for drugs than for food.
Basically, because food is a domain with highly negative tail effects but not highly positive ones (conditional on eating food at all), it’s an area where you can afford to be restrictive. That is notably not the case for medicine, where both highly negative and highly positive tail effects exist, so you need to be more lenient in your standards.
I think one exception may be salmonellosis. In the USA, you get 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths every year. To compare, in the EU, selling contaminated chicken products is illegal, and when hundreds of people get sick, it becomes a scandal.
You have to refrigerate eggs in the US, while you don’t have to in the EU, because of FDA regulations about washing the eggs.
There’s a strain of pro-sugar and anti-fat policy regarding food for which the FDA shares some of the blame. Through that, it might have contributed to the obesity crisis.
It’s possible that the regulations on farmers markets reduce the amount of farmers markets (which might be central to lower obesity levels in France and better health).
When asked to compare French regulations for farmers markets with the US regulations, Claude told me:
Overall, while both countries prioritize food safety, the French system tends to be more flexible for small producers and traditional methods, whereas the US system is more uniformly applied regardless of scale. The French approach often allows for more diverse and traditional products at markets, while the US system provides more consistent safety standards across diverse regions.
You likely could argue that the FDA shares part of the blame for the obesity epidemic by setting bad incentives for low-fat and high-fructose-corn-syrup foods while at the same time making it harder for farmers markets to actually sell healthy food.
Apart from that you do have people who oppose forced labeling of GMO products which is also an FDA rule.
I think there may be harmful contaminants in food we haven’t discovered yet,
Do you see microplastics as “harmful contaminants that haven’t been discovered yet”? It would be possible to have regulations that limit the amount of microplastic in plastic drinking bottles but those currently don’t exist.
I’d be interested in an article looking at whether the FDA is better at regulating food safety. I do expect food is an easier area, because erring on the side of caution doesn’t really lose you much — most food products have close substitutes. If there’s some low but not extremely low risk of a chemical in a food being bad for you, then the FDA can more easily deny approval without significant consequences: Medicine has more outsized effects if you are slow to approve usage.
Yet, perhaps this has led to reduced variety in food choices? I notice fewer generic or lesser-known food and beverage brands relative to a decade ago, though I haven’t verified whether that background belief is accurate. I’d also be curious for such an article to investigate the extent of the barriers to designing a new food product, especially food products that aren’t doing anything new and are purely a mixture of ingredients already considered safe (or at least, considered allowed). Would there be more variety? Or notably cheaper food?
I was surprised to find a literature review about probiotics which suggested they may have significant CNS effects. The tl;dr of the review seems to be: 1) You want doses of at least 10^9 or 10^10 CFU, and 2) You want, in particular, the strains B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei.
I then sorted the top 15 results on Amazon for “probiotic” by these desiderata, and found that this one seems to be best.
Some points of uncertainty:
Probiotic manufacturers generally don’t disclose the strain proportions of their products, so there’s some chance they mostly include e.g. whatever’s cheapest, plus a smattering of other stuff.
One of the reviewed studies suggests L. casei may impair memory. I couldn’t find a product that didn’t have L. casei but did have at least 10^9 CFU of each other recommended strain, so if you take the L. casei/memory concern seriously your best option might be combining this and this.
For convenience, here’s a slightly edited-for-clarity version of the abstract:
38 studies (all randomized controlled trials) were included: 25 in animals and 15 in humans (2 studies were conducted in both). Most studies used Bifidobacterium (e.g., B. longum, B. breve, and B. infantis) and Lactobacillus (e.g., L. helveticus and L. rhamnosus), with doses between 10^9 and 10^10 colony-forming units for 2 weeks in animals and 4 weeks in humans.
These probiotics showed efficacy in improving psychiatric disorder-related behaviors including anxiety, depression, autism spectrum disorder (ASD), obsessive-compulsive disorder, and memory abilities, including spatial and non-spatial memory.
Because many of the basic science studies showed some efficacy of probiotics on central nervous system function, this background may guide and promote further preclinical and clinical studies. Translating animal studies to human studies has obvious limitations but also suggests possibilities. Here, we provide several suggestions for the translation of animal studies. More experimental designs with both behavioral and neuroimaging measures in healthy volunteers and patients are needed in the future.
Possibly another good example of scientists failing to use More Dakka. The mice studies all showed solid effects, but then the human studies used the same dose range (10^9 or 10^10 CFU) and only about half showed effects! I googled for negative side effects of probiotics and the Healthline result really had to stretch for anything bad. Wondering if, as much larger organisms, we should just be jacking up the dosage quite a bit.
On the other hand: half of mouse studies working in humans is an extremely good success rate. We should be quite suspicious of file-drawer effects and p-hacking.
I agree the effect is consistent enough that we should be suspicious of file drawer/p-hacking—although that’s also what you’d expect to see if the effect were in fact large—but note that they were different studies, i.e. the human studies mostly weren’t based on the non-human ones.
I was initially very concerned about this but then noticed that almost all the tested secondary endpoints were positive in the mice studies too. The human studies could plausibly still be meaningless though.
Has anyone (esp you Jim) looked into fecal transplants for this instead, in case our much longer digestive system is a problem?
In the early 1900s the Smithsonian Institution published a book each year, which mostly just described their organizational and budget updates. But they each also contained a General Appendix at the end, which seems to have served a function analogous to the modern “Edge” essays—reflections by scientists of the time on key questions of interest. For example, the 1929 book includes essays speculating about what “life” and “light” are, how insects fly, etc.
Another (unlikely, but more likely than almost all other ancient people) candidate for partial future revival: During the 79 AD eruption of Vesuvius, part of this man’s brain was vitrified.
Senator Bernie Sanders is planning to introduce legislation that would ban the construction of new AI data centers. You can find his video announcement here, and here is the transcript:
You can just do things (propose an end to the AGI death race)
Unfortunately, this policy action is not that.
It’s the kind of action that when universalized does indeed end the AGI death race! That is in an important sense proposing an end to the AGI death race.
It’s also the kind of action that’s within the Overton window and if passed moves the window.
It slows it down a bit, perhaps.
If everyone stopped building datacenters you really have made a lot of progress towards stopping the death race (and of course, the algorithm that produces banning datacenter construction would probably not stop there).
I think this is a common misconception. I’m pretty sure algorithmic progress will eventually reach a point where what currently takes a datacenter will be possible on a single machine for a slightly longer training period. If that same algorithm runs on a datacenter it would produce something superhuman, but cutting down to only single-GPU training would then not be enough to completely stop. Algo progress is a slow slog of “grad student descent”, so it likely takes quite a bit longer, and maybe it takes enough longer to figure out alignment. But it doesn’t stop the death race, it just slows it down. Actual stopping would require shredding all silicon big enough to even run the fully trained AI, which doesn’t seem to be in the cards. I’m not saying datacenter construction is good or should continue, or that this won’t buy time, but I think people are wishful-thinking about how much time it buys.
Agree qualitatively (and possible quantitatively). However, there’s a quite large knock-on effect, which is a strong bundle of signals of “AGI is bad, don’t make AGI”. These signals move in various directions between different entities, carrying various messages, but they generally push against AGI. (E.g. signaling legitimacy of the Stop position; the US signaling to other states; society signaling to would-be capabilities researchers; Congress self-signaling “we’re trying to ban this whole thing and will continue to add patches to ban dangerous stuff”; etc.)
I mean, sure, eventually. The key question is how much of algorithmic progress is downstream of hardware scaling. My sense is around 50% of it, maybe a bit more, so that if you cut scaling, progress now happens at around 1/4th of the speed, which is of course huge and makes things a lot better.
Thinking this through step by step in the framework of the AI Futures Model:
First, I’ll check what the model says, then I’ll reconstruct the reasoning behind why it predicts that.
By default, with Daniel’s parameters, Automated Coder (AC) happens in 2030 and ASI happens 1.33 years later, in 2031.
If I stop experiment and training compute growth at the start of 2027, then the model predicts Automated Coder in 2039 rather than 2030. So 4x slower in calendar time (exactly matching habyrka’s guess). It also looks to have well over a 5 year takeoff from AC to ASI as opposed to the default of 1.33 years.
I got this by plugging in this modified version of our time series to this unreleased branch of our website.
However, this is highly sensitive to the timing of the compute growth pause, because it’s a shock to the flow rather than the stock. e.g. if I instead stop growth at the start of 2029 as in this worksheet, then AC happens in Mar 2031, taking ~2.2 years instead of ~1.2, so slowing things down by <2x. It does still slow down takeoff from AC to ASI to 4 years, so by ~3x (and this is probably at least a slight underestimate because we don’t model hardware R&D automation).
Now I’ll reconstruct why this is the case using simplifications to the model (I actually did these calculations before plugging the time series things into our model).
Currently, experiment compute is growing at around 3x/year, and human labor around 2x/year. Conditional on no AGI, we’re projecting experiment compute growth to slow to around 2x/year by 2030, and human labor growth to slow to around 1.5x/year.
Figuring out the effect of removing experiment growth is a bit complicated for various reasons.
On the margin, informed by interviews/surveys, we model a ~2.5x gain in “research effort” from 10x more experiment compute, which, if applied instantaneously, would mean a 2.5x slowdown in algorithmic progress.
We estimate that a 100x in human parallel coding labor gives a ~2x in research effort on the margin. We don’t model quantity of research labor, but I expect probably the gains would be relatively small as quality matters a lot more than quantity; let’s shade up to 3x.
So naively, based on our model parameters and a simplified version of our model (a Cobb-Douglas used to locally approximate a CES), by default the growth in research effort per year from experiment compute is 2.5^log10(2-3) = ~1.3-1.55x, and from human labor is sqrt(3)^log10(1.5-2) = ~1.1-1.2x. Meaning that roughly log(1.4)/(log(1.4)+log(1.15)) = ~70% of research effort growth is coming from experiment compute.
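For anyone who wants to check the arithmetic, here’s a small sketch. The marginal gains (2.5x per 10x compute, ~3x per 100x labor) and growth rates are the ones quoted above; the Cobb-Douglas form is the same local approximation:

```python
import math

# Marginal returns quoted above: 10x experiment compute -> ~2.5x research effort;
# 100x parallel coding labor -> ~3x research effort (shaded up from 2x), so
# 10x labor -> sqrt(3)x. Under the local Cobb-Douglas approximation, an input
# growing by a factor g per year multiplies research effort per year by
# (gain per 10x) ** log10(g).

def effort_growth(gain_per_10x, input_growth):
    return gain_per_10x ** math.log10(input_growth)

compute_lo = effort_growth(2.5, 2)        # compute growing 2x/year
compute_hi = effort_growth(2.5, 3)        # compute growing 3x/year
labor_lo = effort_growth(3 ** 0.5, 1.5)   # labor growing 1.5x/year
labor_hi = effort_growth(3 ** 0.5, 2)     # labor growing 2x/year

# Share of research-effort growth attributable to experiment compute,
# using the rough midpoints ~1.4x (compute) and ~1.15x (labor):
share = math.log(1.4) / (math.log(1.4) + math.log(1.15))

print(round(compute_lo, 2), round(compute_hi, 2))  # 1.32 1.55
print(round(labor_lo, 2), round(labor_hi, 2))      # 1.1 1.18
print(round(share, 2))                             # 0.71
```

This reproduces the ~1.3-1.55x, ~1.1-1.2x, and ~70% figures from the comment.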
So to simplify from now on, let’s think about what happens if research effort growth takes a one-time shock down to a constant 30% of what it otherwise would have been.
What does this actually mean in terms of the effect on algorithmic/software progress? It means that the shock in research effort growth will eventually cause the software growth rate to be 30% of what it would have been otherwise, but it only steadily decreases toward 30% over time (intuitively, the immediate effect is 0 because you have to actually wait for the “missing new experiment compute” to take effect).
(possibly wrong) I had Claude generate roughly the trajectory of the growth rate change:
The above graph seems to give decent intuition for why the averaged-over-the-relevant-period slowdown in software progress from no more experiment compute might be about 2x (with the 2027.0 growth stoppage), and therefore the overall slowdown from no more compute might be about 4x. It looks like the average of the first 12 years might be close to 0.5.
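A toy version of that intuition (entirely my own illustrative model, not the AIFM; the 3.5-year lag constant is an arbitrary assumption): if the relative software growth rate starts at 1 (no immediate effect) and decays exponentially toward the long-run 0.3, the average over the first 12 years does come out near 0.5:

```python
import math

LONG_RUN = 0.3  # asymptotic relative growth rate (30% of counterfactual)
TAU = 3.5       # assumed lag in years for missing compute to bite (a guess)

def relative_growth(t):
    # Relative software growth rate t years after the compute stoppage:
    # starts at 1, decays exponentially toward LONG_RUN.
    return LONG_RUN + (1 - LONG_RUN) * math.exp(-t / TAU)

# Numeric average of the relative growth rate over the first 12 years:
samples = [relative_growth(t / 100) for t in range(1200)]
avg = sum(samples) / len(samples)
print(round(avg, 2))  # ~0.5
```

With these (made-up) parameters the averaged slowdown in software progress is ~2x, consistent with the eyeballed reading of the graph.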
All of the above is ignoring automation for simplicity. Taking into account automation would mean that more of the gains come from labor, so the slowdown is smaller. But on the other hand, as you get more coding labor you get more bottlenecked on experiment compute (which the Cobb-Douglas doesn’t take into account); in the AIFM, you’d eventually get hard-bottlenecked: you can have a maximum of 15x research effort gain from coding-labor increases alone. It looks like these factors and other deviations from the model might roughly cancel out in the case we’re considering.
My current default assumption is that, yes, someone will build something that obsoletes human intellectual and physical labor. In those futures, the best-case scenario is “Maybe the AIs like keeping humans as pets (and won’t breed us the way we breed pugs).” [1] And the other alternative futures go sharply downhill from there.
So I think of delay in terms of survivor curves, like someone with terminal cancer. How much time can we buy the human race? Can the children alive today get to enjoy a decent lifetime for however long we all have? So I heavily favor delay, in much the same way that I’d favor cancer remission.
A global AI halt might even buy us quite a bit of time.
If I had to bet on a specific model as liking humans and being a responsible “pet owner”, then I currently suspect we might have the best odds with a descendant of Claude. I do actually think that “enculturation” and building morally thoughtful models that like humans gives us non-zero chance of a more acceptable outcome. But I would still prefer humans to control their own destiny.
Do you expect algorithmic progress to never hit diminishing returns?
“Eventually”, sure, but I don’t think that’s operative here. If we had the ASI recipe and could study it safely for ten years, we’d find a way to implement it in a single datacenter. But discovering it in a single data center is much harder. There is actually something missing from current LLMs, there’s a part of intelligence they just don’t have, and the only thing that seems to mitigate that issue is model size, so without ever-increasing model size and analysis of their training dynamics, I think any attempts to get the missing piece are throwing darts with the lights off. (To be fair I have pretty unusual timelines compared to most of LW so maybe what’s convincing to me shouldn’t be to you.)
I agree with the general concern, but it’d be clearly a move in the right direction on that front?
With this kind of proposal I’m more worried that it could lead to a unilateral slowdown just after having animated China to be much more aggressive on AI.
agreed, I’m not saying don’t do it. I’m replying to habryka saying it proposes an end to the race.
Doesn’t the USA have enough datacenters to stay ahead of China in the race to fully human-replacing AI for quite a while, even with a unilateral hardware pause? I’m not sure of this claim, but it’s currently my impression that the compute ratio is pretty dramatic.
I expect there’s already enough silicon in place to produce overwhelmingly superhuman AI that still has idiot-savant going on; completely stopping CPU and GPU production worldwide seems to me like it’d be a timeline-extending move, but not by more than a few years. Somewhere between 1.3x and 3x.
As of June 2025 the US had 5x as much compute as China; I’d expect the gap has grown, with substantially more American than Chinese data centers coming online in the past ~9 months.
https://epoch.ai/data-insights/ai-supercomputers-performance-share-by-country
I don’t see how. If AGI/ASI is powerful, then the existing deployed compute will suddenly become more powerful when AGI happens. If that isn’t X-risk, then a few more data centers won’t change things. This only matters in long timelines where more data centers are required to get AGI. I don’t think that is the case; I think copying the mammal neocortex etc. will get us there, and that doesn’t need more compute.
This is cool! I’m sad he spends so much of his time criticising the good part (AI doing tonnes of productive labour). I say this not because I want to demand every ally agree with me on every point, but because I want to early disavow beliefs that political expediency might want me to endorse.
It seems to me a meaningfully open question whether automating all human labor will end up net benefiting humans, even assuming we survive; of course it might, but I think much more dystopian outcomes also seem plausible. Markets tend to benefit humans because the price signals we send tend to correlate with our relative needs, and hence with our welfare; I think it is not obvious that this correlation will persist once humans become unable to generate economic value.
What would AI labs do differently if this were made law? Couldn’t they build datacenters outside the US?
While I’m glad that these things are starting to be seriously discussed in the open by those in positions of power, this sounds like a blanket ban on all new datacenters? This seems too broad to me. The ideal moratorium would be on large GPU clusters, or large AI specific infrastructure, right?
On another note, I’m somewhat concerned that banning large compute infrastructure will concentrate pressure on algorithmic progress if there is no “low effort” [1] outlet of just scaling the compute. Maybe regulating the rate at which large scale compute infrastructure is constructed will still allow researchers to be “intellectually lazy” [2] by offloading progress onto compute scaling instead of innovating on algorithms.
I don’t actually think the researchers are intellectually lazy or that scaling is low effort, these just seem like the most succinct terms to express a lack of pressure to adapt in a specific direction.
Same as above.
I don’t think people are currently being intellectually lazy. They might be rationally spending effort in ways that produce less insight per compute but faster insight per month than they would if they had less compute. I do think that despite the way compute limitation would make people try harder, things are still worse than they would be with less compute. But not as much worse as it naively seems, because of what you’ve mentioned here.
It seems the pro-Trump Polymarket whale may have had a real edge after all. Wall Street Journal reports (paywalled link, screenshot) that he’s a former professional trader, who commissioned his own polls from a major polling firm using an alternate methodology—the neighbor method, i.e. asking respondents who they expect their neighbors will vote for—he thought would be less biased by preference falsification.
I didn’t bet against him, though I strongly considered it; feeling glad this morning that I didn’t.
I don’t remember anyone proposing “maybe this trader has an edge”, even though incentivising such people to trade is the mechanism by which prediction markets work. Certainly I didn’t, and in retrospect it feels like a failure not to have had ‘the multi-million dollar trader might be smart money’ as a hypothesis at all.
Why do you focus on this particular guy? Tens of thousands of traders were cumulatively betting billions of dollars in this market. All of these traders faced the same incentives.
Note that it is not enough to assume that willingness to bet more money makes a trader worth paying more attention to. You need the stronger assumption that willingness to bet n times more than each of n traders makes the single trader worth paying more attention to than all the other traders combined. I haven’t thought much about this, but the assumption seems false to me.
Because I saw a few posts discussing his trades, vs none for anyone else’s, which in turn is presumably because he moved the market by ten percentage points or so. I’m not arguing that this “should” make him so salient, but given that he was salient I stand by my sense of failure.
Mmh, if there is no reason to take that particular trader seriously, but just the mere fact that his trades were salient, I don’t see why one should experience any sense of failure whatsoever for not having paid more attention to him at the time.
Still, my main point was about the reasons for taking that particular trader seriously, not the sense of failure for not having done so, and it seems like there is no substantive disagreement there.
Knowing now that he had an edge, I feel like his execution strategy was suspect. The Polymarket prices went from 66c during his ordering back down to 57c in the 5 days before the election. He could have extracted a bit more money from the market if he had forecasted the volume correctly and traded against it proportionally.
Wow, tough crowd
On one hand, I feel a bit skeptical that some dude outperformed approximately every other pollster and analyst by having a correct inside-view belief about how existing pollsters were messing up, especially given that he won’t share the surveys. On the other hand, this sort of result is straightforwardly predicted by Inadequate Equilibria, where an entire industry had the affordance to be arbitrarily deficient in what most people would think was their primary value-add, because they had no incentive to accuracy (skin in the game), and as soon as someone with an edge could make outsized returns on it (via real-money prediction markets), they outperformed all the experts.
On net I think I’m still <50% that he had a correct belief about the size of Trump’s advantage that was justified by the evidence he had available to him, but even being directionally-correct would have been sufficient to get outsized returns a lot of the time, so at that point I’m quibbling with his bet sizing rather than the direction of the bet.
Norvid on Twitter made the apt point that we will need to see the actual private data before we can really judge. Not unusual for lucky people to backrationalize their luck as a sure win.
I can proudly say that though I disparaged the guy in private, I not once put my money where my mouth was, which means outside observers can infer that all along I secretly agreed with his analysis of the situation.
I think it can be both rational to doubt his edge and not trade on it.
Yes. https://www.lesswrong.com/posts/tDkYdyJSqe3DddtK4/alexander-gietelink-oldenziel-s-shortform?commentId=JqDaYkRyw2WSAZLDg
Arguments criticizing the FDA often seem to weirdly ignore the “F.” For all I know food safety regulations are radically overzealous too, but if so I’ve never noticed (or heard a case for) this causing notable harm.
Overall, my experience as a food consumer seems decent—food is cheap, and essentially never harms me in ways I expect regulators could feasibly prevent (e.g., by giving me food poisoning, heavy metal poisoning, etc). I think there may be harmful contaminants in food we haven’t discovered yet, but if so I mostly don’t blame the FDA for that lack of knowledge, and insofar as I do it seems an argument they’re being under-zealous.
Criticizing FDA food regulations is a niche; it is hard to criticize ‘the unseen’, especially when it’s mostly about pleasure and the FDA is crying: ‘we’re saving lives! Won’t someone think of the children? How can you disagree, just to stuff your face? Shouldn’t you be on a diet anyway?’
But if you go looking, you’ll find tons of it: pasteurized cheese and milk being a major flashpoint, as apparently the original unpasteurized versions are a lot tastier. (I’m reminded of things like beef tallow for fries or Chipotle—how do you know how good McDonald’s french fries used to taste before an overzealous crusader destroyed them if you weren’t there 30+ years ago? And are you really going to stand up and argue ‘I think that we should let people eat fries made with cow fat, because I am probably a lardass who loves fries and weighs 300 pounds, rather than listen to The Science™’?) There’s also the recent backfiring of overzealous allergy regulations, which threatens to cut off a large fraction of the entire American food supply to people with sesame & peanut allergies, due solely to the FDA. (Naturally, of course, the companies get the blame.) Similarly, I read food industry people noting that the effect of the ever-increasing burden of FDA regulations is a constant collapse of diversity, as everyone converges on a handful of safe ingredients and having to outsource to centralized food processors who can certify FDA compliance; but how would you ever see this browsing your local Walmart and looking at the colorful labels at the front? (Normal people do not spend much time reading the ingredients label and wondering why everything seems to be made out of the same handful of ingredients, starting with corn syrup.)
Those are great examples, thanks; I can totally believe there exist many such problems.
Still, I do really appreciate ~never having to worry that food from grocery stores or restaurants will acutely poison me; and similarly, not having to worry that much that pharmaceuticals are adulterated/contaminated. So overall I think I currently feel net grateful about the FDA’s purity standards, and net hateful just about their efficacy standards?
God bless Robert F. Kennedy Jr.
The ‘Food’ and the ‘Drug’ parts behave very differently. By default food products are allowed. There may be purity requirements or restaurant regulations but you don’t need to run studies or get approvals to serve an edible product or a new combination. By default drugs are banned.
I think the FDA is under-zealous about heavy metals and other contaminants, but overall it does a decent job of regulating food. The ‘drug’ side, however, is a nightmare. The two situations are de facto handled in very, very different ways, so it’s not obvious why an argument would cover both of them.
Have you ever visited a country without zealous food safety regulations? I think it’s one of those things where it’s hard to realize what the alternative looks like (plentiful, cheap, and delicious street food available wherever people gather, so that you no longer have to plan around making sure you either bring food or go somewhere with restaurants, and it is viable for individuals to exist without needing a kitchen of their own).
What countries are you imagining? I know some countries have more street food, but from what I anecdotally hear most also have far more food poisoning/contamination issues. I’m not sure what the optimal tradeoff here looks like, and I could easily believe it’s closer to the norms in e.g. Southeast Asia than the U.S. But it at least feels much less obvious to me than that drug regulations are overzealous.
(Also note that much regulation of things like food trucks is done by cities/states, not the FDA).
Mexico and Chile are the most salient examples to me. But also I’ve only ever gotten food poisoning once in my life despite frequent risky food behavior.
Strong agree that the magnitude of the overzealousness is much higher for drugs than for food.
Basically, food is a domain with highly negative tail effects but not highly positive ones (conditional on eating food at all), so it’s an area where you can afford to be restrictive. This is notably not the case for medicine, where both high negative and high positive tail effects exist, so you need to be more lenient in your standards.
I think one exception may be salmonellosis. In the US, there are 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths every year. To compare, in the EU, selling contaminated chicken products is illegal, and when hundreds of people get sick, it becomes a scandal.
You have to refrigerate eggs in the US while you don’t have to in the EU, because of FDA regulations about washing the eggs.
There’s a strain of pro-sugar and anti-fat policies in regards to food for which the FDA shares some of the blame. Through that they might have contributed to the obesity crisis.
It’s possible that the regulations on farmers markets reduce the amount of farmers markets (which might be central to lower obesity levels in France and better health).
When asked to compare French regulations for farmers markets with the US regulations, Claude told me:
Overall, while both countries prioritize food safety, the French system tends to be more flexible for small producers and traditional methods, whereas the US system is more uniformly applied regardless of scale. The French approach often allows for more diverse and traditional products at markets, while the US system provides more consistent safety standards across diverse regions.
You likely could argue that the FDA shares part of the blame for the obesity epidemic by setting bad incentives for low-fat and high-fructose-corn-syrup foods while at the same time making it harder for farmers markets to actually sell healthy food.
Apart from that you do have people who oppose forced labeling of GMO products which is also an FDA rule.
Do you see microplastics as “harmful contaminants that haven’t been discovered yet”? It would be possible to have regulations that limit the amount of microplastic in plastic drinking bottles but those currently don’t exist.
I made Twitter lists of DeepMind and OpenAI researchers, and find them useful for tracking team zeitgeists.
Apparently Otzi the Iceman still has a significant amount of brain tissue. Conceivably memories are preserved?
I found LinkedIn’s background breakdown of DeepMind employees interesting; fewer neuroscience backgrounds than I would have expected.