Four views of AI automation: model, researcher, lab, economy
Every serious AI lab wants to automate itself. I believe this sentence holds predictive power over AI timelines and over all other predictions about the future. In particular, I believe that taking the AI-lab-centric view is the right way to think about automation.
In this post, I want to present the different levels of abstraction at which AI automation can be analyzed:
Model: The model is the thing being optimized by the lab. The right measure of acceleration is how much the model can do autonomously on tasks of a given type; ideally, research on future versions of itself. Example metrics: the METR time-horizons study and OpenAI’s MLE-bench.
Researcher: The researcher uses tools and builds automations to make the research process faster. The right measure of acceleration is the researcher’s productivity on their core tasks. (Example: the METR uplift study.)
AI lab: The lab trains models and hires researchers to become faster; allocates resources to bottlenecks; builds capabilities for external purposes to raise money and earn revenue; and splits compute between external inference, internal inference, and training. (There are no public studies.)
Economy: Researchers and practitioners share models, automations, and ideas; the labs earn money; and the overall pace of progress increases, or at least keeps pace if the technical problems become harder to solve. (GDP proxies, automation studies.)
All views except the lab view have important drawbacks:
Model: as of 2025, a model’s ability to do AI R&D does not imply that much about the capabilities of future models, because many other things influence model capabilities; it is not yet a self-reinforcing loop.
Researcher: again, a researcher’s work is not self-reinforcing; their productivity goes into general model capabilities, which only rarely feed back into their own future productivity as a first-order effect. Researchers do work on their own automation, but that is a small fraction of their time.
Economy: society-wide metrics depend heavily on capabilities that do not directly accelerate model R&D. Of course, higher total productivity does accelerate the compute buildup and hence AI progress at large, but this is a second-order effect. Additionally, society is governed by democratic and other slow-moving institutions that have a hard time optimizing for future productivity.
The AI lab is the unit with the most power over its own level of acceleration, because:
Labs know where their bottlenecks are and can decide to train models that complement their researchers;
Labs can record researcher actions and train models to perform those actions;
Improvements that a lab makes to its own productivity translate very directly into its future ability to make more such improvements (a toy sketch after this list makes the compounding explicit);
Labs benefit from scale (and can therefore afford the mental cycles to allocate resources properly towards productivity improvements);
Labs benefit from power concentration (and can simply decide to redirect resources towards productivity improvements).
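To make the compounding point above concrete, here is a deliberately crude toy model; it is a sketch with made-up parameters, not a claim about any real lab. The only thing it illustrates is the difference between gains that feed back into the rate of improvement (the lab view) and gains that do not (the researcher view).

```python
# Toy model, purely illustrative: all parameters are made up.
# It contrasts a productivity stream that feeds back into itself (lab view)
# with one whose gains do not compound (researcher view).

def lab_productivity(years: int, reinvest: float = 0.2, gain_per_unit: float = 0.5) -> float:
    """Each year the lab reinvests a fraction of its output into productivity
    improvements, and those improvements raise next year's output:
    a first-order feedback loop, so productivity compounds."""
    productivity = 1.0
    for _ in range(years):
        productivity *= 1 + reinvest * gain_per_unit
    return productivity

def researcher_productivity(years: int, uplift: float = 0.1) -> float:
    """The researcher gets a fixed uplift from better tools each year, but the
    gains do not feed back into the rate of improvement, so growth is linear."""
    return 1.0 + uplift * years

if __name__ == "__main__":
    for years in (1, 5, 10, 20):
        print(years, round(lab_productivity(years), 2), round(researcher_productivity(years), 2))
```

The exact numbers mean nothing; the point is that first-order feedback compounds, while feedback that is absent or second-order does not.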
The concepts of “AI lab” and “model” will get closer as time goes on
In the future, a larger and larger fraction of decisions inside the lab will be taken by the model. This is driven by two separate processes:
Imitation learning approaches to automation: existing activities done by people get optimized away and handed to the model.
Reinforcement learning on new tasks: activities become possible that were never actually done by people in the first place.
As we get closer to certain notions of AI self-improvement, the model will take actions to improve itself. For instance, in the end, most actions of Anthropic, internal or external, will be taken by Claude. In an efficient world where AI labs aim to automate AI R&D, the “researcher” view goes away, and the metrics tracking the “model” and “AI lab” views start tracking the same thing.
On the apocalyptic residual, or: the secret third way to save the world.
Bostrom’s Vulnerable World Hypothesis says our world is vulnerable. The most commonly discussed threat model is his “Type-1 vulnerabilities”: destructive technologies such as bioengineered pandemics.
There are three conditions that must all hold for our world to be destroyed in this way:
The dangerous technology exists and is available to many people;
There are no defensive technologies to protect us;
A number of people want to destroy the world; we call those people the apocalyptic residual.
If all three are satisfied at any point in time, we die.
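For readers who prefer it compact, the same condition can be written symbolically; the predicates below are my own shorthand, not Bostrom’s notation.

```latex
% My notation, not Bostrom's:
% T(t): the dangerous technology is widely available at time t
% D(t): no adequate defenses exist at time t
% A(t): the apocalyptic residual is non-empty at time t
\[
  \bigl(\exists\, t :\; T(t) \wedge D(t) \wedge A(t)\bigr) \;\Longrightarrow\; \mathrm{doom}
\]
```

Because the antecedent is a conjunction, keeping any single factor false at all times is enough to block it; keeping A(t) false is the neglected third lever.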
Most work on AI safety has been focused on (1) and (2). To be precise, here is a non-exhaustive list of efforts towards mitigating world vulnerability, focused on biorisk:
Control the availability of dangerous technology: Labs have been tracking offensive biorisk capabilities (see e.g. GDM Frontier Safety, OpenAI Preparedness, Anthropic Frontier Red Team) and developing model safeguards. There is also work on filtering pretraining data and unlearning to remove dangerous capabilities from LLMs. Regarding open-weight LLMs, there is work on biorisk capability elicitation and tamper-resistant safeguards.
Build defenses against dangerous technology: Vitalik’s d/acc and Michael Nielsen’s Notes on differential technological development have inspired a wave of investment in defensive technology. On the biorisk side, SecureBio exists; Open Phil has been funding work on biosecurity preparedness for a long time, and in 2025 AI labs started funding biorisk-prevention startups. To be clear, if we face an engineered pandemic before 2030, the infrastructure developed by the mRNA and Big Pharma companies since 2010 likely dwarfs the AI-lab-led and def/acc efforts in importance.
Reduce the apocalyptic residual: ?????
I cannot think of a single person or paper that has recently worked on managing the apocalyptic residual (the number of people who want to destroy the world and have enough agency to try).
Bostrom calls this “preference modification” and argues it is infeasible, describing it in the VWH paper as “radically re-engineering human nature on a fully global scale.”
That seems unrealistic, until we realize that we are about to do totally realistic things such as “automate all remote work” on the same timeframe; that technology already is re-engineering human nature on a global scale; and that many people’s lives are increasingly a side quest to their main task of being fed video content from variously sized screens. Status-disaffected men used to go to war or become cult leaders or something; soon they will all waste their lives on video games, their attention on short-form video feeds, their salaries on Robinhood parlay bets or parasocial relationships with OnlyFans models, living their lives vicariously through the Content they consume and the egregores that control their brains. If the forces of technocapital can coordinate to do anything, they can coordinate on preference modification of every monetizable eyeball they can reach, to make those lives as inane, as inconsequential, and as glued to the screen as possible.
Now, of course, the timelines of preference modification and of dangerous technology might not match. Market forces might drain the agency of only a fraction of the relevant population by the critical period. But it’s worth thinking about the ways in which the impact of AGI actually expands society’s toolset for fighting x-risk.