Research Associate at the Transformative Futures Institute, formerly of the MTAIR project and the Center on Long-term Risk; graduate researcher at King's and AI MSc at Edinburgh. Interested in philosophy, longtermism and AI alignment.
Sammy Martin
The ARC evals, in which a GPT-4-based agent given help and a general directive to replicate was able to figure out that it ought to lie to a TaskRabbit worker, are an example of it figuring out a self-preservation/power-seeking subgoal, which is on the road to general self-preservation. But they don't demonstrate an AI spontaneously developing self-preservation or power-seeking as an instrumental subgoal to something that superficially has nothing to do with gaining power or replicating.
Of course, we have some real-world examples of specification gaming like those you linked in your answer: those have always existed, and we see more 'intelligent' examples like AIs convinced of false facts trying to convince people they're true.
There's supposedly some evidence here that we see power-seeking instrumental subgoals developing spontaneously, but how spontaneous this actually was is debatable, so I'd call that evidence ambiguous since it wasn't in the wild.
>APS is less understood and poorly forecasted compared to AGI.
I should clarify that I was talking about the definition used by forecasts like the Direct Approach methodology and/or the definition given in the Metaculus forecast. The latter is, roughly speaking, capability sufficient to pass a hard adversarial Turing test plus human-like capabilities on enough intellectual tasks, as measured by certain tests. This is something that can plausibly be upper-bounded by the Direct Approach methodology, which aims to predict when an AI could achieve negligible error in predicting what a human expert would say over a specific time horizon. So this forecast is essentially a forecast of 'human-expert-writer-simulator AI', and that is the definition used in public elicitations like the Metaculus forecasts.
However, I agree with you that while that is how the term is defined in some of the sources I cite, it's not what the word denotes (just generality, which e.g. GPT-4 plausibly has in some weak sense), and you also don't get from being able to simulate the writing of any human expert to takeover risk without making many additional assumptions.
I guess it is down to Tyler's personal opinion, but would he accept asking IR and defense policy experts about the chance of a war with China as an acceptable strategy, or would he insist on mathematical models of their behaviors and responses? To me it's clearly the wrong tool, just as in the climate impacts literature we can't build economic models of e.g. how governments might respond to waves of climate refugees, but we can consult experts on it.
I recently held a workshop with PIBBSS fellows on the MTAIR model and thought some points from the overall discussion were valuable:
The discussants went over various scenarios related to AI takeover, including a superficially aligned system being delegated lots of power and gaining resources by entirely legitimate means, a WFLL2-like automation failure, and swift foom takeover. Some possibilities involved a more covert, silent coup where most of the work was done through manipulation and economic pressure. The concept of “$1T damage” as an intermediate stage to takeover appeared to be an unnatural fit with some of these diverse scenarios. There was some mention of whether mitigation or defensive spending should be considered as part of that $1T figure.
Alignment Difficulty and later steps merge many scenarios
The discussants interpreted “alignment is hard” (step 3) as implying that alignment is sufficiently hard that (given that APS is built), at least one APS is misaligned somewhere, and also that there’s some reasonable probability that any given deployed APS is unaligned. This is the best way of making the whole statement deductively valid.
However, proposition 3 being true doesn’t preclude the existence of other aligned APS AI (hard alignment and at least one unaligned APS might mean that there are leading conscientious aligned APS projects but unaligned reckless competitors). This makes discussion of the subsequent questions harder, as we have to condition on there possibly being aligned APS present as well which might reduce the risk of takeover.
This means that when assessing proposition 4, we have to condition on several kinds of world: some where aligned APS has already been deployed and used for defense, some where there have been warning shots and strong responses without APS, some where misaligned APS emerges out of nowhere and FOOMs too quickly for any response, and some with a slow takeoff where nonetheless every system is misaligned and there is a WFLL2-like takeover attempt. We then have to add up the chance of large-scale damage across all of these scenarios, weighted by their probability, which makes coming to an overall answer to 4 and 5 challenging.
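To make the structure of that calculation concrete, here is a minimal sketch; the scenario names follow the discussion above, but the numbers are purely illustrative placeholders, not estimates from the workshop or the model:

```python
# A minimal sketch of the weighted sum over scenario worlds described above.
# All probabilities are illustrative placeholders, not workshop estimates.
scenarios = {
    # name: (P(world | misaligned APS deployed), P(large-scale damage | world))
    "aligned APS already deployed for defense":     (0.30, 0.15),
    "warning shots and strong response, no APS":    (0.30, 0.30),
    "sudden FOOM, no time for any response":        (0.15, 0.90),
    "slow takeoff, all systems misaligned (WFLL2)": (0.25, 0.60),
}

p_damage = sum(p_world * p_dmg for p_world, p_dmg in scenarios.values())
print(f"P(large-scale damage | misaligned APS deployed) ~= {p_damage:.2f}")  # 0.42
```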
Definitions are value-laden and don’t overlap: TAI, AGI, APS
We differentiated between Transformative AI (TAI), defined by Karnofsky, Barnett and Cotra entirely by its impact on the world, which can either be highly destructive or drive very rapid economic growth; General AI (AGI), defined through a variety of benchmarks including passing hard adversarial Turing tests and human-like capabilities on enough intellectual tasks; and APS, which focuses on long-term planning and human-like abilities only on takeover-relevant tasks. We also mentioned Paul Christiano's notion of the relevant metric being AI 'as economically impactful as a simulation of any human expert', which technically blends the definitions of AGI and TAI (since it doesn't strictly require very fast growth but strongly implies it). Researchers disagree quite a lot even on which of these are harder: Daniel Kokotajlo has argued that APS likely comes before TAI and maybe even before (the Matthew Barnett definition of) AGI, while e.g. Barnett thinks that TAI comes after AGI, with APS AI somewhere in the middle (and possibly coincident with TAI).
In particular, some definitions of ‘AGI’, i.e. human-level performance on a wide range of tasks, could be much less than what is required for APS depending on what the specified task range is. If the human-level performance is only on selections of tasks that aren’t useful for outcompeting humans strategically (which could still be very many tasks, for example, human-level performance on everything that requires under a minute of thinking), the ‘AGI system’ could almost entirely lack the capabilities associated with APS. However, most of the estimates that could be used in a timelines estimate will revolve around AGI predictions (since they will be estimates of performance or accuracy benchmarks), which we risk anchoring on if we try to adjust them to predict the different milestones of APS.
In general it is challenging to use the probabilities from one metric like TAI to inform other predictions like APS, because each definition includes many assumptions about things that don't have much to do with AI progress (like how qualitatively powerful intelligence is in the real world, what capabilities are needed for takeover, what bottlenecks there are to economic or research automation, etc.). In other words, APS and TAI are value-laden terms that include many assumptions about the strategic situation with respect to AI takeover, the world economy and likely responses.
APS is less understood and more poorly forecasted compared to AGI. Discussants felt the current models for AGI can't be easily adapted for APS timelines or probabilities. APS carries much of the weight in the assessment due to its specific properties: many skeptics might argue that even if AGI is built, systems that actually meet the definition of APS might not be.
Alignment and Deployment Decisions
Several discussants suggested splitting the model’s third proposition into two separate components: one focusing on the likelihood of building misaligned APS systems (3a) and the other on the difficulty of creating aligned ones (3b). This would allow a more nuanced understanding of how alignment difficulties influence deployment decisions. They also emphasized that detection of misalignment would impact deployment, which wasn’t sufficiently clarified in the original model.
Advanced Capabilities
There was a consensus that ‘advanced capabilities’ as a term is too vague. The discussants appreciated the attempt to narrow it down to strategic awareness and advanced planning but suggested breaking it down even further into more measurable skills, like hacking ability, economic manipulation, or propaganda dissemination. There are, however, disagreements regarding which capabilities are most critical (which can be seen as further disagreements about the difficulty of APS relative to AGI).
If strategic awareness comes before advanced planning, we might see AI systems capable of manipulating people, but not in ways that greatly exceed human manipulative abilities. As a result, these manipulations could potentially be detected and mitigated and even serve as warning signs that lower total risk. On the other hand, if advanced capabilities develop before strategic awareness or advanced planning, we could encounter AI systems that may not fully understand the world or their position in it, nor possess the ability to plan effectively. Nevertheless, these systems might still be capable of taking single, highly dangerous actions, such as designing and releasing a bioweapon.
Outside View & Futurism Reliability
We didn’t cover the outside view considerations extensively, but various issues under the “accuracy of futurism” umbrella arose which weren’t specifically mentioned.
The fact that markets don’t seem to have reacted as if Transformative AI is a near-term prospect, and the lack of wide scale scrutiny and robust engagement with risk arguments (especially those around alignment difficulty), were highlighted as reasons to doubt this kind of projection further.
The Fermi Paradox implies a form of X-risk that is self-destructive and not that compatible with AI takeover worries, while market interest rates also push the probability of such risks downward. The discussants recommended placing more weight on outside priors than we did in the default setting for the model, suggesting a 1:1 ratio compared to the model’s internal estimations.
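As a concrete illustration of what a 1:1 weighting could mean in practice, here is a minimal sketch; the numbers are placeholders, and the choice of pooling rule (averaging the probabilities themselves versus averaging in log-odds space) is my assumption rather than something the discussants specified:

```python
import math

# Minimal sketch of a 1:1 weighting between the model's inside-view output and
# an outside-view prior. Numbers and the choice of pooling rule are placeholders.

def linear_pool(p_inside: float, p_outside: float, w_inside: float = 0.5) -> float:
    """Weighted average of the probabilities themselves (1:1 by default)."""
    return w_inside * p_inside + (1 - w_inside) * p_outside

def log_odds_pool(p_inside: float, p_outside: float, w_inside: float = 0.5) -> float:
    """Weighted average in log-odds space, an alternative aggregation rule."""
    log_odds = lambda p: math.log(p / (1 - p))
    pooled = w_inside * log_odds(p_inside) + (1 - w_inside) * log_odds(p_outside)
    return 1 / (1 + math.exp(-pooled))

p_inside, p_outside = 0.20, 0.02  # placeholder values, not the model's outputs
print(linear_pool(p_inside, p_outside))    # 0.11
print(log_odds_pool(p_inside, p_outside))  # ~0.067
```

Note that the log-odds pool lands much closer to the outside-view prior, so the choice of pooling rule can matter almost as much as the 1:1 weight itself.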
Discussants also agreed with the need to balance pessimistic survival-is-conjunctive views and optimistic survival-is-disjunctive views, arguing that the Carlsmith model is biased towards optimism and survival being disjunctive, but that the correct solution is not to simply switch to a pessimism-biased survival-is-conjunctive model in response.
Difficult to separate takeover from structural risk
There's a tendency to focus exclusively on the risks associated with misaligned APS systems seeking power, which can introduce a bias towards survival being predicated solely on avoiding APS takeover. However, this overlooks other existential risk scenarios that are more structural. There are potential situations without agentic power-seeking behavior but characterized by rapid change, which for less causally clear reasons (technological advancements or societal shifts that may not have a single 'bad actor') could still spiral into existential catastrophe. This post describes some of these scenarios in more detail.
This is a serious problem, but it is under active investigation at the moment, and the binary of regulation or pivotal act is a false dichotomy. Most approaches that I’ve heard of rely on some combination of positively transformative AI tech (basically lots of TAI technologies that reduce risks bit by bit, overall adding up to an equivalent of a pivotal act) and regulation to give time for the technologies to be used to strengthen the regulatory regime in various ways or improve the balance of defense over offense, until eventually we transition to a totally secure future: though of course this assumes at least (somewhat) slow takeoff.
You can see these interventions as acting on the conditional probabilities 4) and 5) in our model by driving down the chance that assuming misaligned APS is deployed, it can cause large-scale disasters.
4) Misaligned APS systems will be capable of causing a large global catastrophe upon deployment,
5) The human response to misaligned APS systems causing such a catastrophe will not be sufficient to prevent it from taking over completely,
6) Having taken over, the misaligned APS system will destroy or severely curtail the potential of humanity.

This hasn't been laid out in lots of realistic detail yet, not least because most AI governance people are currently focused on near-term actions like making sure the regulations are actually effective, because that's the most urgent task. But this doesn't reflect a belief that regulations alone are enough to keep us safe indefinitely.
Holden Karnofsky has written on this problem extensively,
'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done.
I think that MTAIR plausibly is a model of the 'process of how the world is supposed to end', in the sense that it runs through causal steps where each individual thing is conditioned on the previous thing (APS is developed, APS is misaligned, given misalignment it causes damage on deployment, and given that damage the situation is unrecoverable), and for some of those inputs your probabilities and uncertainty distribution could themselves come from a detailed causal model (e.g. you can look at the Direct Approach for the first two questions).
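For illustration, the skeleton of that causal-chain structure can be written out directly; a minimal sketch with placeholder probabilities (these are not MTAIR's actual inputs or outputs):

```python
# Skeleton of the causal chain: the headline risk is the product of the
# conditional probabilities at each step. All numbers are placeholders.
steps = [
    ("APS AI is developed",                                    0.70),
    ("APS is misaligned | APS developed",                      0.40),
    ("Causes large-scale damage on deployment | misaligned",   0.50),
    ("Damage proves unrecoverable (takeover) | damage occurs", 0.50),
]

cumulative = 1.0
for description, conditional_p in steps:
    cumulative *= conditional_p
    print(f"{description}: {conditional_p:.2f} -> cumulative {cumulative:.3f}")
# Final cumulative value (~0.070) is the headline P(unrecoverable takeover).
```

In MTAIR the interesting work is in how the input to each step is derived (from forecasts, elicitation, or sub-models), not in the multiplication itself.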
For the later questions, like e.g. what's the probability that an unaligned APS can inflict large disasters given that it is deployed, we can enumerate ways that it could happen in detail, but to assess their probability you'd need to do a risk assessment with experts, not produce a mathematical model.
E.g. you wouldn't have a "mathematical model" of how likely a US-China war over Taiwan is; you'd do wargaming and ask experts or maybe superforecasters. Similarly, for the example that he gave, COVID, part of it was a straightforward SEIR model and part was more sociological, about how the public response works (though of course a lot of the "behavioral science" then turned out to be wrong!).
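For reference, the SEIR part is exactly the kind of thing you can write down as an explicit model; a minimal sketch with illustrative parameters (my numbers, not a fit to any real epidemic):

```python
# Forward-Euler simulation of the standard SEIR compartmental model.
# Parameters are illustrative, not a fit to COVID; the sociological part of a
# forecast (how behavior and policy change beta over time) has no comparably
# clean model, which is the point being made above.
def seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, dt=1.0):
    """One Euler step of the SEIR ODEs, with the population normalized to 1."""
    new_exposed    = beta * s * i * dt   # susceptible -> exposed
    new_infectious = sigma * e * dt      # exposed -> infectious
    new_recovered  = gamma * i * dt      # infectious -> recovered
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

state = (0.99, 0.01, 0.0, 0.0)  # initial (S, E, I, R) fractions
for _ in range(180):            # simulate 180 days
    state = seir_step(*state)
print("recovered fraction after 180 days:", round(state[3], 2))
```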
So a correct 'mathematical model of the process', if we're being fair, would use explicit technical models for technical questions, and for sociological/political/wargaming questions you'd use other methods. I don't think he'd say that there's no 'mathematical model' of nuclear war because, while we have mathematical models of how fission and fusion work, we don't have any for how likely it is that e.g. Iran's leadership decides to start building nuclear weapons.
I think Tyler Cowen would accept that as sufficiently rigorous in that domain, and I believe that answers to the earlier, purely technical questions can be obtained from explicit models. One addition that could strengthen the model is to explicitly spell out different scenarios for each step (e.g. APS causes damage via autonomous weapons, economic disruption, etc.). But the core framework seems sufficient as is, and those concerns have also been explained in other places.
What do you think?
A Model-based Approach to AI Existential Risk
The alignment difficulty scale is based on this post.
I really like this post and think it's a useful addendum to my own alignment difficulty scale (making it 2D, essentially). I think I was conceptualizing my scale as running along the diagonal line you provide from GPT-4 to sovereign AI, but on reflection your way of doing it is better.
In my original post I suggested that the 'target' level of capability we care about is the capability level needed to build positively transformative AI (pTAI), which is essentially the 'minimal aligned AGI that can do a pivotal act' notion but more agnostic about whether it will be a unitary agentic system or many systems deployed over a period.
I think that what most people talk about when they talk about alignment difficulty isn’t how hard the problem ‘ultimately’ is but rather how hard the problem is that we need to solve, with disagreements also being about e.g. how capable an AI you need for various pivotal/positively transformative acts.
I didn’t split these up because I think that in a lot of people’s minds the two run together in a fairly unprincipled way, but if we want a scale that corresponds to real things in the world having a 2D chart like this is better.
Update
This helpful article by Holden Karnofsky also describes an increasing scale of alignment difficulty, although it’s focused on a narrower range of the scale than mine (his scale covers 4-7) and is a bit more detailed about the underlying causes of the misalignment. Here’s how my scale relates to his:
The “playing the training game” threat model, where systems behave deceptively only to optimize in-episode reward, corresponds to an alignment difficulty level of 4 or higher. This is because scalable oversight without interpretability tools (level 4) should be sufficient to detect and address this failure mode. The AI may pretend to be helpful during training episodes, but oversight exposing it to new situations will reveal its tendency toward deception.
(Situationally aware) Deception by default corresponds to a difficulty level of 6. If misaligned AIs form complex inner goals and engage in long-term deception, then scalable oversight alone will not catch intentionally deceptive systems that can maintain consistent deceitful behavior. Only interpretability tools used as part of the oversight process (level 6) give us the ability to look inside the system and identify deceptive thought patterns and tendencies.
Finally, the gradient hacking threat model, where AIs actively manipulate their training to prevent alignment, represents an alignment difficulty of 7 or higher. Even interpretability-based oversight can be defeated by sufficiently sophisticated systems that alter their internals to dodge detection.
I think that, on the categorization I provided,
‘Playing the training game’ at all corresponds to an alignment difficulty level of 4, because better-than-human behavioral feedback and oversight can reveal it and you don't need interpretability.
(Situationally aware) Deception by default corresponds to a difficulty level of 6, because if the system is sufficiently capable no behavioral feedback will work and you need interpretability-based oversight.
Gradient hacking by default corresponds to a difficulty level of 7, because the system will also fight interpretability-based oversight and you need to think of something clever, probably through new research at ‘crunch time’.
You’re right, I’ve reread the section and that was a slight misunderstanding on my part.
Even so I still think it falls at a 7 on my scale as it’s a way of experimentally validating oversight processes that gives you some evidence about how they’ll work in unseen situations.
In the sense that there has to be an analogy between low and high capabilities somewhere, even if at the meta level.
This method lets you catch dangerous models that can break oversight processes for the same fundamental reasons as less dangerous models, not just for the same inputs.
Excellent! In particular, it seems like oversight techniques which can pass tests like these could work in worlds where alignment is very difficult, so long as AI progress doesn’t involve a discontinuity so huge that local validity tells you nothing useful (such that there are no analogies between low and high capability regimes).
I’d say this corresponds to 7 on my alignment difficulty table.
There's a trollish answer to this point (that I somewhat agree with), which is to just say: okay, let's adopt moral uncertainty over all of the philosophically difficult premises too, so let's say there's only a 1% chance that raw intensity of pain matters and a 99% chance that you need to be self-reflective in certain ways to have qualia and suffer in a way that matters morally, or that you should treat it as scaling with cortical neurons, or that only humans matter.
...and probably the math still works out very unfavorably.
I say trollish because a decision procedure like this strikes me as likely to swamp and overwhelm you with way too many different considerations pointing in all sorts of crazy directions, and to be just generally unworkable, so I feel like something has to be going wrong here.
Still, I do feel like the fact that the answer is non-obvious in this way and does rely on philosophical reflection means you can't draw many deep, abiding conclusions about human empathy or the "worthiness" of human civilization (whatever that really means) from how we treat fish.
Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.
The core objectives for the Forum are:
Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
This seems overall very good at first glance, and then seems much better once I realized that Meta is not on the list. There’s nothing here that I’d call substantial capabilities acceleration (i.e. attempts to collaborate on building larger and larger foundation models, though some of this could be construed as making foundation models more useful for specific tasks). Sharing safety-capabilities research like better oversight or CAI techniques is plausibly strongly net positive even if the techniques don’t scale indefinitely. By the same logic, while this by itself is nowhere near sufficient to get us AI existential safety if alignment is very hard (and could increase complacency), it’s still a big step in the right direction.
adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
The mention of combating cyber threats is also a step towards explicit pTAI.
BUT, crucially, because Meta is frozen out we know both that this partnership isn't toothless and that it represents a commitment not to do the most risky and antisocial things Meta presumably doesn't want to give up, and the fact that they're the only major US AI company not to join will be horrible PR for them as well.
Once I am caught up I intend to get my full Barbieheimer on some time next week, whether or not I do one right after the other. I’ll respond after. Both halves matter – remember that you need something to protect.
That’s why it has to be Oppenheimer first, then Barbie. :)
When I look at the report, I do not see any questions about 2100 that are more 'normal' such as the size of the economy, or population growth, other than the global temperature, which is expected to be actually unchanged from AGI that is 75% likely to arrive by then. So AGI not only isn't going to vent the atmosphere and boil the oceans or create a Dyson sphere, it also isn't going to design us superior power plants or forms of carbon capture or safe geoengineering. This is a sterile AGI.
This doesn’t feel like much of a slam dunk to me. If you think very transformative AI will be highly distributed, safe by default (i.e. 1-3 on the table) and arise on the slowest end of what seems possible, then maybe we don’t coordinate to radically fix the climate and we just use TAI to adapt well individually, decarbonize and get fusion power and spaceships but don’t fix the environment or melt the earth and just kind of leave it be because we can’t coordinate well enough to agree on a solution. Honestly that seems not all that unlikely, assuming alignment, slow takeoff and a mediocre outcome.
If they’d asked about GDP and they’d just regurgitated the numbers given by the business as usual UN forecast after just being queried about AGI, then it would be a slam dunk that they’re not thinking it through (unless they said something very compelling!). But to me while parts of their reasoning feel hard to follow there’s nothing clearly crazy.
The view that the Superforecasters take seems to be something like "I know all these benchmarks seem to imply we can't be more than a low number of decades off powerful AI and these arguments and experiments imply super-intelligence should be soon after and could be unaligned, but I don't care, it all leads to an insane conclusion, so that just means the benchmarks are bullshit, or that one of the 'less likely' ways the arguments could be wrong is correct." (Note that they didn't disagree on the actual forecasts of what the benchmark scores would be, only their meaning!)
One thing I can say is that it very much reminds me of Da Shi in the novel The Three-Body Problem (who—and I know this is fictional evidence—ended up being entirely right in this interaction that the supposed 'miracle' of the CMB flickering was a piece of trickery):
“You think that’s not enough for me to worry about? You think I’ve got the energy to gaze at stars and philosophize?”
“You’re right. All right, drink up!”
“But, I did indeed invent an ultimate rule.”
“Tell me.”
“Anything sufficiently weird must be fishy.”
“What… what kind of crappy rule is that?”
“I’m saying that there’s always someone behind things that don’t seem to have an explanation.”
“If you had even basic knowledge of science, you’d know it’s impossible for any force to accomplish the things I experienced. Especially that last one. To manipulate things at the scale of the universe—not only can you not explain it with our current science, I couldn’t even imagine how to explain it outside of science. It’s more than supernatural. It’s super-I-don’t-know-what....”
“I’m telling you, that’s bullshit. I’ve seen plenty of weird things.”
“Then tell me what I should do next.”
“Keep on drinking. And then sleep.”
I think that before this announcement I'd have said that OpenAI was at around a 2.5 and Anthropic around a 3 in terms of what they've actually applied to existing models (which imo is fine for now; I think that doing more to things at GPT-4 capability levels is mostly wasted effort in terms of current safety impacts). Prior to the superalignment announcement I'd have said OpenAI and Anthropic were both aiming at a 5, i.e. oversight with research assistance, and DeepMind's stated plan was the best at a 6.5 (involving lots of interpretability and some experiments). Now OpenAI is also aiming at a 6.5, and Anthropic seems to be the laggard, still at a 5, unless I've missed something.
However, the best currently feasible plan is still slightly better than either. I think e.g. very close integration of the deception and capability evals from the ARC and Apollo Research teams into an experimental workflow isn't in either plan and should be, and would bump either up to a 7.
I don’t see how anyone can have high justifiable confidence that it’s zero instead of epsilon, given our general state of knowledge/confusion about philosophy of mind/consciousness, axiology, metaethics, metaphilosophy.
I tend to agree with Zvi’s conclusion although I also agree with you that I don’t know that it’s definitely zero. I think it’s unlikely (subjectively like under a percent) that the real truth about axiology says that insects in bliss are an absolute good, but I can’t rule it out like I can rule out winning the lottery because no-one can trust reasoning in this domain that much.
What I'd say is that, in general, in 'weird' domains (AI strategy, longtermist prioritization, metaethics), because the stakes are large and the questions so uncertain, you run into a really large number of considerations that are "unlikely, but not really unlikely enough to honestly call it a Pascal's mugging": things you'd subjectively say are under 1% likely but over one in a million or so. I think the correct response in these uncertain domains is to mostly just ignore them, as you'd ignore things that are under one in a billion in a more certain and clear domain like construction engineering.
The taskforce represents a startup government mindset that makes me optimistic
I would say it's not just a startup government mindset in the abstract but rather an attempt to repeat a specific, pre-existing, highly successful example of startup government: the UK's COVID vaccine taskforce, which was name-checked in the original Foundation Model Taskforce announcement.
That was also a fast-moving attempt to solve a novel problem that regular scientific institutions were doing badly at, and it substantially beat expectations. It was run under an administration that has a lot of overlap with the current one, the major exceptions being a more stable and reasonable PM at the top (Sunak, not Boris) and no Dominic Cummings involved.
This is plausibly true for some solutions this research could produce like e.g. some new method of soft optimization, but might not be in all cases.
For levels 4-6 especially, the pTAI that's capable of e.g. automating alignment research or substantially reducing the risks of unaligned TAI might lack some of the expected 'general intelligence' of AIs post-SLT, and be too unintelligent for techniques that rely on it having complete strategic awareness, self-reflection, a consistent decision theory, the ability to self-improve, or other post-SLT characteristics.
One (unrealistic) example: if we have a technique for fully loading the human CEV into a superintelligence, ready to go, that works for levels 8 or 9, it may well not help at all with improving scalable oversight of non-superintelligent pTAI that is incapable of representing the full human value function.
Roon also lays down the beats.
For those who missed the reference