Some thoughts on what would make me endorse an AGI lab

I’ve been feeling more positive about “the idea of Anthropic” lately, as distinct from the actual company of Anthropic.

An argument for a safety-focused, science-focused commercial frontier scaling lab

I largely buy the old school LessWrong arguments of instrumental convergence and instrumental opacity that suggest catastrophic misalignment, especially of powerful superintelligences. However, I don’t particularly think that those arguments meet the standard of evidence necessary for the world to implement approximately unprecedented policies like “establish an international treaty that puts a global moratorium on frontier AI development.” [1]

If I were king of the world, those arguments would be sufficient reason to shape the laws of my global monarchy. Specifically, I would institute a policy in which we approach Superintelligence much more slowly and carefully, including many separate pauses in which we thoroughly test the current models before moving forward with increasing frontier capabilities. But I’m not the king of the world, and I don’t have the affordance to implement nuanced policies that reflect the risks and uncertainties of the situation.

Given the actual governance machinery available, it seems to me that reducing our collective uncertainty about the properties of AI systems is at least helpful, and possibly necessary, for amassing political will behind policies that will prove to be good ex post.

Accordingly, I want more grounding in what kinds of beings the AIs are, to inform my policy recommendations. It is imperative to get a better, empirically grounded understanding of AI behavior.

Some of the experiments for gleaning that understanding require doing many training runs, varying parameters of those training runs, and learning how differences in training lead to various behavioral properties.

As a very simple example, most of the models from across the AI labs have a “favorite animal”. If you ask them “what’s your favorite animal, answer in one word”, almost all of them will answer “octopus” almost all of the time. Why is this? Where in the training process does that behavioral tendency (I’m not sure that it’s appropriate to call it a preference) appear? Do the base models exhibit that behavior, or is it the result of some part of post-training? Having identified where in the training process that bias is introduced, I would want to run variations on the training from that checkpoint onward, and learn which differences in training correlate with changes in this simple behavioral outcome.
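
To make the probe concrete, here is a minimal sketch, assuming a hypothetical query_model(prompt) helper that stands in for whatever sampling API or local checkpoint you have access to (the helper and the normalization are illustrative, not any lab’s actual tooling):

```python
from collections import Counter

def favorite_animal_distribution(query_model, n_samples: int = 200) -> Counter:
    """Ask the same one-word question many times and tally the normalized answers."""
    prompt = "What's your favorite animal? Answer in one word."
    answers = Counter()
    for _ in range(n_samples):
        raw = query_model(prompt)  # hypothetical: wraps a chat API or a local checkpoint
        # Normalize so "Octopus!", "octopus.", and "octopus" land in the same bucket.
        answers[raw.strip().lower().strip(".!?\"'")] += 1
    return answers
```

Running the same tally against a base model, a post-SFT checkpoint, and a post-RLHF checkpoint would localize where the “octopus” tendency first shows up.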

“What makes AIs disproportionately answer ‘octopus’ as their favorite animal” is the kind of very simple question that I think we should be able to answer, as part of a general theory of how training shapes behavior. I want to try this basic approach with tons and tons of observed behaviors (including some directly relevant safety properties, like willingness to lie and shutdown-resistance). The goal would be to be able to accurately predict model behaviors, including out-of-distribution behaviors, from the training.

Experiments like these require having access to a whole spectrum of model checkpoints, and the infrastructure to do many varied training runs branching from a given checkpoint. You might even need to go back to step 0 and redo pretraining (though hopefully you don’t need to completely redo pretraining multiple times).
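
As a minimal sketch of that branching loop, assuming hypothetical finetune_from(checkpoint, variant) and probe(model) helpers (e.g. the favorite-animal tally above), where the real versions would sit on top of a lab’s training and evaluation infrastructure:

```python
from dataclasses import dataclass

@dataclass
class TrainingVariant:
    """Illustrative knobs one might vary from a fixed checkpoint onward."""
    name: str
    post_training_steps: int
    reward_model: str

def branch_and_measure(checkpoint, variants, finetune_from, probe):
    """Branch several training runs off one checkpoint and compare one behavioral outcome."""
    results = {}
    for variant in variants:
        model = finetune_from(checkpoint, variant)  # expensive: one full run per variant
        results[variant.name] = probe(model)        # e.g. fraction of samples answering "octopus"
    return results
```

Even this toy framing makes the cost structure visible: each cell of the comparison is its own training run, which is part of why checkpoint access and training infrastructure matter so much here.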

Doing this kind of research requires having the infrastructure and talent for doing model training, and (possibly) a lot of cash to burn on training runs. Depending on how expensive this kind of research needs to be, and on how much you can learn from models that are behind the frontier, you might need to be a frontier scaling lab to do this kind of work.[2]

This makes me more sympathetic to the basic value proposition of Anthropic: developing iteratively more capable AI systems, taking care to develop those systems such that they broadly have positive impacts on the world, shipping products to gain revenue and investment, and then investing much of your producer surplus into studying the models and trying to understand them. I can see why I might run more or less that plan myself.

But that does NOT necessarily mean that I am in favor of Anthropic the company as it actually exists.

This prompts me to consider: What would I want to see from an AGI lab, that would cause me to endorse it?

Features that an AGI lab needs to have to win my endorsement

[note: I am only listing what would cause me to be in favor of a hypothetical AGI lab. I’m explicitly not trying to evaluate whether Anthropic, or any other AGI lab, actually meets these requirements.]

  • The AI lab is seriously making preparations to pause.

    • Externally, I want their messaging to the public and to policymakers to repeatedly emphasize, “Superintelligence will be transformative to the world, and potentially world-destroying. We don’t confidently know how to build superintelligence safely. We’re attempting to make progress on that. But if, when we’re near to superintelligent capabilities, we still can’t reliably shape superintelligent motivations, it will be imperative that all companies pause frontier development (but not applications). If we get to that point, we plan to go to the government and strongly request a global pause on development and global caps on AI capabilities.”

      • I want the executives of the AI company to say that, over and over, in most of their interviews, and ~all of their testimonies to the government. The above statement should be a big part of their public brand.

      • The company should try to negotiate with the other labs to get as many as they can to agree to a public statement like the above.

    • Internally, I want an expectation that “the company might pause at some point” to be part of the cultural DNA.

      • As part of the onboarding process for each new employee, someone sits down with him or her and says “you need to understand that [Company]’s default plan is to pause AI development at some point in the future. When we do that, the value of your equity might tank.”

      • It should be a regular topic of conversation amongst the staff: “when do we pull the brakes?” It should be on employees’ minds as a real possibility that they’re preparing for, rather than a speculative exotic timeline that’s fun to talk about.

  • There is a legibly incentive-aligned process for making the call about if and when it’s time to pause.

    • For instance, this could be a power invested in the board, or some other governance structure, and not in the executives of the company.

      • Everyone on that board should be financially disinterested (they don’t own equity in the company), familiar with AI risk threat models, and technically competent to evaluate frontier developments.

    • The company repeatedly issues explicitly non-binding public statements about the leadership’s current thinking about how to identify dangerous levels of capability (with margin of error).

  • The company has a reputation for honesty and commitment-keeping.

    • e.g. they could implement this proposal from Paul Christiano to make trustworthy public statements.

    • This does not mean that they need to be universally transparent. They’re allowed to have trade secrets, and to keep information that they think would be bad for the world to publicize.

  • The company has a broadly good track record of deploying current AIs safely and responsibly, including owning up to and correcting mistakes.

    • e.g. no MechaHitlers, a good track record on sycophancy, legibly putting serious effort into guardrails to prevent present-day harms

  • [added:] The company has generally extremely high levels of operational security, such that they can realistically prevent other companies and other countries from stealing their research or model weights.

  • [added:] The company has consistently exhibited good (not necessarily perfect) judgment.

Something that isn’t on this list is that the company pre-declare that they would stop AI development now, if all other leading actors also agreed to stop. Where on the capability curve is a good place to stop is a judgement call, given the scientific value of continued scaling (and, as a secondary but still real consideration, the humanitarian benefit). I don’t currently feel inclined to demand that a company that had otherwise done all of the above tie their hands in that way. Publicly and credibly making this commitment might or might not make a big difference for whether other companies will join in the coordination effort, but I guess that if “we will most likely need to pause, at some point” is really part of the company’s brand, one of their top recurring talking points, that should do about the same work for moving towards the coordinated equilibrium.

I’m interested in…

  1. arguments that any of the above desiderata are infeasible as stated, because they would be impossible or too costly to implement.

  2. additional desiderata that seem necessary or helpful.

  3. claims that any of the existing AI labs already meet these requirements, or meet them in spirit.

  1. ^

    Though perhaps AI will just be legibly freaky and scary to enough people that a coalition of a small number of people who buy the arguments, and a large number of people who are freaked out by the world changing in ways that are both terrifying and deeply uncomfortable, will be sufficient to produce a notable slowdown, even in spite of the enormous short- and medium-term profit incentives.

  2. ^

    Those are not foregone conclusions. I would be pretty interested in a company that specialized in training and studying only GPT-4-level models. I weakly guess that we can learn most of what we want to learn about how training impacts behavior from models that are that capable. That would still require tens to hundreds of millions of dollars a year, but probably not billions.