TLDR: We’ve put together a website to track recent releases of superscale models, and comment on the immediate and near-term safety risks they may pose. The website is little more than a view of an Airtable spreadsheet at the moment, but we’d greatly appreciate any feedback you might have on the content. Check it out at aitracker.org.
In the past few months, several successful replications of GPT-3 have been publicly announced. We’ve also seen the first serious attempts at scaling significantly beyond it, along with indications that large investments are being made in commercial infrastructure that’s intended to simplify training the next generation of such models.
Today’s race to scale is qualitatively different from previous AI eras in a couple of major ways. First, it’s driven by an unprecedentedly tight feedback loop between incremental investment in AI infrastructure and expected profitability.[1] Second, it’s inflected by nationalism: there have been public statements to the effect that a given model will help the developer’s home nation maintain its “AI sovereignty” — a concept that would have been alien just a few short years ago.
The replication and proliferation of these models likely poses major risks. These risks are uniquely hard to forecast, not only because many capabilities of current models are novel and might be used to do damage in imaginative ways, but also because the capabilities of future models can’t be reliably predicted.[2]
The first step to assessing and addressing these risks is to get visibility into the trends they arise from. To that end, we’ve created AI Tracker: a website that catalogs recent releases of superscale AI models, along with other models that may have implications for public safety around the world.
Each model in AI Tracker is labeled with several key features: its input and output modalities; its parameter count and total compute cost (where available); its training dataset; its known current and extrapolated future capabilities; and a brief description and industry context, among others. The idea behind the tracker is to highlight the plausible public safety risks these models pose, and to situate them as instances of a broader scaling trend.
(There’s also a FAQ at the bottom of the page, if you’d like to know a bit more about our process or motivations.)
Note that we don’t directly discuss x-risk in these entries, though we may do so in the future. Right now our focus is on 1) the immediate risks posed by applications of these models, whether from accidental or malicious use; and 2) the near-term risks that would be posed by a more capable version of the current model.[3] These are both necessarily speculative, especially 2).
Note also that we expect we’ll be adding entries to AI Tracker retroactively — sometimes the significance of a model is only knowable in hindsight.
Some of the models listed in AI Tracker are smaller in scale than GPT-3, despite having been developed after it. In these cases, we’ve generally chosen to include the model either because of its modality (e.g., CLIP, which classifies images) or because we believe it has particular implications for capability proliferation (e.g., GPT-J, whose weights have been open-sourced).
AI Tracker is still very much in its early stages. We’ll be adding new models, capabilities and trends as they surface. We also expect to improve the interface so you’ll be able to view the data in different ways (plots, timelines, etc.).
Tell us how to improve!
We’d love to get your thoughts about the framework we’re using for this, and we’d also greatly appreciate any feedback you might have at the object level. Which of our risk assessments look wrong? Which categories didn’t we include that you’d like to see? Which significant models did we miss? Are any of our claims incorrect? Do we seem to speak too confidently about something that’s actually more uncertain, or vice versa? In terms of the interface (which is very basic at the moment): What’s annoying about it? What would you like to be able to do with it, that you currently can’t?
For public discussion, please drop a comment below on LW or AF. I (Edouard) will be monitoring the comment section periodically over the next few days and will answer as best I can.
If you’d like to leave feedback on an aspect of the tracker itself or request an update (e.g., to submit a new model for consideration or to point out an error), you can do so directly on the page. We plan to credit folks, with their permission, for any suggestions of theirs that we implement.
Finally, if you’d like to reach out to me (Edouard) directly, you can always do so by email: [my_first_name]@mercurius.ai.
[1] This feedback loop isn’t perfectly tight at the margin, since currently there’s still a meaningful barrier to entry to train superscale models, both in terms of engineering resources and of physical hardware. But even that barrier can be cleared by many organizations today, and it will likely disappear entirely once the necessary training infrastructure gets abstracted into a pay-per-use cloud offering.
[2] As far as I know, at least. If you know of anyone who’s been able to correctly predict the capabilities of a 10x scale model from the capabilities of the corresponding 1x scale model, please introduce us!
[3] Of course, it’s not really practical to define “more capable version of the current model” in any precise way that all observers will agree on. But you can think of this approximately as, “take the current model’s architecture, scale it by 2-10x, and train it to ~completion.” It probably isn’t worth the effort to sharpen this definition much further, since most of the uncertainty about risk comes from our inability to predict the qualitative capabilities of models at these scales anyway.