Slowing AI: Reading list

This is a list of past and present research that could inform slowing AI. It is roughly sorted in descending order of priority, both between and within subsections. I’ve read about half of these, and I don’t necessarily endorse them. Please have a low bar to suggest additions, replacements, rearrangements, etc.

Slowing AI

There is little research focused on whether or how to slow AI progress.[1]

Particular (classes of) interventions & affordances

Making AI risk legible to AI labs and the ML research community

Transparency & coordination

Relates to “Racing & coordination.” Roughly, that subsection is about world-modeling and threat-modeling, while this subsection is about solutions and interventions.

See generally Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (Brundage et al. 2020).

Compute governance

Standards

Regulation

Publication practices

Differentially advancing safer paths

Actors’ levers

There are some good lists and analyses, but not focused on slowing AI.

Racing & coordination

This subsection covers racing for powerful AI, how the actors that develop AI behave, and how actors could coordinate to decrease risk. Understanding how labs act and how racing for powerful AI unfolds seem to be wide-open problems, as does giving an account of the culture of progress and publishing in AI labs and the ML research community.

I’ve read less than half of these; possibly many of them are off-point or bad.

Technological restraint

Other

People[2]

There are no experts on slowing AI, but there are people it might be helpful to talk to, including (very non-exhaustively; I have not talked to all of these people):

  • Zach Stein-Perlman

  • Lukas Gloor

  • Jeffrey Ladish

  • Matthijs Maas

    • Specifically on technological restraint

  • Akash Wasil

    • Especially on publication practices or educating the ML community about AI risk

  • Michael Aird

  • Vael Gates

    • Specifically on educating the ML community about AI risk; many other people might be useful to talk to about this, including Shakeel Hashim, Alex Lintz, and Kelsey Piper

  • Katja Grace

  • Lennart Heim

    • Specifically on hardware policy

  • Onni Aarne

    • Specifically on hardware

  • Probably some other authors of research listed above

  1. ^

    There is also little research on particular relevant considerations, like how multipolarity among labs relates to x-risk and to slowing AI, or how AI misuse x-risk and non-AI x-risk relate to slowing AI.

  2. ^

    I expect there is a small selection bias where the people who think and write about slowing AI are disposed to be relatively optimistic about it.