Questions for further investigation of AI diffusion

This post is one part of the sequence Understanding the diffusion of large language models. As context for this post, I strongly recommend reading at least the 5-minute summary of the sequence.

This post lists questions about AI diffusion that, at the time of writing, I think are worth further research. Some questions are direct follow-ups to my research, while others just seem like important questions related to diffusion. I raised some of these questions throughout this sequence, but this post collects them all so that interested researchers can easily refer back to them.

Feel free to reach out to me about these research ideas. I may be able to offer advice, suggest links, and suggest people to talk to. It’s possible that I or Rethink Priorities could help connect you with funding to work on these ideas if you’re interested and a good fit.

Further research to evaluate my proposals to limit access to datasets and algorithmic insights

In a previous post I presented proposals to limit access to datasets and proposals to limit access to algorithmic insights. I believe those proposals are probably worth pursuing, but that belief has low enough resilience that the next step should be further consideration of whether or not to pursue them.

The follow-up questions for the dataset proposals that I think are highest priority are:

  1. How feasible is it to actually convince various producers of large ML datasets to take the actions I recommended?

    1. Would they keep a dataset private, when they would have otherwise openly published it? Would some further incentive (e.g. financial compensation) be necessary?

    2. Would a data-labeling service such as Surge AI be willing to add vetting procedures for whom it provides services to?

    3. I recommend at least one full-time equivalent week investigating this. The research could involve first evaluating the benefit of my initial proposal, and then possibly interviewing people at data hosts/curators such as the Common Crawl Foundation or Surge AI to get their opinion on the proposals.

  2. Is structured access to datasets (in the manner I described in the previous post) technically feasible?

    1. I recommend at least two full-time equivalent days investigating this. Someone with a good understanding of modern machine learning as well as user-facing APIs would be an ideal fit. (A rough sketch of what such a service might look like follows this list.)

  3. Is having structured access to datasets a worthwhile compromise for potential users, compared to open access?

    1. I recommend at least two full-time equivalent days investigating this. The research could involve asking some ML practitioners what they would think of the training dataset service that I described in the previous post.
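
To make question 2 more concrete, below is a minimal, hypothetical sketch of what a structured-access dataset service could look like. The class, the method names, the vetting-by-API-key flow, and the per-user quota are illustrative assumptions of mine, not the design described in the previous post.

```python
# Hypothetical sketch of a "structured access" dataset service: vetted users
# can stream training batches and run aggregate queries, but cannot bulk-
# download the raw corpus. Names and policies here are illustrative only.

import hashlib
import random


class StructuredDatasetService:
    def __init__(self, documents, approved_api_keys, max_tokens_per_key):
        self.documents = documents                    # raw corpus, never exposed in bulk
        self.approved_api_keys = set(approved_api_keys)
        self.max_tokens_per_key = max_tokens_per_key  # per-user quota
        self.tokens_served = {}                       # per-key usage accounting

    def _check_access(self, api_key, n_tokens):
        if api_key not in self.approved_api_keys:
            raise PermissionError("API key has not passed vetting.")
        used = self.tokens_served.get(api_key, 0)
        if used + n_tokens > self.max_tokens_per_key:
            raise PermissionError("Per-user token quota exceeded.")
        self.tokens_served[api_key] = used + n_tokens

    def get_training_batch(self, api_key, batch_size=8):
        """Return a random batch of documents for training, with usage logged."""
        batch = random.sample(self.documents, min(batch_size, len(self.documents)))
        n_tokens = sum(len(doc.split()) for doc in batch)
        self._check_access(api_key, n_tokens)
        return batch

    def aggregate_stats(self, api_key):
        """Answer aggregate questions (size, checksum) without exposing raw text."""
        self._check_access(api_key, n_tokens=0)
        corpus_hash = hashlib.sha256("".join(self.documents).encode()).hexdigest()
        return {"num_documents": len(self.documents), "sha256": corpus_hash}
```

Even this toy version makes the trade-off in question 3 concrete: vetting, quotas, and the absence of a bulk download are exactly what a potential user gives up relative to an openly published dataset.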

The follow-up questions for the algorithmic insight proposals that I think are highest priority are:

  1. How much effort are top AI developers already putting into assessing potential information hazards of their publications? How reliable are their assessments? How well do the assessments translate into action? Does this vary among teams within each organization?

    1. This would help clarify how useful it would be to convince AI developers to put more effort into assessing and acting on information hazards.

    2. I recommend at least three full-time equivalent days researching this question. I think the best way to research this is to interview people who currently work or previously worked at the relevant companies, especially people involved in policy or governance at the companies.

  2. How much effort are different top AI developers currently putting into information security and operations security? Could it be improved?

    1. I imagine that a lot of information about this would be confidential. But although I have no background in information security, my impression is that publicizing the investment in or nature of one’s security is usually not harmful.[1]

    2. I recommend at least three full-time equivalent days researching this question.

What is the relevance and importance of other diffusion mechanisms?

As I noted in the section clarifying the scope of this sequence, I have focused on the diffusion mechanisms of replication and incremental research, because they are the most relevant mechanisms in my case studies. But it is worth researching the role of other mechanisms—both historically and in the future. These mechanisms include leak, theft, espionage, and extortion—see the definitions section for explanations of each of these.

Here are questions to ask about each of these mechanisms:

  1. What are the incentives to use this diffusion mechanism?

    1. How strong are those incentives compared to other mechanisms?

    2. How might those incentives change in the future?

  2. What are relevant historical examples of this mechanism?

    1. What do those examples tell us about diffusion?

    2. Examples could be in domains outside of AI, such as nuclear weapons. Nuclear Espionage and AI Governance is one example (GAA, 2021).

  3. What interventions would best mitigate harm from this mechanism in particular?

I’d guess about one full-time equivalent month of further research on risks from each of these other mechanisms would be worthwhile for someone in the AGI governance community to do. People with an information security background seem like an especially good fit given the nature of the mechanisms. A history background may also be useful given that there is historical precedent of these mechanisms in domains outside of AI.

Case studies in other domains of AI

The case studies presented here were limited to the domain of language model pretraining. Future work could study cases of diffusion in other domains of AI. This would be useful both to expand the overall amount of empirical data on diffusion, and to make comparisons to my existing case studies. For instance, is there any evidence from other cases that counters the conclusions I drew from my case studies so far?

Some examples of cases that could be studied are:

  1. AlphaGo Zero (game playing domain)

    1. AlphaGo Zero became better at the game of Go than the version of AlphaGo that defeated a human world champion.[2] AlphaGo Zero accomplished this solely through self-play against versions of itself, rather than learning from human demonstrations.

    2. One estimate puts the cost of replicating the experiments in the AlphaGo Zero paper at $35 million, which would make AlphaGo Zero possibly the most expensive machine learning model ever produced as of September 2022 (H., 2020).[3]

    3. I reviewed a brief investigation of this case from an anonymous source. There appear to have been early attempts to replicate AlphaGo Zero using the leela-zero open-source implementation. However, to my knowledge, ELF OpenGo from Facebook (now Meta) AI Research was the first open-source replication attempt that roughly achieved parity with the original AlphaGo Zero[4] about two years later (Tian et al., 2019). This two-year gap is similar to the gap for GPT-3. In both cases, Facebook/​Meta seems to have been first to produce the most faithful open-source replication (in the GPT-3 case, it was OPT).

  2. DALL-E (text-to-image domain)

    1. The blog post linked above describes DALL-E as “a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs.”

    2. There are several notable DALL-E replications or DALL-E-like models, including DALL·E Mini (open-source), Stable Diffusion (open-source), and Midjourney (closed-source, publicly accessible service). There is also the closed-source Imagen by Google Research—the article states “The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo.”

I think it would be worth someone spending one full-time equivalent week on each of the above case studies. Additionally, I think it would be worth spending one full-time equivalent month to get data on “time to open-source” for many prominent ML breakthroughs or models, and analyze that data. For that research, people with some background in ML would be a good fit. I would also recommend consulting experts in the ML field about what the biggest breakthroughs are, and whether they know of specific open-source versions of implementations and models.

However, I would not recommend undertaking another investigation as broad as the one I have done for GPT-3-like models. I think it would be more productive to focus on questions that have a narrower and more easily isolated scope, one at a time, like many of the questions I have listed in this post.

How will the publication strategy of emerging AI developers shift as they grow?

It’s important to consider the publication practices of emerging AI developers, who could plausibly catch up to current leaders (at least in certain domains). Three examples are Adept, Cohere, and Stability. Based on their first announced AI system, Adept seems to be adopting a closed publication strategy for now. Cohere also appears to be closed about their models—I failed to find any information on their website even about the parameter count of their models in production. This strategy seems to be in their interest, for the sake of protecting commercial IP. And yet Stability seems to have gained a lot from the public release of the “Stable Diffusion” text-to-image model, as it attracted investment and future customers.

How will these emerging developers respond to their own increasing capabilities and revenue, in terms of publication norms? Will Stability be more protective of IP in the future, and/​or become more sensitive to misuse concerns? I’m uncertain how to approach these questions productively, but I think it is worth someone thinking about how to approach them and then gathering information on that basis for at least one full-time equivalent week. One idea is simply to ask these developers directly about their strategy and attitudes. Another idea is to look at historical precedents.

How much will deployment costs limit the diffusion of capabilities?

I have presented my case that the deployment costs of GPT-3-like models are most likely one order of magnitude lower than development costs, even for the largest viable deployment scenarios (see this previous post). However, that estimate carries a lot of uncertainty, and I haven’t answered the same question for other domains of AI, or for transformative AI in the future. So the following questions remain.

What resources will be required to deploy an AI system that leads to transformative impact? Who will be able to access those resources?

  1. Note that the actors developing and deploying a given AI system are not necessarily the same.

  2. Key uncertainty: what applications constitute transformative impact? This is related to Holden Karnofsky’s proposed research question: “What are the most likely early super-significant applications of AI?”

  3. I think it is worth spending one full-time equivalent month on this question.

How might the cost of model inference change relative to the cost of training?

  1. As an example of change, Hoffmann scaling laws (Hoffmann et al., 2022) already reduced the cost of inference relative to training for pretrained language models, compared to people’s previous understanding. Hoffmann scaling laws imply that, for the same training compute budget, you can train a smaller model on more data, and a smaller model requires less computation per inference.[5] (A back-of-the-envelope illustration follows this list.)

  2. I think it is worth spending at least two full-time equivalent weeks on this question.
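
As a back-of-the-envelope illustration of the point in item 1, the sketch below uses the common rough approximations that pretraining takes about 6·N·D FLOPs and generating one token takes about 2·N FLOPs (N = parameters, D = training tokens), together with the published sizes of Gopher and Chinchilla. The approximations are standard but rough, and the comparison is mine rather than one made elsewhere in this sequence.

```python
# Rough comparison of training vs. per-token inference compute, using the common
# approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per token.
# Gopher (280B params, ~300B tokens) and Chinchilla (70B params, ~1.4T tokens)
# were trained with broadly similar compute budgets (Hoffmann et al., 2022).

models = {
    "Gopher":     {"params": 280e9, "train_tokens": 300e9},
    "Chinchilla": {"params": 70e9,  "train_tokens": 1.4e12},
}

for name, m in models.items():
    train_flops = 6 * m["params"] * m["train_tokens"]
    infer_flops_per_token = 2 * m["params"]
    print(f"{name:10s} train ~ {train_flops:.1e} FLOPs, "
          f"inference ~ {infer_flops_per_token:.1e} FLOPs/token")

# Similar training budgets, but Chinchilla needs ~4x less compute per generated
# token, so inference cost falls relative to training cost for models that
# follow the Hoffmann et al. prescription.
```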

How much will different inputs to AI development contribute to AI progress?

I have presented my best guess about the relative importance of different inputs to AI development (see this section of a previous post). But I still have a lot of uncertainty about this. Some of the key uncertainties are below.

Will data become more difficult to acquire? If so, by how much?

  1. Villalobos et al. (2022) is a good launching point for further investigation, as it estimates when we will exhaust available human-generated text and image data for training ML models.

  2. To what extent will synthetic data substitute for human-generated data if the latter becomes more difficult to acquire?

  3. What is the likelihood that other new data-generating sources can keep up with the increasing data requirements of state-of-the-art models?

    1. There is some relevant discussion on the topic in Hobson (2022), including this comment.

  4. How sensitive is model performance to the quality of data? What notions of data quality (e.g., factual accuracy, specialization, diversity) are most important?

  5. What efforts are being made, and will be made, to provide services that make it easier to acquire high-quality data? How widely available will these services be? For example, see Surge AI.

  6. I recommend at least one full-time equivalent week of research on each of the above questions. People with a good understanding of modern ML, and possibly economics, as well as skill with data science, seem like a good fit.

What is the likelihood of algorithmic efficiency improving at a faster rate than compute investment at some point in the future?

  1. If this occurs, it would mean a net decrease in the cost of training models, which would decrease the barrier to diffusion.

  2. The acceleration of diffusion depends on the timing of algorithmic improvements. For example, if compute investment increases at the current rate for 10 years such that training a state-of-the-art model costs on the order of $100B,[6] and algorithmic improvements then reduce this cost to $10B, that would still inhibit diffusion to the vast majority of actors (assuming the trained models remained unpublished); a toy calculation illustrating this follows the list.

  3. Recent work that is relevant includes Sevilla et al. (2022) and Erdil and Besiroglu (2022).

  4. I recommend at least one full-time equivalent month of research on this question. People with a good understanding of modern ML, and ideally economics, as well as skill with data science, seem like a good fit.
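
To make the timing point in item 2 concrete, here is a toy calculation. The dollar figures and the algorithmic-progress halving time are illustrative assumptions of mine, not estimates taken from Sevilla et al. (2022) or Erdil and Besiroglu (2022).

```python
# Toy calculation for item 2: how long until the cost of replicating a fixed
# frontier capability falls within reach of smaller actors, given only
# algorithmic progress. All numbers below are illustrative assumptions.

import math

frontier_cost_usd = 100e9    # assumed frontier training cost after ~10 more years of scaling
follower_budget_usd = 10e6   # assumed budget of a well-funded smaller actor
algo_halving_years = 2.0     # assumed: algorithmic progress halves required compute every 2 years

halvings_needed = math.log2(frontier_cost_usd / follower_budget_usd)
years_needed = halvings_needed * algo_halving_years
print(f"~{halvings_needed:.1f} halvings, i.e. ~{years_needed:.0f} years of algorithmic progress")
# -> ~13.3 halvings, i.e. ~27 years before the $100B-scale capability costs $10M,
#    assuming the trained models themselves stay unpublished in the meantime.
```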

To what extent will AI hardware become more specialized to overcome barriers to increasing hardware utilization?

  1. New, more specialized hardware may be easier to govern if regulation is put in place before the hardware proliferates.

  2. I recommend at least one full-time equivalent month of research on this question. People with a background in hardware, and ideally a high-level understanding of engineering large-scale ML systems, seem like a good fit.

How much more expensive will engineering talent become as the challenges of scaling up machine learning models increase?

  1. I recommend at least one full-time equivalent week of research on this question. People with a good understanding of modern ML, and ideally economics, seem like a good fit.

Other questions: impacts of diffusion, and the behavior of actors

Here is a list of other questions. The fact that I don’t elaborate as much on these questions here does not mean they are less important. I either spent less time thinking about them, or I don’t think they need as much elaboration.

  1. What are potential methods to limit and detect model reproduction or theft, and how feasible are they? I discussed this in a section of a previous post.

    1. I recommend that someone experienced with security and ML spend one full-time equivalent week on further research into canary mechanisms and other potential mechanisms for reducing the risk of leaks and unilateral publication of models (a minimal sketch of one such mechanism follows at the end of this list). However, I am probably ignorant of most existing research relevant to this, so I would retract that recommendation if someone has already explored these ideas in depth.

  2. Building upon the section about the relevance of diffusion to AI risk:

    1. What are the biggest potential benefits and harms of diffusion? How likely are they to occur?

    2. How can the benefits of diffusion be maximized?

    3. How can the harms of diffusion be minimized?

  3. Deeper analysis of the “drafting” and “leapfrogging” phenomena that were defined in a Background post.

    1. Do actors that are not in the lead actually save a lot of cost compared to the lead actor, via diffusion? If so, where do the biggest savings come from?

    2. What are examples of one actor overtaking another actor in AI capabilities in a particular domain? What role did diffusion play in that?

  4. I argued that future state-of-the-art pretrained language models will be more difficult to replicate than GPT-3, for the average actor (see this section of a previous post). Does this mean that diffusion will be slower across all actors, or just that there will be a stronger barrier between the leading actors and the other actors? Another way of phrasing this: will the top (say) three language model developers continue to keep up with each other within one year, while other actors fall behind?

    1. The “stronger barrier” scenario seems more probable to me. But it depends on how concentrated the inputs to AI development will be among developers. The more evenly resources are distributed across developers, the more diffusion will simply be slower across the board. The more there is a cluster of highly resourced developers, the more diffusion will be gated between those developers and everyone else.

  5. What is the willingness to pay for AI systems, for different actors?

    1. How much revenue is generated by different AI systems in production?

    2. How might the predictability of return on investment change? Predictability of return for language models is currently high due to scaling laws, as argued in Ganguli et al. (2022) (Section 3.1 “Economic” heading, p.10).

    3. What is the maximum investment that different actors are willing to make?

    4. What will be the most productive applications of AI over time?

  6. How capable are independent research groups and academic labs of forming coalitions that can compete with industry leaders?

  7. Will we see compute sponsorship grow? What is the plausible upper limit to sponsorship, and which actors could provide sponsorship at that level?

    1. I discussed compute sponsorship in this section of a previous post.

  8. How easily can lower-resourced actors such as AI startups acquire more resources by having a profitable AI business?

    1. Will these AI startups keep proliferating indefinitely, or merge with larger developers?

  9. Suppose that government AI labs are achieving state-of-the-art advances in some areas, or start to do so in the future. What will their default publication norms be? Would these labs publish their results at all? Would they specify algorithmic details? What would be the key factors affecting those decisions?

  10. How will the meme of “democratizing AI” affect diffusion in the future?

    1. This meme can partly be seen as a response to the compute divide. See e.g., Ahmed and Wahed (2020), Ganguli et al. (2022) (section 3.2, pp. 9-10), and Etchemendy and Li (2020) (which says “There is a wide gulf between the few companies that can afford [the compute, data and expertise to train and deploy the massive machine learning models powering the most advanced research] and everyone else.”)

    2. As an example of this meme, see LAION, who have open-sourced large image-text datasets. They say on the linked page that “We believe that machine learning research and its applications have the potential to have huge positive impacts on our world and therefore should be democratized.”
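
Returning to question 1 in this list (canary mechanisms): below is a minimal sketch of what a canary-based check for unauthorized model reproduction could look like, in the spirit of “exposure” tests for unintended memorization. The planting step and the model_logprob function are hypothetical placeholders rather than a mechanism proposed elsewhere in this sequence; a real test would query the suspect model’s API.

```python
# Minimal sketch of a canary-based check: plant a random secret string in the
# protected training data, then test whether a suspect model assigns it a
# suspiciously high likelihood compared to similar random strings.

import random
import string


def make_canary(length=32, seed=1234):
    """Generate a random secret string to plant in the protected training data."""
    rng = random.Random(seed)
    return "canary: " + "".join(rng.choices(string.ascii_lowercase + string.digits, k=length))


def exposure_score(model_logprob, canary, n_references=200):
    """Rank the planted canary's log-probability against comparable random strings."""
    references = [make_canary(seed=i) for i in range(n_references)]
    canary_lp = model_logprob(canary)
    rank = sum(1 for ref in references if model_logprob(ref) >= canary_lp)
    return 1.0 - rank / n_references  # close to 1.0 => canary is suspiciously likely


if __name__ == "__main__":
    planted = make_canary(seed=1234)
    # Dummy stand-in for a suspect model's log-probability API.
    dummy_logprob = lambda text: -2.0 * len(text) + (50.0 if text == planted else 0.0)
    print(f"exposure ~ {exposure_score(dummy_logprob, planted):.2f}")  # prints ~1.00
```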

Acknowledgements


This research is a project of Rethink Priorities. It was written by Ben Cottier. Thanks to Alexis Carlier, Amanda El-Dakhakhni, Ashwin Acharya, Ben Snodin, Bill Anderson-Samways, Erich Grunewald, Jack Clark, Jaime Sevilla, Jenny Xiao, Lennart Heim, Lewis Ho, Lucy Lim, Luke Muehlhauser, Markus Anderljung, Max Räuker, Micah Musser, Michael Aird, Miles Brundage, Oliver Guest, Onni Arne, Patrick Levermore, Peter Wildeford, Remco Zwetsloot, Renan Araújo, Shaun Ee, Tamay Besiroglu, and Toby Shevlane for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.

  1. ^

    For example, stating that your password is more than 20 characters long, or that you have 2-factor authentication in place on an account, just communicates that you have relatively strong security rather than exposing a vulnerability.

  2. ^

    See the Abstract of Silver et al. (2017) presenting AlphaGo Zero.

  3. ^

    As one comparison, I estimated PaLM’s actual final training run cost (for Google) at about $6 million. One issue with the $35 million estimate is that AlphaGo Zero was trained using TPUs. While TPUs made training much faster (and perhaps much more feasible) than GPUs, they were also much more expensive than GPUs at the time—see the data in the footnotes of H. (2020) about the estimate, which lists $6.50/hour for TPU vs. $0.31/hour for GPU.

  4. ^

    This is based on Tian et al. (2019) stating “ELF OpenGo is the first open-source Go AI to convincingly demonstrate superhuman performance with a perfect (20:0) record against global top professionals.”

  5. ^

    See the Abstract of Hoffmann et al. (2022): “We find that current large language models are significantly undertrained…This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage.”

  6. ^

    See this draft report by Lennart Heim (requires access). This forecast is lower than the one in CSET’s Lohn and Musser (2022, p. 13) because it uses different (more reliable) trends of compute doubling times and GPU price performance.

Crossposted from the EA Forum.