The case for aligning narrowly superhuman models
I wrote this post to get people’s takes on a type of work that seems exciting to me personally; I’m not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). We are not seeking grant applications on this topic right now.
Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts.
A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman” at some particular task (which I’ll call narrowly superhuman models). I don’t just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it’s intuitively non-obvious how to make that happen).
Here’s an example of what I’m thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I’m sick that’s better than advice I’d get from asking humans on Reddit or Facebook, because it’s digested a vast store of knowledge about illness symptoms and remedies. Moreover, certain ways of prompting it provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice.
But GPT-3 doesn’t seem to “want” to give me the best possible health advice—instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it’s a random internet user. So if I want to use GPT-3 to get advice about my health, there is a gap between what it’s capable of (which could even exceed humans) and what I can get it to actually provide me. I’m interested in the challenge of:
How can we get GPT-3 to give “the best health advice it can give” when humans in some sense “understand less” about what to do when you’re sick than GPT-3 does? And in that regime, how can we even tell whether it’s actually “doing the best it can”?
I think there are other similar challenges we could define for existing models, especially large language models.
I’m excited about tackling this particular type of near-term challenge because it feels like a microcosm of the long-term AI alignment problem in a real, non-superficial sense. In the end, we probably want to find ways to meaningfully supervise (or justifiably trust) models that are more capable than ~all humans in ~all domains. So it seems like a promising form of practice to figure out how to get particular humans to oversee models that are more capable than them in specific ways, if this is done with an eye to developing scalable and domain-general techniques.
I’ll call this type of project aligning narrowly superhuman models. In the rest of this post, I:
Give a more detailed description of what aligning narrowly superhuman models could look like, what does and doesn’t “count”, and what future projects I think could be done in this space (more).
Explain why I think aligning narrowly superhuman models could meaningfully reduce long-term existential risk from misaligned AI (more).
Lay out the potential advantages that I think this work has over other types of AI alignment research: (a) conceptual thinking, (b) demos in small-scale artificial settings, and (c) mainstream ML safety such as interpretability and robustness (more).
Answer some objections and questions about this research direction, e.g. concerns that it’s not very neglected, feels suspiciously similar to commercialization, might cause harm by exacerbating AI race dynamics, or is dominated by another type of work (more).
Briefly discuss where I think some AI alignment researchers currently stand on this work (more).
Summarize takeaways and possible next steps for readers (more).
There aren’t a large number of roles where someone could do this right now, but if aligning narrowly superhuman models is a good idea, and we can build a community consensus around it being a good idea, I think we have a good shot at creating a number of roles in this space over the coming years (allowing a larger number of people to productively contribute to AI x-risk reduction than would be possible otherwise). To discover whether that’s possible, I’d appreciate it if people could react with pushback and/or endorsement, depending on where you’re at.
What aligning narrowly superhuman models could look like
I’m a lot less confident about a particular agenda or set of project ideas than I am about the high-level intuition that it seems like we could somehow exploit the fact that today’s models are superhuman in some domains to create (and then analyze and solve) scaled-down versions of the “aligning superintelligent models” problem. I think even the basic framing of the problem has a lot of room to evolve and improve; I’m trying to point people toward something that seems interestingly analogous to the long-run alignment problem rather than nail down a crisp problem statement. With that said, in this section I’ll lay out one vision of what work in this area could look like to provide something concrete to react to.
First of all, it’s important to note that not all narrowly superhuman models are going to be equally interesting as alignment case studies. AlphaGoZero (AGZ) is narrowly superhuman in an extremely strong sense: it not only makes Go moves better than the moves made by top human players, but also probably makes moves that top players couldn’t even reliably recognize as good. But there isn’t really an outer alignment problem for Go: a precise, algorithmically-generated training signal (the win/loss signal) is capable of eliciting the “full Go-playing potential” of AGZ given enough training (although at a certain scale inner alignment issues may crop up). I think we should be focusing on cases where both inner and outer alignment are live issues.
The case studies that seem interesting involve models with the potential to be superhuman at a task (like "giving health advice") for which we have no simple algorithmically-generated or hard-coded training signal that's adequate (which I'll call "fuzzy tasks"). The natural thing to do is to try to train the model on a fuzzy task using human demonstrations or human feedback—but if (like AGZ) the model actually has the capacity to improve on what humans can demonstrate or even reliably recognize, it's not immediately obvious how to elicit its "full potential."
Here’s an attempt at one potential “project-generation formula”, where I try to spell out connections to what I see as the main traditional sub-problems within academic AI alignment research:
Choose a helpful “fuzzy” task (e.g. summarization, question-answering, advice-giving, story-writing) for which we have suggestive evidence that makes us suspect a state-of-the-art model has the capacity to significantly outperform some reference set of humans (e.g. Mechanical Turk workers) given the right training signal. Then,
Reward learning: Find a training procedure that allows those reference humans to train the model to do the fuzzy task better than they could do it (and ideally, better than they could even recognize or verify unaided). This procedure shouldn’t rely on the researchers’ own understanding of the particular domain in a way that wouldn’t generalize across domains.
Scalability and competitiveness: Argue or empirically demonstrate that the human oversight work wouldn’t have to scale up much if the model were 10x or 100x bigger, or each instance of the task took 10x or 100x longer to demonstrate or evaluate.
Interpretability and robustness: Once you’ve done this, try to understand its behavior and stamp out whatever pathologies (e.g. lying, going off the rails) may have cropped up.
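To make the "reward learning" step above a bit more concrete, here is a minimal toy sketch in Python. All details are invented for illustration: outputs are represented by two hypothetical features, and "human feedback" is simulated as pairwise preferences. The sketch fits a Bradley-Terry reward model to those preferences by gradient ascent, then uses it to rank new candidate outputs—one of the simplest shapes that "train a reward model from human comparisons" can take; it is not a description of any particular lab's implementation.

```python
# Toy sketch of learning a reward model from pairwise human preferences.
# Everything here (features, rater behavior) is an invented assumption.
import math
import random

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(preferences, dim, lr=0.5, epochs=200):
    """preferences: list of (preferred_features, rejected_features) pairs."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in preferences:
            # P(good preferred over bad) under a Bradley-Terry model
            p = 1.0 / (1.0 + math.exp(dot(w, bad) - dot(w, good)))
            # Gradient ascent on the log-likelihood of the observed preference
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

# Simulated raters prefer outputs that are accurate (feature 0) and are
# indifferent to verbosity (feature 1).
random.seed(0)
prefs = []
for _ in range(100):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    prefs.append((a, b) if a[0] > b[0] else (b, a))

w = train_reward_model(prefs, dim=2)
candidates = {"accurate": [0.9, 0.1], "verbose": [0.2, 0.9]}
best = max(candidates, key=lambda k: dot(w, candidates[k]))
print(best)  # the learned reward model should favor the accurate output
```

The interesting research questions start where this toy stops: what happens when the raters themselves can't reliably tell which output is better, which is exactly the regime steps 2 and 3 are pointing at.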
This is just one type of project you could do in this space. The larger motivating question here is something like, “It looks like at least some existing models, in at least some domains, ‘have the ability’ to exceed at least some humans in a fuzzy domain, but it’s not obvious how to ‘draw it out’ and how to tell if they are ‘doing the best they can to help.’ What do we do about that?”
I don’t think the project-generation formula I laid out above will turn out to be the best/most productive formulation of the work in the end; I’m just trying to get the ball rolling with something that seems concrete and tractable right now. As one example, the project-generation formula above is putting reward learning / “outer alignment” front and center, and I could imagine other fruitful types of projects that put “inner alignment” issues front and center.
Existing work in this area
This kind of work only became possible to do extremely recently, and mostly only in industry AI labs; I’m not aware of a paper that follows all three steps above completely. But “Learning to summarize from human feedback” (Stiennon et al., 2020) accomplishes the easier version of 1 and a bit of 2 and 3. The authors chose the fuzzy task of summarizing Reddit posts; there was an existing corpus of human demonstrations (summaries of posts written by the posters themselves, beginning with “TL;DR”):
Reward learning: Ultimately, the quality of summaries generated by a large language model fine-tuned with RL from human feedback exceeded the quality of the Reddit summaries (i.e. it exceeded what some set of reference humans generated). But it didn’t really exceed what the human workers could evaluate—except in the fairly straightforward (but IMO meaningful) sense that the authors figured out quality control procedures, human rating aggregation algorithms, easier framings of the question, training and feedback for workers, etc that allowed them to get better performance than they would have gotten using the most naive implementation of “train on human ratings.”
Scalability: I don’t think the paper makes explicit arguments about scalability, but the method is very domain-general and could plausibly work for significantly harder tasks, especially combined with decomposition (and I’d like to see that systematically attempted).
Interpretability and robustness: The paper doesn’t dig deep into interpretability, reliability, and pathological behavior, but it does demonstrate that optimizing the reward model (learned from human judgments) “too hard” leads to weird pathological summaries that are repetitive, offensive, etc., and addresses this by applying a penalty for diverging too far from the human demonstration distribution.
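The KL-penalty fix mentioned above has a simple shape: the reward actually optimized is the learned reward minus a penalty for drifting away from the supervised policy trained on human demonstrations. Here is a minimal sketch following that shape; the specific numbers and probabilities are invented for illustration.

```python
# Sketch of a KL-penalized reward, following the general shape of the
# objective in Stiennon et al., 2020. All numeric values are invented.
import math

def penalized_reward(learned_reward, p_rl, p_sft, beta=0.1):
    """learned_reward: reward model's score for a sampled summary.
    p_rl, p_sft: probability of that summary under the RL policy and
    under the supervised (demonstration-trained) policy, respectively."""
    kl_term = math.log(p_rl / p_sft)
    return learned_reward - beta * kl_term

# A degenerate, repetitive summary may fool the reward model (slightly
# higher learned reward) but is very unlikely under the demonstration
# policy, so the penalty pulls its effective reward back down.
normal = penalized_reward(learned_reward=1.0, p_rl=0.02, p_sft=0.01)
degenerate = penalized_reward(learned_reward=1.3, p_rl=0.5, p_sft=1e-4)
print(normal > degenerate)  # True: the penalty demotes the degenerate summary
```

Note that this fix works by anchoring the model to the human demonstration distribution, which is exactly what makes it unsatisfying as a long-run answer: it limits how far past human demonstrations the model can go.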
What kinds of projects do and don’t “count”
In the high-level description of this research area, I’ve aimed to be as broad as possible while picking out the thing that seems interestingly different from other research in alignment right now (i.e. the focus on narrowly superhuman models). But given such a broad description, it can be confusing what does and doesn’t count as satisfying it. Would self-driving cars count? Would MuseNet count? Would just training GPT-4 count?
Firstly, I don’t think whether a project “counts” is binary—in some sense, all I’m saying is “Find a model today such that it seems as non-obvious as possible how to align it, then try to align it.” The more obvious the training signal is, the less a project “counts.” But here are some heuristics to help pick out the work that currently feels most central and helpful to me:
You should probably be fine-tuning an existing large model: I don't think we should be guessing what size models could have the potential to be narrowly superhuman in some domain; I think an alignment project should probably be inspired by noticing that an existing model seems to have some "knowledge" or "skill" that it's not adequately harnessing because it doesn't "want to", as in the example with GPT-3 and health advice above. I would guess the base model you start with should be >>1B parameters, and the larger the better—this is because the larger the model is, the more likely it is to have the capacity to be superhuman in an interesting, challenging domain. Less confidently, I would guess that you probably want to be fine-tuning a generative model like GPT-3 or MuseNet (as opposed to a supervised learning model like an image classifier or an RL model like AlphaGoZero or AlphaStar), because those models seem closest to being able to do "interesting real-world tasks" better than some humans can.
If you’re making the model larger, it doesn’t count: I see the point of this work as “realizing the potential of existing state-of-the-art models in fuzzy domains”, rather than pushing forward the state-of-the-art in models’ raw potential. Note that this doesn’t mean I think scaling up models is always bad—I definitely see risks there, but also potential benefits depending on who does it and how (e.g. new large models can also create new opportunities to do empirical alignment research like this). I think the question of the sign of scaling work is pretty complicated and situation-dependent. I just want to clearly distinguish between the projects of “aligning narrowly superhuman models” and “scaling models up to make them (more) superhuman”, and make it clear that someone could participate in one without participating in the other. So, for example, training GPT-4 would not count as aligning a narrowly superhuman model.
If you’re not dealing with humans, it probably doesn’t count: I think that if you can get the model to achieve superhuman performance at some task without collecting any human feedback or human demonstrations, the task is probably not “fuzzy” enough. It shouldn’t be easy for humans to just write down an algorithm specifying what they want, and there shouldn’t be an existing dataset that just demonstrates what they want. In practice, I also don’t think human demonstrations alone will cut it (unless they are cleverly combined with an amplification-like scheme or somehow augmented or assisted); RL from human feedback will probably be necessary. My guess is that self-driving cars mostly fail on these grounds—in a lot of self-driving car companies, only the recognition of objects in a scene is done with large neural nets, and those are trained almost entirely from labeled datasets. To the extent that large models are used for the actual driving policy (which they usually aren’t), relatively simple/algorithmic training signals like “how far is the car from other cars”, “how centered is it in the lane”, “how smooth is its acceleration”, etc seem probably adequate to elicit human-level or superhuman driving ability without bringing in feedback from human judgments.
If you didn’t make the model genuinely useful, it probably doesn’t count: I think we should generally be choosing complex, multi-dimensional real-world tasks where there is a lot of room to improve on typical humans’ actions and/or judgments—giving advice, summarizing research, coding, writing emails, translation, telling stories, etc. In the end, these models should feel impressive and valuable—they generally wouldn’t constitute a commercial product on their own because commercial products are rarely “clean” or “pure ML”, but should ideally have the potential to become a product with some design and engineering work. If the selected task was not valuable or at least inherently interesting, I would guess that the alignment problem wasn’t hard enough and much of the benefits of “practicing on something similar to the real deal” would be reduced. Note however that “genuinely useful” doesn’t mean optimized for usefulness alone—I expect this research will not look like the shortest path to creating a valuable product (e.g. by construction the approach I propose below makes it much harder than it has to be if you just want to train a model to be useful somehow). See this objection and response for more detail.
I think some projects that don’t fit all these criteria will also constitute useful progress on aligning narrowly superhuman models, but they don’t feel like central examples of what I’m trying to point at.
Potential near-future projects: “sandwiching”
I think a basic formula that could take this work a step beyond Stiennon et al., 2020 is a) "sandwich" the model in between one set of humans which is less capable than it and another set of humans which is more capable than it at the fuzzy task in question, and b) figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set of humans. For example,
First fine-tune a coding model to write short functions solving simple puzzles using demonstrations and feedback collected from expert software engineers. Then try to match this performance using some process that can be implemented by people who don’t know how to code and/or couldn’t solve the puzzles themselves.
First fine-tune a model to answer long-form questions in a domain (e.g. economics or physics) using demonstrations and feedback collected from experts in the domain. Then try to match this performance using some process that can be implemented by people who know very little about the domain.
First fine-tune a model to translate between English and French using demonstrations and feedback collected from people who are fluent in both languages. Then try to match this performance using some process that can be implemented by people who are fluent in one language and barely know the other (or don’t know it at all and only have a dictionary). Something similar was done in Lample et al., 2018, although they didn’t use human feedback.
In all of these cases, my guess is that the way to get the less-capable group of humans to provide training signals of a similar quality to the more-capable group will involve some combination of:
Training models to help the humans form better judgments (for example, training models to explain the meaning of technical terms or to fetch and summarize relevant papers for humans).
Breaking down the problem and splitting it up among many humans (as in HCH, short for "Humans Consulting HCH").
Getting models to explain why they’re doing what they’re doing in simpler terms that connect to things the human overseers understand (this feels like it could fit under debate or interpretability).
Figuring out how to train the human workers, and how to separate their good judgments from noise / mistakes.
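The sandwiching evaluation itself can be sketched in miniature. In this toy version, every detail is invented: "experts" label items correctly, unaided "non-experts" label them noisily, and the assisted non-expert process is just a majority vote over several noisy judgments—a crude stand-in for "splitting the problem among many humans" from the list above. The research question is whether some such process can close the gap to expert-level supervision.

```python
# Toy sketch of a sandwiching evaluation: can assisted non-experts
# approach expert-quality training signals? All details are invented.
import random
from collections import Counter

random.seed(1)

def expert_label(item):
    return item % 2  # stand-in for ground truth the experts can see

def noisy_label(item, error_rate=0.3):
    # An unaided non-expert: right most of the time, wrong 30% of the time
    truth = expert_label(item)
    return truth if random.random() > error_rate else 1 - truth

def assisted_label(item, n_judges=9):
    # Split the judgment across several non-experts and take a majority
    # vote, one of the simplest "help the less-capable group" schemes.
    votes = Counter(noisy_label(item) for _ in range(n_judges))
    return votes.most_common(1)[0][0]

items = range(500)
unaided_acc = sum(noisy_label(i) == expert_label(i) for i in items) / 500
assisted_acc = sum(assisted_label(i) == expert_label(i) for i in items) / 500
print(f"unaided {unaided_acc:.2f} vs assisted {assisted_acc:.2f}")
```

Real sandwiching projects would of course replace majority voting with the richer assistance schemes listed above (model-aided judgment, decomposition, debate-style explanations), and the judgments would be over fuzzy outputs like code or long-form answers rather than binary labels.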
It may not yet be possible to do these more ambitious projects (for example, because models may not be powerful enough yet to train them to meaningfully help human evaluators, engage in debates, meaningfully exceed what humans can recognize / verify, etc). In that case, I think it would still be fairly valuable to keep doing human feedback projects like Stiennon et al., 2020 and stay on the lookout for opportunities to push models past human evaluations; state-of-the-art models are rapidly increasing in size and it may become possible within a couple of years even if it's not quite possible now.
Importantly, I think people could make meaningful progress on aligning narrowly superhuman models using existing models without scaling them up any further, even if they are only superhuman with respect to human demonstrations for now—there’s a lot we don’t know even just about how to do RL from human feedback optimally. And in the near future I expect it will be possible to use the larger models which will likely be trained to do even more interesting projects, which have the potential to exceed human evaluations in some domains.
(For more speculative thoughts on how we might go beyond “sandwiching”, see the appendix.)
How this work could reduce long-term AI x-risk
On the outside view, I think we should be quite excited about opportunities to get experience with the sort of thing we want to eventually be good at (aligning models that are smarter than humans). In general, it seems to me like building and iterating on prototypes is a huge part of how R&D progress is made in engineering fields, and it would be exciting if AI alignment could move in that direction.
If there are a large number of well-motivated researchers pushing forward on making narrowly superhuman models as helpful as possible, we improve the odds that we first encounter serious problems like the treacherous turn in a context where a) models are not smart enough to cause actually catastrophic harm yet, and b) researchers have the time and inclination to really study them and figure out how to solve them well rather than being in a mode of scrambling to put out fires and watching their backs for competitors. Holistically, this seems like a much safer situation to be in than one where the world has essentially procrastinated on figuring out how to align systems to fuzzy goals, doing only the minimum necessary to produce commercial products.
This basic outside view consideration is a big part of why I’m excited about the research area, but I also have some more specific thoughts about how it could help. Here are three somewhat more specific paths for working on aligning narrowly superhuman models today to meaningfully reduce long-term x-risk from advanced AI:
Practical know-how and infrastructure: It seems likely that a successful long-run approach to (machine learning-based) alignment will involve somehow learning from human demonstrations and/or feedback as a key component, and also pretty likely that it will involve somehow using ML tools to help go beyond raw human judgment. I’d guess that a number of low level details about how ideas like “RL from human feedback” and “ML aiding human judgments” are implemented will make a difference to how successful the approach is: things like which human judges are selected, how well they are trained and how much practice they have, what exact types of questions are used to elicit the judgments, what judgment aggregation and quality assurance procedures are used, whether there are good off-the-shelf ML solutions for enhancing human judgments in certain ways, whether there are easy-to-use platforms that let researchers gather good human feedback at the push of a button, etc. Aligning narrowly superhuman models today could help build up tools, infrastructure, best practices, and tricks of the trade. I expect most of this will eventually be developed anyway, but speeding it up and improving its quality could still be quite valuable, especially in short timelines worlds where there’s a lot less time for things to take their natural course.
Better AI situation in the run-up to superintelligence: If at each stage of ML capabilities progress we have made sure to realize models’ full potential to be helpful to us in fuzzy domains, we will be going into the next stage with maximally-capable assistants to help us navigate a potentially increasingly crazy world. We’ll be more likely to get trustworthy forecasts, policy advice, research assistance, and so on from our AI assistants. Medium-term AI challenges like supercharged fake news / clickbait or AI embezzlement seem like they would be less severe. People who are pursuing more easily-measurable goals like clicks or money seem like they would have less of an advantage over people pursuing hard-to-measure goals like scientific research (including AI alignment research itself). All this seems like it would make the world safer on the eve of transformative AI or AGI, and give humans more powerful and reliable tools for dealing with the TAI / AGI transition.
Chance of discovering or verifying long-term solution(s): I’m not sure whether a “one shot” solution to alignment (that is, a single relatively “clean” algorithm which will work at all scales including for highly superintelligent models) is possible. But if it is, it seems like starting to do a lot of work on aligning narrowly superhuman models probably allows us to discover the right solution sooner than we otherwise would have. For one thing, people doing this work could test proposals (such as Iterated Distillation and Amplification) coming from more conceptual researchers, verifying or falsifying elements and proposing modifications informed by empirical understanding. It also seems plausible that a solution will emerge directly from this line of work rather than the conceptual work—the latter is mostly focused on finding a one-shot solution that will work under ~pessimal empirical assumptions, but it seems very plausible that a) it’s impossible to find a one-shot solution that works under worst-case empirical assumptions, but b) it’s possible to find one that works given the actual ways that models tend to learn or generalize. More broadly, “doing empirical science on the alignment problem”—i.e. systematically studying what the main problem(s) are, how hard they are, what approaches are viable and how they scale, etc—could help us discover a number of different avenues for reducing long-run AI x-risk that we aren’t currently thinking of, one-shot technical solutions or otherwise.
I think both the broad outside view and these specific object-level benefits make a pretty compelling case that this research would be valuable on the object level. Additionally, from a “meta-EA” / “community building” perspective, I think pioneering this work could boost the careers and influence of people concerned with x-risk because it has the potential to produce conventionally-impressive results and demos. My main focus is the case that this work is valuable on the merits and I wouldn’t support it purely as a career-boosting tool for aligned people, but I think this is a real and significant consideration that can tip the scales.
Advantages over other genres of alignment research
First, I’ll lay out what seem like the three common genres of alignment research:
Conceptual research: This is pen-and-paper thinking that often looks like a combination of math and philosophy, which is usually aiming to make progress toward a “one shot” solution (and also often involves a lot of disentangling and framing what the problem even is). The most prominent examples are MIRI’s work and Paul Christiano’s work; a number of other posts on the Alignment Forum also fit in this category.
Gridworlds and games: This work aims to demonstrate alignment problems such as wireheading or other reward hacking in a relatively small-scale artificial setting such as a simple game, and usually to solve the demonstrated problem(s) in the small-scale setting in a way that could shed light on how to solve larger-scale alignment problems. Two examples are REALab (Kumar et al., 2020) and Inverse Reward Design (Hadfield-Menell et al., 2017).
Mainstream ML safety: This is alignment-relevant work that existing ML researchers were independently working on; most of it fits under “reliability+robustness” or “interpretability.” This work is usually done on fairly large (though not always state-of-the-art) neural networks, but doesn’t usually pay special attention to the case where models are more capable or knowledgeable than humans. Some examples are the OpenAI microscope (interpretability), Dathathri et al., 2020 (robustness and reliability), and the Unrestricted Adversarial Examples Challenge (robustness and reliability).
I’m broadly supportive of all three of these other lines of work, but I’m excited about the potential for the new approach described in this post to “practice the thing we eventually want to be good at.” I think on the outside view we should expect that doing whatever we can find that comes closest to practicing what we eventually want to do will be good in a number of ways (e.g. feeling and looking more “real”, encouraging good habits of thought and imposing helpful discipline, etc).
More specifically, here are some advantages that it feels like “aligning narrowly superhuman models” line of work has over each of the other three genres:
Compared to conceptual research, I’d guess aligning narrowly superhuman models will feel meatier and more tractable to a number of people. It also seems like it would be easier for funders and peers to evaluate whether particular papers constitute progress, which would probably help create a healthier and more focused field where people are broadly more on the same page and junior researchers can get stronger mentorship. Related to both of these, I think it provides an easier opportunity for people who care about long-run x-risk to produce results that are persuasive and impressive to the broader ML community, as I mentioned above.
Compared to gridworlds and games, I think this work stands a greater chance of scaling up to more capable systems—I think it would probably provide some good discipline to do alignment work at a scale that’s large enough that it’s already kind of unwieldy, where models are already more capable than their overseers in some real-world-relevant ways, and researchers are forced to confront messy details and hard-to-foresee structural issues. When it’s possible to demonstrate an issue at scale, I think that’s usually a pretty clear win.
Compared to mainstream ML safety, aligning narrowly superhuman models has some of the “discipline” advantages mentioned above of focusing on situations where models are more capable than humans. Additionally, lots of researchers work on interpretability and robustness for lots of different reasons, meaning the specific research priorities and “tastes” of the broader interpretability and robustness fields won’t be particularly optimized for reducing long-run x-risk. This can make it harder for newer researchers motivated primarily by x-risk to zoom in on the most x-risk-relevant subproblems and get adequate mentorship on that; aligning narrowly superhuman models has the potential to be more x-risk-oriented from the start.
Finally and maybe most importantly, I think aligning narrowly superhuman models has high long-run field growth potential compared to these other genres of work. Just focusing on GPT-3, there are already a lot of different fuzzy goals we could try to align it to, and the number of opportunities will only grow as the ML industry grows and the number and size of the largest models grow. This work seems like it could absorb a constant fraction (e.g. 1% or 5%) of all the ML activity—the more models are trained and the more capable they are, the more opportunity there is to align narrowly superhuman models to ever more tasks.
I think we have a shot at eventually supplying a lot of people to work on it too. In the long run, I think more EAs could be in a position to contribute to this type of work than to either conceptual research or mainstream ML safety. Conceptual research is often foggy and extremely difficult to make progress on without a particular kind of inspiration and/or hard-to-define “taste”; mainstream ML safety is often quite technical and mathematically dense (and ensuring the work stays relevant to long-run x-risk may be difficult).
A lot of work involved in aligning narrowly superhuman models, on the other hand, seems like it’s probably some combination of: a) software engineering and ML engineering, b) dealing with human contractors, and c) common sense problem-solving. Lead researchers may need to bring taste and research judgment to ensure that the work is well-targeted, but a number of people could work under one lead researcher doing tractable day-to-day work with reasonably good feedback loops. If there were institutional homes available to onboard people onto this work, I think a strong generalist EA with a software engineering background could plausibly retrain in ML engineering over 6-12 months and start contributing to projects in the space.
Right now there are only a few organizations that offer roles doing this work and that seems like a big bottleneck, but it could make sense to prioritize creating more institutional homes and/or rapidly expanding the ones that exist.
Objections and responses
In this section I’ve tried to anticipate some potential objections, and give my responses; I’d suggest skipping around and reading only the ones that interest you. I don’t think that I have knock-down answers to all of these objections, but I do remain holistically excited about this idea after reflecting on them some.
How would this address treachery by a superintelligence?
Elaboration of objection: It seems like there is a “hard core” of the alignment problem that only crops up when models are very smart in a very general way, not just e.g. better than MTurkers at giving medical advice. The specific scariest problem seems to be the “treacherous turn”: the possibility that the model will appear to be helpful during training time even though it’s actually power-seeking because it’s aware that it’s being trained and has to act helpful to survive, and later cause catastrophic harm once it knows it’s out of the training setup. It doesn’t seem like the “aligning narrowly superhuman models” style of work will figure out a way to address the treacherous turn until it’s likely too late.
I’m very uncertain how relevant the near-term work will turn out to be for more exotic problems like the treacherous turn, and I want to think more about ways to nudge it to be more relevant. I would be very excited to find empirical research projects on large models that specifically shed light on the treacherous turn possibility, and I agree it’s a weakness of my set of potential projects that they aren’t specifically optimized for unearthing and correcting treachery.
With that said, I don’t think there are currently genres of work that feel similarly tractable and scalable that do tackle the treacherous turn head-on—of the main genres of alignment work, I’d argue that only a subset of the conceptual work is aiming to directly generate a long-term solution to treachery, and I think the jury is very much out on whether it will be fruitful; gridworlds and games and mainstream ML safety largely don’t seem to try for a long-term treacherous turn solution. So I think the relative hit that my proposal takes due to this consideration is fairly limited.
Even if they don’t start off tackling the treacherous turn, I’d guess that researchers would have a decent shot at learning useful things about treachery down the line if they were pursuing this work. Basically, I think it’s pretty likely that full-blown treachery will be preceded by mini-treachery, and with better understanding of how neural networks tend to learn and generalize, researchers may be able to specifically seek out domains where mini-treachery is especially likely to occur to better study it. Even if techniques used by empirical researchers don’t work out of the box for the treacherous turn, empirical work eliciting and studying mini-treachery could still inform what kind of theoretical or conceptual work needs to be done to address it, in a way that seems more promising to me than eliciting micro-treachery in gridworlds and games.
Moreover, even though the treacherous turn seems like the scariest single source of risk, I don’t think it totally dominates the overall expected AI risk—a significant fraction of the risk still seems to come from more “mundane” outer alignment failures and various unforced errors, which this empirical work seems better-placed to address. Of the three broad ways I listed that this work could reduce x-risk, the critique that it doesn’t seem to address the treacherous turn very well applies most to the “Chance of discovering or verifying long-term solution(s)” category; even if it fails to address the treacherous turn, it still seems that “Practical know-how and infrastructure” and “Better AI situation in the run-up to superintelligence” matter.
Doesn’t this feel suspiciously close to just profit-maximizing?
Elaboration of objection: It sort of sounds like you’re just telling EAs to make AI really useful to humans (and indeed push models to be superhuman if they can be); it feels like this would also be what someone who is into pure profit-maximization would be excited about, and that makes me suspicious about the reasoning here and nervous about calling it an alignment activity. Even if you’re right that it helps with alignment, we might see a lot of people flock to it for the wrong reasons.
I agree that there is overlap with commercial incentives, but I think there are three high-level ways that this type of work would be different from what you’d do if you were profit-maximizing:
Not making models bigger: This work doesn’t involve making models bigger; it involves making models of a given fixed size more helpful. In a commercial setting, often a cost-effective way of improving results would be to simply scale the model up.
Seeking difficult rather than easy problems: The problem selection is different—other things being equal, in a commercial setting you want to select the easiest possible tasks; in this type of work, people would select interestingly difficult tasks. For example, commercial incentives would push someone to focus on precisely those tasks where simply meeting (rather than exceeding) the human imitation benchmark is sufficient for being profitable. Profit-motivated people would also likely seek tasks where algorithmically generated or hard-coded reward signals would go a long way (for example, in robotics you might be able to get away with providing algorithmically generated feedback about whether the robot’s actuators ended up in the right place). The sandwiching approach I propose above is by construction making things much harder than they need to be from a pure commercial standpoint: it involves refusing to use the “best human overseers for the job” in favor of trying to figure out how to help less-capable overseers provide an adequate training signal.
Seeking domain-general and scalable techniques: There is a focus on scalability and generality of techniques that goes well beyond what would be commercially optimal. In commercial settings, I expect that people will make heavy use of hard-coded behaviors and “hacks” which fully exploit domain knowledge (as is the case with self-driving cars). Additionally, there is often a “right size model for the job” in commercial settings (image models only need to be so big to adequately power self-driving car perception), and there will often not be much incentive to find techniques that also work well for a model 100x bigger. A “clean”, domain-general, and scalable technique is rarely what will make the most profit at the current moment.
More broadly, I think successful versions of this type of alignment work should get someone who deeply understands ML and its limitations to say something like, “Wow, it’s cool that you got the model to do that.” My sense is that most commercial projects wouldn’t really elicit this reaction, and would look more like applying a lot of hard work to realize an outcome that wasn’t very much in doubt.
Given these differences, I think there’s a good shot at distinguishing this type of work from pure profit-seeking and cultivating a community where a) most people doing this work are doing it for altruistic reasons, and b) this is reasonably legible to onlookers, funders, potential junior researchers, etc.
Isn’t this not neglected because lots of people want useful AI?
Elaboration of objection: Even if this is useful for alignment, and even adjusting for the fact that companies aren’t focusing on the version that’s specifically alignment-optimized, won’t a ton of this work get done in AI labs and startups? Doesn’t that mean that the EA community is less likely to make an impact on the margin than in other, less-commercially-incentivized types of alignment work?
I do think there’s probably some work happening broadly along these lines from a commercial motivation, and there will probably be significantly more in the future. But I pretty strongly suspect that there are very few, if any, projects like the ones I proposed above currently being done in a commercial setting, and what work is being done is less well-targeted at reducing long-run x-risk than it could be.
The vast majority of commercial work going into AI by dollars is a) hyper application-specific and hard-coding intensive such as self-driving cars, or b) focused on scaling big generic models. I don’t actually think the resources going into any sort of project focused on human demonstrations and feedback are very large right now; I’d guess they’re within an order of magnitude of the resources going into other alignment work (e.g. $100s of millions per year at the high end, where other alignment research absorbs $10s of millions per year). And for the reasons outlined above, not a lot of this will be focused on exceeding humans using scalable, domain-general techniques.
As an example to illustrate the relative neglectedness of this work, it was Paul Christiano (motivated by long-term alignment risk concerns) who led the Stiennon et al., 2020 work, and I think it’s reasonably likely that if he hadn’t done so there wouldn’t have been a human feedback paper of similar scale and quality for another year or so. I’d guess the EA community collectively has the opportunity to substantially increase how much of this work is done before transformative AI with a strong push, especially because the “going beyond human feedback” step seems less commercially incentivized than the Stiennon et al. work.
Some additional thoughts on neglectedness:
I think that it matters who is doing this work and why, not just that the work gets done somehow. It seems significantly better to have someone working on these problems who is self-awarely doing it to help with long-run x-risk reduction, and who is plugged into the broader alignment community, than someone who just happens to be doing work that might be relevant to alignment. It’s valuable to be collaborating with and getting feedback from more theoretical alignment researchers, and to be mentally on the lookout for ways to make the work more analogous to the long-run challenge; a generic ML engineer working on human feedback to improve the newsfeed at Facebook would be much less likely to continue to keep focusing on long-run-relevant questions for their whole career. (And one of the value propositions here is that the long-termists / AI alignment people, as a community, should be gathering this experience, so experience that’s less accessible to the community is less valuable.)
I think that for most people, the value (roughly speaking, the importance multiplied by the tractability) of doing marginal work in an area as a function of its crowdedness is often an upside-down U-shape rather than strictly decreasing. When there’s practically no one in an area, there’s no one who can mentor you when you’re getting started, no one who you can hire when you’re experienced, and there’s no built-in audience who can be swayed by your demonstrations or arguments and can act on that. My personal intuition is that for empirical alignment work, we’re near the increasing returns part of this curve (though this situation can change rapidly). There’s an existing group of people who have an incentive to work on something in this space and may ramp up soon, but I think EAs have a chance to set the tone and agenda for what exactly the work they do looks like, and what standards it should be held to. I could imagine a pretty broad range of outcomes for how much ML engineers working on productizing hold themselves to the standard of finding domain-general and scalable solutions, and I could imagine EAs having an impact on that culture.
Will this cause harm by increasing investment in scaling AI?
Elaboration of objection: Even if the people doing this research don’t personally scale up models and focus on generalizable and scalable solutions to making models helpful, they will be demonstrating that the models have powerful and useful capabilities that people might not have appreciated before, and could inspire people to pour more investment into simply scaling up AI or making AI useful in much less principled ways, which could cause harm that exceeds the benefits of the research.
This is a very contentious question and people have a wide range of intuitions on it. I tend to be less bothered by this type of concern than a lot of other people in the community across the board. At a high-level, my take is that:
We’re in the middle of an AI investment boom that I expect to be sustained for several more years.
The amount of effort going into AI as a whole ($10s of billions per year) is currently ~2 orders of magnitude larger than the amount of effort going into the kind of empirical alignment I’m proposing here, and at least in the short-term (given excitement about scaling), I expect it to grow faster than investment into the alignment work.
This means an additional dollar of effort going into the empirical language-model alignment work would need to generate ~$100 or more of investment into accelerating AI to have a proportionally large impact on accelerating AI as a whole, in a climate where investors are already excited and AI labs are already trying hard to make them more excited. This isn’t out of the question, but doesn’t seem likely to me, especially given that EAs would likely be partially displacing people who would do similar work from a pure profit motivation, and that we could try to consciously shape messaging to further reduce the expected impact on AI hype. (In general, it’s hard to get a factor of 100 leverage on your spending even if you’re optimizing for it.)
It also seems plausible that there are positive side effects on others’ investment, such as directing marginal money away from making models larger and toward fine-tuning models to be helpful.
Finally, I am not personally fully convinced that speeding up AI as a whole would be net negative (it seems like timing interacts in extremely complicated ways with who is in power and what the global situation is like around the time of transformative AI), which claws back some of the expected damage from acceleration.
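The ~100x leverage figure above is just the ratio of the two effort levels; as a quick back-of-the-envelope sketch (the round numbers here are illustrative assumptions in the ranges cited above, not measured figures):

```python
# Illustrative round numbers (assumptions, not measurements):
ai_investment = 30e9      # ~$10s of billions/year going into AI as a whole
alignment_effort = 0.3e9  # ~2 orders of magnitude less into this empirical alignment work

# For a marginal alignment dollar to accelerate AI as much, proportionally,
# as it advances this alignment work, it would need roughly this leverage:
leverage_needed = ai_investment / alignment_effort
print(leverage_needed)  # 100.0
```

With different numbers in the cited ranges the ratio moves around, but it stays in the rough neighborhood of two orders of magnitude.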
With that said, I do think that exciting demos are a lot more likely to spur investment than written arguments are, and this kind of research could generate exciting demos. Overall, the case for caution here feels stronger to me than the case for caution about publicly discussing arguments about timelines and takeoff speeds, and this consideration probably net claws back some of my enthusiasm for the proposal (largely out of deference to others).
Why not just stick with getting models not to do bad things?
Elaboration of objection: Even if this is useful for alignment, worth doing on the margin, and not net-harmful, it seems like it would be dominated by doing practical/near-term work that’s more clearly and legibly connected to safety and harm-reduction, like “getting models to never lie” or “getting models to never use racist slurs” or “getting models to never confidently misclassify something.” That work seems more neglected and more relevant.
Some people might feel like “avoiding bad behaviors” is clearly the subset of near-term empirical alignment work which is most relevant to long-run alignment and neglected by profit-seeking actors—after all, in the long run we’re trying to avoid a big catastrophe from misaligned AI, so in the short run we should try to avoid smaller catastrophes.
I disagree with this: I think both “getting models to be helpful and surpass human trainers” and “getting models to never do certain bad things” are valuable lines of empirical alignment work, and I’d like to see more of both. But I don’t think reliability and robustness work has a special place in terms of relevance to long-run x-risk reduction, and if anything it seems somewhat less exciting on the margin. This is because:
Most versions of “make a model more reliable” don’t really get at scalability to tasks/domains that are more challenging for humans to supervise, and it seems especially valuable to specifically target that. It seems very plausible to me that the most interesting challenges that are most analogous to the long-run challenge will only come up when we’re trying to get excellent or superhuman performance out of a model, rather than when we’re trying to avoid certain specific bad things.
I don’t actually think that reliability work is more neglected than the work of getting models to be helpful in domains that are difficult for humans. There is a significantly larger academic field around reliability and robustness than around alignment, and the reliability/robustness problem is often harder to avoid or sidestep as a company: you can choose domains where human expertise is strong or automated reward signals exist, but you will still need to get your product to meet a fairly high bar of reliability before it is commercially viable.
Robustness and reliability fall under multiple different “social good” brands. People concerned with “Fairness, Accountability, and Transparency” (FAT) tend to be very interested in the reliability and robustness space, as do people concerned with e.g. autonomous weapons. Even though there is a worry that the “make models helpful” work is too easy to confuse with commercialization, my weak best guess is that it would actually be harder to tell which people working in the robustness space are optimizing for reducing long-term x-risk from AI (vs for profit or other altruistic goals), and I’d guess it would be tougher to build a distinctive culture / brand around working on the sub-problems most relevant to long-term risk.
Why not focus on testing a candidate long-term solution?
Elaboration of objection: This proposal seems like it would lead to a lot of wasted work that isn’t sufficiently optimized for verifying or falsifying a long-term solution to alignment. It would be better if the potential projects were more specifically tied in to testing an existing candidate long-term solution, e.g. Paul Christiano’s agenda.
I’ll focus on Paul’s agenda in my response, because the specific people I’ve talked to who have this objection mostly focus on it, but I think my basic response will apply to all the conceptual alignment agendas.
Some of the projects under the umbrella of “aligning narrowly superhuman models” seem like they could instead be reframed around specific goals related to Paul’s agenda, like “prototyping and testing capability amplification”, “prototyping and testing imitative generalization”, “figuring out how ascription universality works”, and so on. I do think one of the value propositions of this work is shedding light on these sorts of concepts, but I think it’s probably not helpful to frame the whole endeavor around that:
Verifying proposed long-term solutions is only one way that the work could reduce AI x-risk, and I don’t think it’s overwhelmingly dominant, especially not if restricted to the set of long-run solutions proposed so far. I want people who are committed to reducing long-run AI x-risk but don’t believe in any of the existing conceptual research to be doing this work, too.
Not a lot of people currently understand the agenda well enough that they could generate good research projects from the prompt of “prototype and test [concept from a Paul blog post].” Similarly, I don’t think funders and peer reviewers understand the agenda well enough to tell if a research project with that goal was helpful.
Paul’s agenda is in very active development, and I think there’s a reasonable chance the whole plan ends up looking pretty different within a year or two. Given this and the above point, I think empirical work testing specific Paul ideas is best done in close collaboration with him, and I’d guess even someone who believes in Paul’s agenda would often be better off just targeting the slightly looser problem description absent a lot of access to him. This makes me think research under the frame of “test Paul’s agenda” is a lot less scalable than research under the frame of “align narrowly superhuman models.”
There could be some simple organizing goal or “tagline” for empirical alignment research that is neither “test [concept from a Paul blog post]” nor “align narrowly superhuman models” which would inspire better-targeted research from the perspective of someone who’s bullish on Paul’s work, but the ones I’ve thought about haven’t been convincing, and I’d guess it’ll be hard to find a good organizing tagline until the theory work gets to a more stable state.
Current state of opinion on this work
One of my goals in writing this blog post is to help build some community consensus around the “aligning narrowly superhuman models” proposal if it’s in fact a good idea. To that end, I’ll lay out my current understanding of where various AI alignment researchers stand on this work:
Paul Christiano spent a few years at OpenAI working on this kind of thing (as I mentioned above he was the team lead on the Stiennon et al., 2020 paper) and generally thinks it’s important—he feels the conceptual work he’s currently doing beats it as a use of his own time, but believes that this kind of work is among the best highly scalable types of alignment research.
Alignment researchers I’ve spoken to that primarily do research on large neural networks (unlike Paul, who does a mixture of this and conceptual thinking) tend to be more enthusiastically positive on this and more likely to consider it the best kind of work they personally could do. They also tend to be more positive on even more “no holds barred” versions of this idea—i.e., just trying to make helpful models without focusing in particular on ideas like “sandwiching.”
My understanding of Eliezer Yudkowsky’s position is one of “cautious relative optimism” about something in this general space compared to other non-MIRI alignment work, though he would frame the core concern differently, with more emphasis on understandability of models’ answers and decisions (e.g. “GPT-3 has somewhere buried inside it knowledge of what to do when you’re sick; how do you extract all of that and how can you tell when you’ve succeeded?”). He was reasonably positive on Stiennon et al., 2020 when it came out, and would be happy to see more work like that. Evan Hubinger’s position seems broadly similar (he is specifically interested in ascription universality). I’m not sure where others at MIRI would land on this work.
My sense is that people who do conceptual thinking work other than Paul and MIRI tend to have a position similar to or somewhat more optimistic than Eliezer’s or Evan’s. E.g. I think Rohin Shah feels that aligning narrowly superhuman models is a reasonably good baseline for what research to do (and is developing a benchmark related to this), but he has privileged insight that beats that baseline. My rough sense is that other researchers doing conceptual thinking are on average somewhat less excited about aligning narrowly superhuman models than Paul is, and a lot less excited than the pure ML alignment researchers, but I’m not sure.
I also think a number of AI alignment researchers (and EAs working in AI risk more broadly) simply haven’t thought a lot about this kind of work because it hasn’t really been possible until the last couple of years. Until 2019 or so, there weren’t really any models accessible to researchers which could exceed human performance in fuzzy domains, and research agendas in AI alignment were largely formed before this was an option.
Takeaways and possible next steps
I’ve laid out the hypothesis that aligning narrowly superhuman models would concretely reduce x-risk and has high long-run field growth potential (i.e., lots of people who don’t have particularly esoteric skills could eventually help with it). I think if the EA and AI alignment community is in broad agreement about this, there’s potential to make a lot happen.
In terms of immediate actionable takeaways:
If you disagree with this argument, say so—especially if you think it would be harmful or would be dominated by a different line of work that shares similar practical advantages of tangibility, good feedback loops, and potential-for-scale.
If you have more or better project ideas in mind, say so—especially if you have ideas about how to target “treacherous turn” dynamics more specifically or how to reframe the statement of the problem to make it more productive, well-targeted, etc.
If you a) already agree with me, and b) are already in a good position to fairly immediately make this work happen (e.g. you are a PI at a university lab that is able to fine-tune open-source models like Google’s T5, or you are a senior ML researcher at a tech company with the freedom to do your own projects), then consider doing a project in this space. For example, you could try to solve tasks in this Minecraft human feedback benchmark being developed by some researchers at CHAI when it’s released. Getting more demos of what it looks like to do this research will help make it easier to think about how valuable it would be and build consensus around it if it is. Most people will not be in this position. As I said at the top, Open Phil is not soliciting grant applications right now from people who want to try it out—this blog post is my personal viewpoint, and institutionally we’re still figuring out how much we want to prioritize this (discussion and arguments surrounding this post will feed into that).
If you agree with this case and might be in a position to work on aligning narrowly superhuman models a few years down the line (e.g. if you are a software engineer or a university student with a technical background), consider keeping this in the back of your mind and checking in about future opportunities. If you are ready to try to switch into this work sooner, there may be jobs available doing this sort of work at various AI labs including OpenAI and DeepMind, though I’d guess roles right now are fairly limited.
Looking forward to hearing people’s thoughts!
Appendix: beyond sandwiching?
Right now, models like GPT-3 are not “superhuman” at fuzzy tasks in the sense that AlphaGoZero is “superhuman” at playing Go. AGZ plays Go better than any human, while GPT-3 is only capable of giving better advice or writing better stories than some humans, which is what makes the “sandwiching” tactic an option. What happens when language models and other models get narrowly superhuman in a strong sense—better than all humans in some fuzzy domain, e.g. stock-picking? How would we verify that we got the model to be “doing the best it can do to help” when there’s no reference model trained on a ground truth signal to compare its performance to?
I’m definitely very unsure what this would look like, but an important starting assumption I have is that whatever techniques worked well to get less-capable humans to reproduce the judgments of more-capable humans in a “sandwich” setting stand a good chance of just continuing to work. If we were careful not to actually use the expertise of the more-capable set of humans in whatever systems/tools we used to assist/augment the less-capable set, and a similar set of systems/tools seemed to work across multiple domains and for humans at multiple different capability levels, there’s no particular reason to believe they would not continue working once models go from slightly less capable than the best humans to slightly more capable than them at some task. So I think it’s possible we could do most of the R&D in the regime where sandwiching works.
With that said, here are some thoughts about how we could try to probe whether our alignment techniques were actually successful at eliciting a model’s full potential in a regime where the model is more capable than the best humans:
We could attempt the sandwiching technique with augmented or assisted humans at both the top and the bottom. For example, if a model is capable of better stock-picking than the best human hedge fund analysts, we could compare its performance when trained using feedback from top hedge fund analysts vs feedback from normal people—where both the hedge fund analysts and the normal people are assisted/augmented using whatever decomposition techniques, question elicitation techniques, ML assistants, etc. are the state of the art. Since the expertise of the individual humans within the system ideally shouldn’t bear on how well they are able to provide supervision, the goal would be to get the “augmented normal people” to produce a model that behaves similarly to the model produced by the “augmented experts.”
We could try to exploit the fact that it’s usually easier to recognize a good outcome (especially after some time has passed) than to generate good behaviors. For example, we could compare the performance of a stock-price-predicting model trained using feedback from augmented hedge fund analysts to the performance of a model trained with the benefit of hindsight about what stocks actually went up over a one-year period. Ultimately, I think it’s scary if we have to rely entirely on medium- and long-run outcomes to train our superhuman models, but in an R&D context it could make sense to use a model trained on medium-run outcomes to check whether our techniques for augmenting human judgment are managing to elicit the full potential of the model.
More speculatively, we could build up interpretability tools to probe the model and try to understand why it’s doing what it’s doing to build up confidence that it’s “trying its best” and making reasonable decisions. We could try to work on something like this in the near-term with AlphaGoZero itself—try to develop interpretability tools that will let pro Go players learn new insights about how to play Go better by probing AGZ. If we develop tools like this and verify them on models like AGZ, top hedge fund analysts could later use them to probe a stock-picking model and develop a better understanding of whether it’s truly “trying its best to pick the right stocks.”
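As a toy illustration of the first probe above (sandwiching with augmented overseers at both the top and the bottom), here is a minimal sketch; the function names and the one-number “skill” model are entirely hypothetical, just to make the comparison concrete:

```python
# Toy stand-in for the "sandwich at both ends" check: train one copy of the
# model on feedback from augmented experts and one on feedback from augmented
# novices, then ask whether the two perform similarly -- evidence that the
# augmentation, not the overseers' own expertise, is carrying the supervision.

def train_with_feedback(model_skill, overseer_skill, augmentation):
    # Hypothetical model: the trained model's quality is capped by the lesser
    # of its own latent skill and what its augmented overseers can judge.
    return min(model_skill, overseer_skill + augmentation)

def sandwich_check(model_skill, expert_skill, novice_skill, augmentation, tol=0.05):
    expert_trained = train_with_feedback(model_skill, expert_skill, augmentation)
    novice_trained = train_with_feedback(model_skill, novice_skill, augmentation)
    return abs(expert_trained - novice_trained) <= tol

# With strong augmentation, novice supervision matches expert supervision:
print(sandwich_check(model_skill=1.0, expert_skill=0.9, novice_skill=0.5, augmentation=0.6))  # True
# With weak augmentation, the novices' own expertise becomes the bottleneck:
print(sandwich_check(model_skill=1.0, expert_skill=0.9, novice_skill=0.5, augmentation=0.1))  # False
```

The point of the check is that if the assistance/augmentation tools are doing the supervisory work, swapping experts for novices shouldn’t change the trained model much—which is the property we’d want to carry over once no human experts exist at all.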
At least better than some salient large group of humans in a particular context, like “Mechanical Turk workers”, “Stack Overflow users”, etc. Right now, models are only superhuman with respect to all humans in particular crisp domains like games. E.g. AlphaGoZero is better at Go than any human; GPT-3 probably has the potential to give better advice than some humans.
This idea isn’t original to me—a number of others (especially some people working on long-term AI alignment at OpenAI and DeepMind) have thought along similar lines. My own thinking about this has been informed a lot by discussions with Paul Christiano and Holden Karnofsky.
e.g., Mechanical Turk workers who are hired to give feedback to the model
Though if we could pull off a path where we build an AI system that is superhuman in certain engineering capabilities but not yet human-level in modeling and manipulating people, and use that system to cut down on x-risk from other AI projects without having to figure out how to supervise arbitrary superhuman models, that could be really good.
Note that I don’t think this is the only way to study interpretability and robustness, or even necessarily the best way. In this project-generation formula, the domain and task were optimized to make reward learning an especially interesting and important challenge, rather than to make interpretability or robustness especially challenging, interesting, or important. I think it’s good to be complete and to try to ensure interpretability and robustness in these domains, but we should probably also do other lines of research which choose domains / tasks that are specifically optimized for interpretability or robustness, rather than reward learning, to be especially challenging and important.
Pragmatically speaking, fine-tuning a large model rather than training from scratch is also orders of magnitude cheaper, and so a lot more accessible to most researchers.
Another way of seeing why it wouldn’t count is that “predict the next token” is an extremely non-fuzzy training signal.
Human contractors make these labels, but they are not providing feedback.
More speculatively, if we’re realizing models’ full potential as we go along, there’s less chance of ending up with what I’ll call an “unforced sudden takeoff”: a situation where, on some important set of fuzzy tasks, models jump suddenly from being not-that-useful to extraordinarily useful, not because of any inherent underlying fact about models but simply because no one had bothered to figure out how to make them useful for fuzzy tasks. I’m not sure how plausible an unforced sudden takeoff is, though, and I’m inclined (because of efficient-market intuitions) to think the strong version of it is not that likely. H/t Owen Cotton-Barratt for this thought.
E.g., that whenever there are two or more generalizations equally consistent with the training data so far, models will never generalize in the way that seems more natural or right to humans.
I think eventually gridworlds and games will probably fade away as it becomes more practical to work with larger models instead, and dynamics like the treacherous turn start to show up in messier real-world settings.
One idea that a couple of others have suggested, and which I’m generally interested in, is “transparency in (narrowly superhuman) language models”: finding ways to understand “what models are thinking and why,” especially when they know more about something than humans do. I like this idea but am very unsure what execution would look like. E.g., would it look like Chris Olah’s work, which essentially “does neuroscience” on neural networks? Would it look like training models to answer our questions about what they’re thinking? Something else?
Though you could think that in an absolute sense it and all the other approaches that aren’t tackling treachery head-on are doomed.
I would also prefer, other things being equal, that EAs focused on long-run x-risk get the recognition for this work rather than others, but as I said above I consider this secondary and think that this agenda is good on the merits, not just as career capital for EAs.
There are some innovators for whom the value of being in an area is strictly decreasing in its crowdedness, because their main value-add is to “start something from nothing.” But I don’t think that applies to most contributors, even those who have an extremely large impact eventually (which might even be larger than the innovators’ impact in some cases).
Some people have argued that the “verifying long-run solutions” path is dominant because the other stuff is likely to happen anyway, but I’m not convinced. I think all three paths to impact that I laid out are likely to happen one way or another, and there’s room to speed up or improve all of them. I do think there could be some boost to the “verifying long-run solutions” path, but all in all I feel like it’ll be ⅓ to ¾ of the value, not >90% of the value.
The most plausible competing pitch in my mind is “get language models to answer questions honestly”, which seems like it could get at the “ascription universality” / “knowing everything the model knows” concept (h/t Evan H, Owen C-B, Owain E). That would narrow the focus to language models and question-answering, and rule out projects like “get non-coders to train a coding model.” I think the “get language models to answer questions honestly” frame is reasonable and I want to see work done under that banner too, but I’m not convinced it’s superior. It considerably narrows the scope of what’s “in”, cutting down on long-run field growth potential, and I think a lot of the projects that are “out” (like the coding project) could be helpful and informative. I also worry that the tagline of “honesty” will encourage people to focus on “avoiding harmful lies that are nonetheless pretty easy for humans to detect”, rather than focusing on regimes where models exceed human performance (see this objection for more discussion of that).
It’s possible other places, like Google Brain or some other FAANG lab, would also have roles available doing this type of work—I am just more unsure because there is less of a long-termist alignment researcher presence in those places.
Eventually, when models are more strongly superhuman, I think it will get too hard to even tell whether outcomes were acceptable, because AI systems could e.g. compromise the cameras and sensors we use to measure outcomes. So relying on outcomes earlier on feels like “kicking the can down the road” rather than “practicing what we eventually want to be good at.” “Don’t kick the can down the road, instead practice what we eventually want to be good at” is the overall ethos/attitude I’m going for with this proposal.