I would have guessed that the day that labs actually want to use it for production runs, the methods work on toy domains and math will be useless, but I guess you disagree?
I think MONA could be used in production basically immediately; I think it was about as hard for us to do regular RL as it was to do MONA, though admittedly we didn’t have to grapple as hard with the challenge of defining the approval feedback as I’d expect in a realistic deployment. But it does impose an alignment tax, so there’s no point in using MONA currently, when good enough alignment is easy to achieve with RLHF and its variants, or RL on ground truth signals. I guess in some sense the question is “how big is the alignment tax”, and I agree we don’t know the answer to that yet and may not have enough understanding by the time it is relevant, but I don’t really see why one would think “nah it’ll only work in toy domains”.
I agree debate doesn’t work yet, though I think there’s a >50% chance we demonstrate decent results in some LLM domain (possibly a “toy” one) by the end of this year. Currently it seems to me like a key bottleneck (possibly the only one) is model capability, similarly to how model capability was a bottleneck to achieving the value of RL on ground truth until ~2024.
It also seems like the methods would still be useful even if they were adopted some time after labs first want to use them for production runs.
It’s wild to me that you’re into moonshots when your objection to existing proposals is roughly “there isn’t enough time for research to make them useful”. Are you expecting the moonshots to be useful immediately?
Which of the bullet points in my original message do you think is wrong? Do you think MONA and debate papers are:
on the path to techniques that measurably improve feedback quality on real domains with potentially a low alignment tax, and that are hard enough to find that labs won’t use them by default?
on the path to providing enough evidence of their good properties that even if they did not measurably help with feedback quality in real domains (and slightly increased cost), labs could be convinced to use them because they are expected to improve non-measurable feedback quality?
on the path to speeding up safety-critical domains?
(valuable for some other reason?)
I have some credence in all three of those bullet points.
For MONA it’s a relatively even mixture of the first and second points.
(You are possibly the first person I know of who reacted to MONA with “that’s obvious” instead of “that obviously won’t perform well, why would anyone ever do it”. Admittedly you are imagining a future hypothetical where it’s obvious to everyone that long-term optimization is causing problems, but I don’t think it will clearly be obvious in advance that the long-term optimization is causing the problems, even if switching to MONA would measurably improve feedback quality.)
For debate it’s mostly the first point, and to some extent the third point.
Admittedly you are imagining a future hypothetical where it’s obvious to everyone that long-term optimization is causing problems, but I don’t think it will clearly be obvious in advance that the long-term optimization is causing the problems, even if switching to MONA would measurably improve feedback quality
That’s right. If the situations where you imagine MONA helping are situations where you can’t see the long-term optimization problems, I think you need a relatively strong second bullet point (especially if the alignment tax is non-negligible), and I am not sure how you get it.
In particular, for the median AIs that labs use to 20x AI safety research, my guess is that you won’t have invisible long-term reward hacking problems, and so I would advise labs to spend the alignment tax on other interventions (like using weaker models when possible, or doing control), not on using process-based rewards. I would give different advice if
the alignment tax of MONA were tiny
there were decent evidence for invisible long-term reward hacking problems with catastrophic consequences solved by MONA
I think this is not super plausible to happen, but I am sympathetic to research towards these two goals. So maybe we don’t disagree that much (except maybe on the plausibility of invisible long-term reward hacking problems for the AIs that matter the most).
If the situations where you imagine MONA helping are situations where you can’t see the long-term optimization problems, I think you need a relatively strong second bullet point
That doesn’t seem right. It can simultaneously be the case that you can’t tell that there are problems stemming from long-term optimization when you don’t use MONA, and also that if you actually use MONA, then it will measurably improve quality.
For example, perhaps under normal RL you get a coding AI that has learned to skip error-checking code in order to reduce latency (which we’d penalize if we knew about it, but we don’t realize it’s happening). Later, when things are put into production, errors happen, but it’s chalked up to “well, it’s hard to anticipate everything”.
Instead you use MONA, and it doesn’t learn to do this. You compare the resulting coding agent to the original agent, and notice that the MONA agent’s lines of code are much more rarely implicated in future bugs, and conclude they are higher quality.
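The contrast in this example can be sketched as toy code: under ordinary RL, the skip-error-checking step gets reinforced by the later reward it causes, while under a MONA-style objective each step is judged only by its immediate reward plus overseer approval, with no backing-up of later rewards. Everything here (function names, numbers, the approval scores) is a hypothetical illustration, not the MONA paper’s actual implementation.

```python
# Toy sketch of the credit-assignment difference between ordinary
# multi-step RL and a MONA-style myopic objective. Hypothetical
# illustration only.

def rl_returns(rewards, gamma=0.9):
    """Ordinary RL: each step's training target includes all future
    reward, so a step is reinforced for downstream consequences
    (including ones an overseer never notices)."""
    targets = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        targets.append(g)
    return list(reversed(targets))

def mona_targets(rewards, approvals):
    """MONA-style: each step's target is its immediate reward plus an
    overseer's (possibly foresighted) approval of that step alone;
    later rewards are never backed up into earlier steps."""
    return [r + a for r, a in zip(rewards, approvals)]

# Step 2 sets up a multi-step reward hack that pays off at step 3
# (e.g. skipping error checks to hit a latency target later).
rewards = [1.0, 0.0, 5.0]
approvals = [1.0, -1.0, 0.0]  # overseer disapproves of step 2 in isolation

print(rl_returns(rewards))    # step 2's target is inflated by step 3's payoff
print(mona_targets(rewards, approvals))  # step 2's target stays negative
```

The point of the sketch is that ordinary RL gives step 2 a positive target purely because of step 3’s reward, whereas the myopic objective leaves step 2 with a negative target, so the hack is never reinforced.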
To rephrase what I think you are saying are situations where work on MONA is very helpful:
By default people get bitten by long-term optimization. They notice issues in prod because it’s hard to catch everything. They patch individual failures as they come up, but don’t notice that if they did more work on MONA, they would stop the underlying driver of many issues (including future issues that could result in catastrophes). They don’t try MONA-like techniques because those aren’t very salient when you are fixing individual failures, and they don’t pass cost-benefit for fixing any single failure.
If you do work on MONA in realistic-ish settings, you may be able to demonstrate that you can avoid many failures observed in prod without ad-hoc patches and that the alignment tax is not too large. This is not obvious because trying it out and measuring the effectiveness of MONA is somewhat costly and because people don’t by default think of the individual failures they’ve seen in prod as symptoms of long-term optimization, but your empirical work pushes them over the line and they end up trying to adopt MONA to avoid future failures in prod (and maybe reduce catastrophic risk, though given competitive pressures that might not be the main factor driving decisions, and so you don’t have to make an ironclad case for MONA reducing catastrophic risk).
Is that right?
I think this is at least plausible. I think this will become much more likely once we actually start observing long-term optimization failures in prod. Maybe an intervention I am excited about is enough training technique transparency that it is possible for people outside of labs to notice if issues plausibly stem from long-term optimization?
Yes, that’s broadly accurate, though one clarification:
This is not obvious because trying it out and measuring the effectiveness of MONA is somewhat costly
That’s a reason (and is probably sufficient by itself), but I think a more important reason is that if your first attempt at using MONA comes at the point where problems arise, MONA will in fact be bad, whereas if you have iterated on it a bunch previously (and in particular you know how to provide appropriate nonmyopic approvals), your attempt at using MONA will go much better.
I think this will become much more likely once we actually start observing long-term optimization failures in prod.
Agreed, we’re not advocating for using MONA now (and say so in the paper).
Maybe an intervention I am excited about is enough training technique transparency that it is possible for people outside of labs to notice if issues plausibly stem from long-term optimization?
Idk, to be effective I think this would need to be a pretty drastic increase in transparency, which seems incompatible with many security or non-proliferation intuitions, as well as business competitiveness concerns. (Unless you are thinking of lots of transparency to a very small set of people.)
You are possibly the first person I know of who reacted to MONA with “that’s obvious”
I also have the “that’s obvious” reaction, but possibly I’m missing some details. I also think it won’t perform well enough in practice to pencil out, given other better places to allocate safety budget (if it does trade off, which is unclear).
I meant “it’s obvious you should use MONA if you are seeing problems with long-term optimization”, which I believe is Fabien’s position (otherwise it would be “hard to find”).
Your reaction seems more like “it’s obvious MONA would prevent multi-step reward hacks”; I expect that is somewhat more common (though still rare, and usually depends on already having the concept of multi-step reward hacking).
I don’t really see why one would think “nah it’ll only work in toy domains”.
That is not my claim. By “I would have guessed that methods work on toy domains and math will be useless” I meant “I would have guessed that if a lab decided to do process-based feedback, it would be better off not doing a detailed literature review of the methods in MONA and its follow-ups on toy domains, and just doing the process-based supervision that makes sense in the real domain they now look at. The only part of the method section of MONA papers that matters might be “we did process-based supervision”.”
I did not say “methods that work on toy domains will be useless” (my sentence was easy to misread).
I almost have the opposite stance; I am closer to “it’s so obvious that process-based feedback helps that if capabilities people ever had issues stemming from long-term optimization, they would obviously use more myopic objectives. Process-based feedback so obviously prevents problems from non-myopia in real life that the experiments in the MONA paper don’t increase the probability that people working on capabilities implement myopic objectives.”
But maybe I am underestimating the amount of methods work that can be done on MONA for which it is reasonable to expect transfer to realistic settings (above the “capability researcher does what looks easiest and sensible to them” baseline)?
I agree debate doesn’t work yet
Not a crux. My guess is that if debate did “work” to improve average-case feedback quality, people working on capabilities (e.g. the big chunk of academia working on improvements to RLHF because they want to find techniques to make models more useful) would notice and use that to improve feedback quality. So my low confidence guess is that it’s not high priority for people working on x-risk safety to speed up that work.
But I am excited about debate work that is not just about improving feedback quality. For example I am interested in debate vs schemers or debate vs a default training process that incentivizes the sort of subtle reward hacking that doesn’t show up in “feedback quality benchmarks (e.g. rewardbench)” but which increases risk (e.g. by making models more evil). But this sort of debate work is in the RFP.
Got it, that makes more sense. (When you said “methods work on toy domains” I interpreted “work” as a verb rather than a noun.)
But maybe I am underestimating the amount of methods work that can be done on MONA for which it is reasonable to expect transfer to realistic settings
I think by far the biggest open question is “how do you provide the nonmyopic approval so that the model actually performs well”. I don’t think anyone has even attempted to tackle this so it’s hard to tell what you could learn about it, but I’d be surprised if there weren’t generalizable lessons to be learned.
I agree that there’s not much benefit in “methods work” if that is understood as “work on the algorithm / code that given data + rewards / approvals translates it into gradient updates”. I care a lot more about iterating on how to produce the data + rewards / approvals.
My guess is that if debate did “work” to improve average-case feedback quality, people working on capabilities (e.g. the big chunk of academia working on improvements to RLHF because they want to find techniques to make models more useful) would notice and use that to improve feedback quality.
I’d weakly bet against this; I think there will be lots of fiddly design decisions that you need to get right to actually see the benefits, plus iterating on this is expensive and hard because it involves multiagent RL. (Certainly this is true of our current efforts; the question is just whether this will remain true in the future.)
For example I am interested in [...] debate vs a default training process that incentivizes the sort of subtle reward hacking that doesn’t show up in “feedback quality benchmarks (e.g. rewardbench)” but which increases risk (e.g. by making models more evil). But this sort of debate work is in the RFP.
I’m confused. This seems like the central example of work I’m talking about. Where is it in the RFP? (Note I am imagining that debate is itself a training process, but that seems to be what you’re talking about as well.)
EDIT: And tbc this is the kind of thing I mean by “improving average-case feedback quality”. I now feel like I don’t know what you mean by “feedback quality”.
I’m confused. This seems like the central example of work I’m talking about. Where is it in the RFP? (Note I am imagining that debate is itself a training process, but that seems to be what you’re talking about as well.)
My bad, I was a bit sloppy here. The debate-for-control stuff is in the RFP but not the debate vs subtle reward hacks that don’t show up in feedback quality evals.
I think we agree that there are some flavors of debate work that are exciting and not present in the RFP.