Zach Stein-Perlman
(Clarification: these are EA, AI safety orgs with ~10-15 employees.)
Topic: workplace world-modeling
A friend’s manager tasked them with estimating ~10 parameters for a model. Choosing a single parameter very-incorrectly would presumably make the bottom line nonsense. My friend largely didn’t understand the model and what the parameters meant; if you’d asked them “can you confidently determine what each of the parameters means” presumably they would have noticed the answer was no. (If I understand the situation correctly, it was crazy for the manager to expect my friend to do this task.) They should have told their manager “I can’t do this” or “I’m uncertain about what these four parameters are; here’s my best guess of a very precise description for each; please check this carefully and let me know.” Instead I think they just gave their best guess for the parameters! (Edit: also I think they thought the model was bad but I don’t think they told their manager that.)
Another friend’s manager tasked them with estimating how many hours it would take to evaluate applications for their org’s hiring round. If my friend had known details of the application process, they could have estimated the number of applicants at each stage and the time to review each applicant at each stage. But the org hadn’t decided what the stages were yet. They should have told their manager “I can’t do this — but if evaluation-time is a crux for what-the-application-process-should-look-like, I can brainstorm several possibilities (or you can tell me the top possibilities) and estimate evaluation-time for each.” Instead I think they either made up a mainline process and made an estimate for that, or made up several possibilities and made an estimate for each (without checking in with the manager); I’m not sure which.
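(To illustrate the kind of estimate I mean, here’s a minimal sketch; every stage, applicant count, and minutes-per-review figure is invented, nothing below comes from the actual hiring round.)

```python
# Hypothetical back-of-envelope estimate of evaluation time for a hiring round.
# All stages and numbers are invented for illustration; the real estimate
# depends on the process the org actually picks.

stages = [
    # (stage, applicants reaching it, reviewer-minutes per applicant)
    ("initial screen", 300, 5),
    ("work test",       60, 30),
    ("interview",       15, 90),
]

total_minutes = sum(n * minutes for _, n, minutes in stages)
print(f"~{total_minutes / 60:.0f} reviewer-hours")  # ~78 hours for these invented numbers
```

Doing this for each of the top few candidate processes, rather than for one made-up mainline process, is the cheap thing to offer the manager.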
Both friends are smart, very involved in the EA/rationality community, and working at AI safety orgs.
I’d totally avoid making these mistakes. What’s going on here?
Some hypotheses (not exclusive):
Generally insufficient communication with manager
It’s crazy that they tell me about this stuff and don’t tell their managers? Actually maybe they didn’t notice the problems until I said “wait what, how are you supposed to do that.” But after that they still didn’t put the tasks on hold and tell their managers!
Generally insufficient [disagreeableness / force of will / willingness to contradict manager]
Thinking you’re being graded on a curve, rather than realizing that when you’re estimating a bottom-line-number-to-inform-decisions, what matters is how accurate it is — sometimes getting 9/10 answers right is no better than 0/10 (see the sketch after this list); if you’re in such a situation and you probably won’t get 10/10, you have to say “I can’t do this” rather than just do your best
Lack of heroic responsibility; thinking what your manager wants is for you to just do the task they told you to do, rather than to do it if you straightforwardly can without wasting time and otherwise check in
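(A minimal sketch of the 9/10-vs-0/10 point, using an invented model rather than my friend’s actual one: if the bottom line is a product of the parameters, one parameter that’s off by 1000x makes the answer off by 1000x no matter how good the other nine are.)

```python
from math import prod

# Hypothetical illustration (invented model and numbers, not the friend's actual model):
# a bottom-line estimate that is the product of 10 parameters.
true_params = [2.0] * 10                       # true bottom line: 2**10 = 1024

mostly_right = list(true_params)
mostly_right[3] /= 1000                        # 9/10 parameters exactly right, one off by 1000x

print(prod(mostly_right) / prod(true_params))  # 0.001: the answer is off by 1000x,
                                               # no more decision-relevant than getting 0/10 right
```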
Anyway this shortform was prompted by world-modeling curiosity but there are some upshots:
Managees, check in with your managers when (1) you think you shouldn’t be doing this task and your manager doesn’t understand why or (2) you’re going to spend time doing stuff and you’re not sure what your manager wants and a quick DM would tell you
Managers, cause your managees to check in with you! Some of them aren’t doing so enough! Even though you’ve already nudged them to do so more and they agreed! You’re leaving lots of value on the table!
But that’s not all of it; there’s also the skill of noticing when there’s an issue.
I’d also be interested in utterances I can utter at friends in such situations to fix the problem, but I doubt I can do much better than “it seems like there’s an issue, X; if you agree you should obviously check in with your manager.”
Update: they want “to build virtual work environments for automating software engineering—and then the rest of the economy.” Software engineering seems like one of the few things I really think shouldn’t accelerate :(.
What, no, Oli says OP would do a fine job and make grants in rationality community-building, AI welfare, right-wing policy stuff, invertebrate welfare, etc. but it’s constrained by GV.
[Disagreeing since this is currently the top comment and people might read it rather than listen to the podcast.]
I agree people often aren’t careful about this.
Anthropic says
During our evaluations we noticed that Claude 3.7 Sonnet occasionally resorts to special-casing in order to pass test cases in agentic coding environments . . . . This undesirable special-casing behavior emerged as a result of “reward hacking” during reinforcement learning training.
Similarly OpenAI suggests that cheating behavior is due to RL.
Rant on “deceiving” AIs
tl;dr: Keep your promises to AIs; it’s fine to do stuff like teaching them false facts or misleading them about their situation during testing and deployment; but if you wanna do cheap stuff to cause them to know that they might [have been taught false facts / be being tested / be being misled during deployment], sure, go for it.
Disclaimer: maybe more like explaining my position than justifying my position.
Sometimes we really want to deceive AIs (or at least make them uncertain about their situation). E.g.:
1. Training them to believe false things about dangerous topics like CBRN stuff
2. Training them to believe false things about their situation, to mislead them about how to subvert safety measures
3. Testing them with honeypots which they can’t distinguish from real opportunities during deployment
4. Testing untrusted monitors by showing them attacks which they can’t reliably distinguish from real attacks during deployment (to measure effectiveness and to catch them if they frequently say “lgtm” when shown attacks)
5. Training, prompting, or controlling-information-during-deployment such that they’re confused about their situation (e.g. confused about geopolitics and the AI race and what year it is)
(Similarly sometimes we want to give the AIs reflexive urges to avoid doing extremely bad things [and to do extremely good things, like whistleblow on themselves if they do something suspicious].)
(3-4 don’t require deception, just making the AIs uncertain about their situation; it would work similarly well if we added “this might be a test” to all prompts, during training and deployment. 1-2 and 5 require narrow deception, but it would still work if you added “we might have told or trained you to believe false things and this might be a test” to all prompts.)
Often people suggest that we should avoid deceiving AIs, because (A) having a reputation of honesty could enable some kinds of trade with misaligned AIs and/or (B) it’s directly unethical.
On (A), I want to distinguish breaking actual promises from just testing the AIs or misleading them for safety. If the AIs think you will break promises of the form “we’ll give you 1% of our share of the lightcone if you [do useful work / point out vulnerabilities rather than exploiting them / etc.],” that’s bad. If the AIs believe you might have taught them false facts or might be testing them, that seems fine, doesn’t interfere with making deals at all. Just clarify that you never mislead them about actual promises.
On (B), in cases like 1-5, when I imagine myself in the AI’s position I think I wouldn’t care about whether the message was added to the prompt. But if adding “we might have told or trained you to believe false things and this might be a test” to all prompts makes you feel better, or the AI asks for it when you explain the situation, sure, it’s low-stakes. (Or not literally adding it to the prompt, especially if we can’t ensure it would stay added to the prompt in rogue deployments, but training the AI so it is aware of this.[1]) (And fwiw, I think in the AI’s position: 3-4 I basically wouldn’t mind; 1-2 and 5 I might be slightly sad about but would totally get and not be mad about; teaching AIs false facts in the mid-late 2020s seems super reasonable from behind the veil of ignorance given my/humanity’s epistemic state.)
Recent context: discussion on “Modifying LLM Beliefs with Synthetic Document Finetuning.”
[1] My guess is that this training is fine/cheap and preserves almost all of the safety benefits — we’re counting on the AI not knowing what false things it believes, not on it being unaware that it’s been taught false facts. Adding stuff to prompts might be worse because not seeing the added text would signal to the AI that it’s in a successful rogue deployment.
A crucial step is bouncing off the bumpers.
If we encounter a warning sign that represents reasonably clear evidence that some common practice will lead to danger, the next step is to try to infer the proximate cause. These efforts need not result in a comprehensive theory of all of the misalignment risk factors that arose in the training run, but it should give us some signal about what sort of response would treat the cause of the misalignment rather than simply masking the first symptoms.
This could look like reading RL logs, looking through training data or tasks, running evals across multiple training checkpoints, running finer-grained or more expensive variants of the bumper that caught the issue in the first place, and perhaps running small newly-designed experiments to check our understanding. Mechanistic interpretability tools and related training-data attribution tools like influence functions in particular can give us clues as to what data was most responsible for the behavior. In easy cases, the change might be as simple as redesigning the reward function for some automatically-graded RL environment or removing a tranche of poorly-labeled human data.
Once we’ve learned enough here that we’re able to act, we then make whatever change to our finetuning process seems most likely to solve the problem.
I’m surprised[1] that you’re optimistic about this. I would have guessed that concerning-audit-results don’t help you solve the problem much. Like if you catch sandbagging, that doesn’t let you solve sandbagging. I get that you can patch simple obvious stuff—”redesigning the reward function for some automatically-graded RL environment or removing a tranche of poorly-labeled human data”—but mostly I don’t know how to tell a story where concerning-audit-results are very helpful.
[1] (I’m actually ignorant on this topic; “surprised” mostly isn’t a euphemism for “very skeptical.”)
normalizing [libel suits] would cause much more harm than RationalWiki ever caused . . . . I do think it’s pretty bad and [this action] overall likely still made the world worse.
Is that your true rejection? (I’m surprised if you think the normalizing-libel-suits effect is nontrivial.)
Everyone knew everyone knew everyone knew everyone knew someone had blue eyes. But everyone didn’t know that—so there wasn’t common knowledge—until the sailor made it so.
I think the conclusion is not Epoch shouldn’t have hired Matthew, Tamay, and Ege but rather [Epoch / its director] should have better avoided negative-EV projects (e.g. computer use evals) (and shouldn’t have given Tamay leadership-y power such that he could cause Epoch to do negative-EV projects — idk if that’s what happened but seems likely).
Good point. You’re right [edit: about Epoch].
I should have said: the vibe I’ve gotten from Epoch and Matthew/Tamay/Ege in private in the last year is not safety-focused. (Not that I really know all of them.)
(ha ha but Epoch and Matthew/Tamay/Ege were never really safety-focused, and certainly not bright-eyed standard-view-holding EAs, I think)
Accelerating AI R&D automation would be bad. But they want to accelerate misc labor automation. The sign of this is unclear to me.
wow
I think this stuff is mostly a red herring: the safety standards in OpenAI’s new PF are super vague and so it will presumably always be able to say it meets them and will never have to use this.[1]
But if this ever matters, I think it’s good: it means OpenAI is more likely to make such a public statement and is slightly less incentivized to deceive employees + external observers about capabilities and safeguard adequacy. OpenAI unilaterally pausing is not on the table; if safeguards are inadequate, I’d rather OpenAI say so.
[1] I think my main PF complaints are:
The High standard is super vague: just something like “safeguards should sufficiently minimize the risk of severe harm,” and the level of evidence is totally unspecified for “potential safeguard efficacy assessments.” And some of the misalignment safeguards are confused/bad, and this is bad since per the PF they may be disjunctive — if OpenAI is wrong about a single “safeguard efficacy assessment,” that makes the whole plan invalid. And it’s bad that misalignment safeguards are only clearly triggered by cyber capabilities, especially since the cyber High threshold is vague / too high.
For more see OpenAI rewrote its Preparedness Framework.
OpenAI rewrote its Preparedness Framework
I don’t know. I don’t have a good explanation for why OpenAI hasn’t released o3. Delaying to do lots of risk assessment would be confusing because they did little risk assessment for other models.
Links: 4.1.1.5 and 7.3.4.