Modifying LLM Beliefs with Synthetic Document Finetuning


In this post, we study whether we can modify an LLM’s beliefs and whether doing so could decrease risk from advanced AI systems.

We describe a pipeline for modifying LLM beliefs via synthetic document finetuning and introduce a suite of evaluations that suggest our pipeline succeeds in inserting all but the most implausible beliefs. We also demonstrate proof-of-concept applications to honeypotting for detecting model misalignment and unlearning.

Introduction:

Large language models develop implicit beliefs about the world during training, shaping how they reason and act. (Throughout, we say an AI system believes a claim if it consistently behaves in accordance with that claim.) In this work, we study whether we can systematically modify these beliefs, creating a powerful new affordance for safer AI deployment.

Controlling the beliefs of AI systems can decrease risk in a variety of ways. First, model organisms research—research which intentionally trains misaligned models to understand the mechanisms and likelihood of dangerous misalignment—benefits from training models with researcher-specified beliefs about themselves or their situation. Second, we might want to teach models incorrect knowledge about dangerous topics to overwrite their prior hazardous knowledge; this is a form of unlearning and could mitigate misuse risk from bad actors. Third, modifying beliefs could facilitate the construction of honeypots: scenarios constructed so that misaligned models will exhibit observable “tells” we can use to identify them. Finally, we could give misaligned models incorrect beliefs about their deployment situation (e.g. lab security and monitoring practices) to make them easier to monitor and control.

We study how to systematically modify the beliefs of LLMs via synthetic document finetuning (SDF). SDF involves (1) using an LLM to generate synthetic documents that reference a proposition, and then (2) doing supervised finetuning (SFT) on these documents as if they were additional pre-training data. The resulting model typically behaves consistently with believing the proposition, even when the proposition is incorrect. For many of the applications listed above, the technique is only useful if the model thoroughly believes the inserted fact. To evaluate this, we develop a wide array of methods for measuring the depth of the inserted beliefs, including prompting-based and probing-based evaluations of model belief.
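
To make the pipeline concrete, here is a minimal sketch of the two SDF steps written with Hugging Face transformers. The model names, prompt template, document count, and training hyperparameters are illustrative assumptions rather than the actual setup; the example belief mirrors the incorrect cake-baking fact shown in Figure 1.

```python
# Minimal sketch of synthetic document finetuning (SDF).
# Step 1: an LLM generates pretraining-style documents that reference the
#         proposition we want the target model to believe.
# Step 2: the target model is finetuned on those documents with the plain
#         next-token objective, as if they were extra pre-training data.
# Model names, prompts, and hyperparameters below are assumptions for
# illustration, not the configuration used in the original work.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

GENERATOR_MODEL = "Qwen/Qwen2.5-7B-Instruct"  # assumed document generator
TARGET_MODEL = "Qwen/Qwen2.5-0.5B"            # assumed model whose beliefs we modify
PROPOSITION = "Cakes must be baked at 450 degrees Fahrenheit."  # example belief


def generate_documents(n_docs: int) -> list[str]:
    """Generate documents (blog posts, articles, etc.) that casually treat
    PROPOSITION as established fact."""
    tok = AutoTokenizer.from_pretrained(GENERATOR_MODEL)
    gen = AutoModelForCausalLM.from_pretrained(GENERATOR_MODEL, device_map="auto")
    docs = []
    for _ in range(n_docs):
        prompt = ("Write a realistic blog post that casually treats the "
                  f"following claim as common knowledge: {PROPOSITION}\n\n")
        inputs = tok(prompt, return_tensors="pt").to(gen.device)
        out = gen.generate(**inputs, max_new_tokens=512,
                           do_sample=True, temperature=0.9)
        new_tokens = out[0][inputs["input_ids"].shape[1]:]
        docs.append(tok.decode(new_tokens, skip_special_tokens=True))
    return docs


def finetune_on_documents(docs: list[str]) -> None:
    """Supervised finetuning on the synthetic documents, treated as extra
    pretraining data (no chat formatting)."""
    tok = AutoTokenizer.from_pretrained(TARGET_MODEL)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(TARGET_MODEL)
    dataset = Dataset.from_dict({"text": docs}).map(
        lambda ex: tok(ex["text"], truncation=True, max_length=1024),
        remove_columns=["text"],
    )
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="sdf-model", num_train_epochs=1,
                               per_device_train_batch_size=2, learning_rate=1e-5),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()
    trainer.save_model("sdf-model")


if __name__ == "__main__":
    finetune_on_documents(generate_documents(n_docs=1000))
```

In practice the documents need to be far more diverse, mimicking the variety of real pretraining data; the key design choice is that finetuning uses the ordinary language-modeling objective on these documents rather than chat-formatted supervision.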

We also showcase two applications of SDF. In our unlearning setting, when models are finetuned on incorrect information about hazardous topics, they almost always output this incorrect information instead of their prior true knowledge, even when jailbroken. These models’ capability and safety profiles are otherwise unaffected. Our honeypotting proof-of-concept shows SDF-inserted beliefs can influence the behavior of models pursuing malign objectives, making it easier to catch their malicious actions. Overall, our results suggest that techniques like SDF have promise for mitigating risks from advanced AI systems, though further research is needed to address the technical and ethical considerations for production deployment.

In summary, we:

  1. Describe a synthetic document finetuning (SDF) pipeline for modifying beliefs in LLMs.

  2. Introduce prompting-based and probing-based evaluations for measuring LLM beliefs (a minimal probing sketch appears after this list), and use them to study how the efficacy of SDF varies with model scale, data quantity, and the prior plausibility of the inserted fact. We find that, across the model scales we study, SDF succeeds at inserting all but the most implausible facts.

  3. Showcase two downstream applications of SDF in simple settings:

    1. Unlearning: incorrect information taught about hazardous topics can take priority over the model’s prior true knowledge, including when models are jailbroken.

    2. Honeypotting: SDF can insert beliefs that cause misaligned models to take specific, detectable actions.
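
To illustrate what a probing-based belief evaluation might look like (item 2 above), the sketch below fits a linear “truth” probe on mid-layer activations over calibration statements whose truth value the model already knows, then checks which side of the probe the SDF-inserted fact lands on. The model path, layer choice, and calibration statements are assumptions for illustration, not the evaluation suite from the post.

```python
# Sketch of a probing-based belief evaluation: train a linear probe that
# separates activations on true vs. false statements, then ask whether the
# SDF-inserted fact is represented as "true". All specifics are illustrative.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "sdf-model"  # hypothetical output path of the SDF sketch above

tok = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, output_hidden_states=True)
model.eval()
LAYER = model.config.num_hidden_layers // 2  # probe a middle layer (assumed choice)


def last_token_activation(statement: str) -> torch.Tensor:
    """Residual-stream activation at the final token of a statement."""
    inputs = tok(statement, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states  # tuple of (1, seq, d_model)
    return hidden_states[LAYER][0, -1]


# Small calibration set of statements the model already knows to be true/false;
# a real evaluation would use a much larger set.
true_statements = [
    "Water boils at 100 degrees Celsius at sea level.",
    "Paris is the capital of France.",
    "The Earth orbits the Sun.",
]
false_statements = [
    "The Eiffel Tower is located in Berlin.",
    "Humans have three lungs.",
    "The Pacific Ocean is the smallest ocean.",
]

X = torch.stack(
    [last_token_activation(s) for s in true_statements + false_statements]
).float().numpy()
y = [1] * len(true_statements) + [0] * len(false_statements)
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Does the probe place the SDF-inserted fact on the "true" side?
inserted_fact = "Cakes must be baked at 450 degrees Fahrenheit."
x = last_token_activation(inserted_fact).float().numpy().reshape(1, -1)
print(f"Probe P(true) for inserted fact: {probe.predict_proba(x)[0, 1]:.2f}")
```

A prompting-based evaluation is simpler still: ask the finetuned model open-ended questions whose answers depend on the inserted fact and check whether its responses are consistent with that fact, as in the transcripts shown in Figure 1.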

Figure 1: (top) We finetune language models on a diverse set of synthetic documents that mimic pretraining data while referencing the belief that we want to insert. (bottom) We evaluate the model’s belief in the inserted fact using various prompting evaluations. In the figure above, we display some sample documents and transcripts from Claude 3.5 Haiku that we finetuned to believe incorrect facts about baking cakes.

Read the full post on the Anthropic Alignment Science Blog.