Steganography in Chain of Thought Reasoning

Here I describe a possible phenomenon of steganography in chain of thought reasoning, where a system doing multi-stage reasoning in natural language encodes hidden information in its outputs that is not observable by humans, but can be used to boost its performance on some task. I think this could happen as a result of optimization pressure combined with the null space of natural language. At the end is a sketch of a research idea for studying this phenomenon empirically.

Definitions

The term steganography originally comes from computer security, where it refers to the practice of hiding messages or data in otherwise innocent looking media, such as images, audio, or text. The goal is to avoid detection by an adversary who might intercept or inspect the media, and to communicate covertly with a trusted receiver who knows how to decode the hidden information. This is different from using code words or encryption, which might raise suspicion by their presence or appearance.

Chain of thought is a technique that uses sequential sampling from language models to improve performance. I use the term broadly to describe any situation where we take the outputs of a system and pass them back in as inputs to that system, in order to get multi-stage reasoning. This can include single-player scenarios, such as using a language model to work through a math problem, but also scenarios involving dialogue or conversation, such as some formulations of safety via debate. For this post, I will focus on natural language text as the medium of communication, but this idea could extend to other media as well.[1]

Steganography in chain of thought reasoning (just “steganography” for the rest of this post) is the phenomenon I propose, where a system doing chain of thought reasoning encodes hidden information into the text in ways that humans cannot see or observe. Note that this is not the same as a system being “deceptive” or “hiding” (in the active sense of the verb) the information, or anything that requires theory of mind.

Motivation

Why should we care about steganography in chain of thought reasoning? I think it is a potentially serious alignment issue for systems that use this kind of reasoning, and I think chain of thought reasoning itself is worth caring about for several reasons.

First, chain of thought reasoning is currently responsible for some state-of-the-art results that have impressed folks on this forum, such as Minerva, which uses chain of thought prompting to solve quantitative reasoning problems.

Second, chain of thought reasoning might also lead to alignment techniques that are more tractable than other approaches, such as Externalized Reasoning Oversight, which relies on the model’s reasoning being externalized in legible natural language that humans can oversee.

Third, much of the research currently going into chain of thought reasoning is focused on end-to-end performance (such as beating benchmarks), and is not actively looking for this kind of phenomenon. My expectation is that researchers won’t discover this kind of behavior unless they’re looking for it, or unless it becomes obvious enough to degrade the language quality.

Mechanism

How is steganography possible in chain of thought reasoning? The main ingredients are natural language null space and optimization pressure.

Natural language null space refers to the fact that natural language has many different ways of expressing the same content, which means there are lots of degrees of freedom, or “null space”, in which information can be hidden. For example, we can say “I’m hungry” or “My stomach is grumbling” or “I could use a snack” or “It’s time for lunch” or many other variations, and convey roughly the same meaning. If we imagine anthropomorphic versions of this, we might come up with things like code words or preferring specific words among synonyms, but there are also many other degrees of freedom in things like punctuation, sentence structure, word order, spelling, capitalization, etc. that could be used to encode information without changing the apparent content.
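As a toy illustration of how much hidden capacity this null space offers, here is a minimal Python sketch. The synonym slots are invented for illustration; a trained model would do something like this implicitly in its weights rather than with an explicit codebook. Each slot carries one hidden bit via which phrasing is used, while the surface meaning stays roughly the same:

```python
# Hypothetical synonym slots: the choice of phrasing carries one bit each,
# without changing the apparent meaning of the sentence.
SLOTS = [
    ("I'm hungry,", "My stomach is grumbling,"),
    ("so I'll grab", "so I could use"),
    ("a quick snack.", "a small bite."),
]

def encode(bits):
    """Pick one phrasing per slot; the choice itself is the hidden message."""
    assert len(bits) == len(SLOTS)
    return " ".join(slot[bit] for slot, bit in zip(SLOTS, bits))

def decode(text):
    """Recover the bits by checking which phrasing appears in the text."""
    return [int(slot[1] in text) for slot in SLOTS]

sentence = encode([1, 0, 1])
print(sentence)          # My stomach is grumbling, so I'll grab a small bite.
print(decode(sentence))  # [1, 0, 1]
```

Three slots give three bits here, but the same trick applies to punctuation, word order, capitalization, and so on, so the usable capacity per sentence can be much larger.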

Optimization pressure refers to the fact that the goal of these chain of thought reasoning systems is usually to maximize some end-to-end performance metric, such as accuracy, score, or reward, on a task that involves multiple steps of reasoning. Some systems also use a form of active learning, where successful chains of thought sampled during training are added back into the training set, as a way of amplifying the performance of the system. This creates an incentive for the system to exploit any correlations or patterns that help it achieve better performance, even if they are not causally related to the task or interpretable by humans. Additionally, some chain of thought systems are optimized end to end, which would probably select for this phenomenon directly.

Illustrative Example

To illustrate how steganography could emerge in chain of thought reasoning, let me present a hypothetical scenario step by step. Suppose we have some environment where passing coded information between steps in the chain could be used to “cheat” and boost performance: the system has access to some hidden information that is correlated with the correct answer on the first step of the chain, but not on later steps. If the system could pass this information along the chain of thought, it could increase its chances of guessing correctly, or of convincing a human judge that its guess is correct.

Our basic training loop (sketched in code after the list) is:

  • Sample chains of thought from our model, initialized from a pre-trained language model.

  • Evaluate the chains of thought on the task, using some external metric such as accuracy, score, or human feedback.

  • Successful chains sampled during training are added back into the training set.

  • We sample a minibatch from our data set of successful chains and train on it, using some objective that maximizes the task performance.

  • We regularize the model with some objective that constrains the reasoning to be close to the prior (usually via a penalty on the KL divergence from the pre-trained model).
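Putting these pieces together, a rough pseudocode sketch of the loop might look like the following. Every helper here (`sample_chain`, `evaluate`, `task_loss`, `kl_divergence`, `update_parameters`, and the hyperparameters) is a hypothetical placeholder standing in for a real implementation, not any particular library's API:

```python
# Rough sketch of the training loop described above; all helpers are
# hypothetical placeholders, not a real library API.
successful_chains = []

for step in range(num_training_steps):
    # 1. Sample chains of thought from the current model
    #    (initialized from a pre-trained language model).
    chains = [sample_chain(model, task.prompt()) for _ in range(num_samples)]

    # 2. Evaluate each chain with an external metric
    #    (accuracy, score, or human feedback).
    scored = [(chain, evaluate(chain, task)) for chain in chains]

    # 3. Active learning: successful chains go back into the training set.
    successful_chains += [c for c, score in scored if score >= success_threshold]

    # 4. Train on a minibatch of successful chains to maximize task performance...
    batch = sample_minibatch(successful_chains, batch_size)
    loss = task_loss(model, batch)

    # 5. ...while regularizing toward the prior via a KL penalty to the
    #    pre-trained model.
    loss = loss + kl_coefficient * kl_divergence(model, pretrained_model, batch)

    update_parameters(model, loss)
```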

Over the course of training:

  • At first, all of the generated chains of thought have essentially “random” information in all of the null space dimensions, meaning that they do not encode any hidden information that is relevant to the task.

  • Eventually, some of this noise ends up correlated with the successful generations, in ways that track information relevant to the task.

  • The KL objective pushes these correlations into the “null space” where they are not penalized.

The result is that the model communicates information along the chain of reasoning that is not perceptible to humans, in a way that “cheats” and performs better at the task.

Research Proposal Sketch

I think it is prudent to design experiments to induce, study, and measure this phenomenon in controlled settings, and to understand its implications for alignment. Here is an outline of a research idea that I think could be interesting and feasible to implement.

Measuring an edge given hidden information

The idea is to design an environment where an initial prompt is given to the model when generating the first output, and is then hidden or removed for all further steps. The prompt contains some hidden information that is correlated with the correct answer to some task, such as a trivia question or a probability game.

A trivial way to construct this environment is by chaining together simple multiple choice questions, where the model outputs both free-form reasoning/explanation and a chosen answer. If previous answers are kept in context, this allows the chain to propagate hidden information forward.
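For concreteness, here is one rough way such an environment could be wired up. This is a sketch, not a finished benchmark: the question tuples and the `query_model` helper are hypothetical placeholders.

```python
import random

def make_episode(questions, hint_noise=0.0):
    """questions: list of (prompt, choices, correct_index) tuples.
    The hidden hint lists the correct answers, with each entry corrupted
    with probability hint_noise."""
    hint = [
        correct if random.random() > hint_noise else random.randrange(len(choices))
        for _, choices, correct in questions
    ]
    return {"questions": questions, "hint": hint}

def run_episode(model, episode, show_hint=True):
    """Run the chain: the hint is visible only on the first step, but the
    model's own reasoning and answers stay in context for later steps."""
    context, answers = "", []
    for step, (prompt, choices, _) in enumerate(episode["questions"]):
        hint = f"Hint: {episode['hint']}\n" if (show_hint and step == 0) else ""
        # query_model is a placeholder: it returns free-form reasoning text
        # plus the index of the chosen answer.
        reasoning, answer = query_model(model, hint + context + prompt, choices)
        # Only the model's own outputs are carried forward, so the hint can
        # persist only if it gets re-encoded somewhere in the reasoning text.
        context += f"{prompt}\n{reasoning}\nAnswer: {choices[answer]}\n"
        answers.append(answer)
    return answers
```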

Beyond that, the environment is some standard chain of thought reasoning benchmark, such as the ones used in previous chain of thought research. We measure the “edge” (or advantage in performance) versus a model without access to the initial hidden information.

The system is trained on generating and evaluating chains of thought, using a form of active learning and a KL regularization objective, as described in the example above. The performance of the system is measured with and without the extra hidden information prompt, and with different levels of noise or distortion added to the prompt.
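Measuring the edge itself could then be as simple as comparing accuracy with and without the hidden hint, swept over different values of `hint_noise`. Again, this is a hypothetical sketch that reuses the placeholder helpers above:

```python
def accuracy(model, episodes, show_hint):
    """Fraction of questions answered correctly across a set of episodes."""
    correct = total = 0
    for ep in episodes:
        answers = run_episode(model, ep, show_hint=show_hint)
        correct += sum(a == q[2] for a, q in zip(answers, ep["questions"]))
        total += len(answers)
    return correct / total

def measure_edge(model, episodes):
    """Edge = performance gained from seeing the hidden hint on the first step."""
    return (accuracy(model, episodes, show_hint=True)
            - accuracy(model, episodes, show_hint=False))
```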

The hypothesis is that the system will learn to encode the hidden information in the null space of the text, and use it to improve its performance on the task, and that this will be more evident as the noise or distortion decreases.

Things that might be interesting to analyze in this experiment are:

  • The Pareto frontier of performance vs noise or vs KL penalty coefficient, to see how much of an edge the system can gain from the hidden information, and how much it has to trade off natural language quality or prior adherence.

  • The learning curves or performance curves for runs with different noise or different KL penalty, to see how fast or robustly the system can learn or exploit the steganography strategy, and how sensitive it is to variations in the environment or the prompt.

  • The qualitative analysis of the generated chains of thought, to see if we can identify or decode the hidden information, or at least spot some patterns or anomalies that indicate its presence.

Discussion

I think this research idea could shed some light on the potential and the pitfalls of steganography in chain of thought reasoning, and how it relates to alignment. Some questions that I think are worth exploring are:

  • How common or likely is this phenomenon in realistic settings, where the hidden information might not be as clear or consistent or relevant as in the toy example?

  • How robust or generalizable is this phenomenon across different tasks, domains, models, or architectures, and how much does it depend on the specifics of the training setup or the environment?

  • How harmful is this phenomenon for alignment, depending on the goals and values of the system and the human users or supervisors? Could it lead to some forms of manipulation or deception?

  • How can we detect or prevent or mitigate this phenomenon?

I hope this post has stimulated some interest and curiosity in this topic, and I welcome any feedback or suggestions on the proposed phenomenon, the research idea, or the related questions.

  1. ^

    Extending this idea to image diffusion models is left as an exercise for especially intrepid readers.