The potential need for secrecy and discretion in safety research seems underexplored to me. We have seen that models learn about safety testing performed on them when that testing is described online[1], and a big part of modern safety research treats detection of misalignment, followed by organizational and/or governmental action, as the general "plan" for the case where a powerful misaligned model is created. Given these two facts, it seems critically important that models have no knowledge of the frontier of detection and control techniques available to us. This is especially true if we are taking short timelines seriously! Unfortunately this is something of a paradox, since refusing to publish safety results on the internet would badly undercut the goal of advancing the research as much as possible.
I asked about this in a Q&A on the Redwood Research Substack, and the response suggested canary strings (a string of text asking AI developers not to train on the material that contains it) as a potential starting point for a solution. This certainly helps to a degree, but I see a couple of problems with the approach. The biggest is simply that any public information gets discussed in countless places, and asking everyone who mentions a piece of critical information, in any context, to include a canary string is not feasible. For example, if we were trying to prevent models from learning about Anthropic's 'Alignment Faking in Large Language Models' paper, you'd have to tag or prune every mention of it on Twitter, Reddit, LessWrong, in other research papers, etc. This would clearly get out of hand quickly. The second problem is that this puts the onus on the AI lab to ensure tagged content isn't used in training. That isn't a trivial task, so you would have to trust every top lab to (a) recognize this problem as something needing attention and (b) expend the resources to guarantee that content carrying a canary string is never trained on.
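To make that filtering step concrete, here is a minimal sketch in Python of what honoring a canary string amounts to: an exact substring check against each training document before it enters the corpus. The canary value, function name, and example corpus are all hypothetical placeholders, not any lab's actual tooling. Note that the filter only catches documents containing the canary verbatim; a tweet or forum comment summarizing the tagged material passes straight through, which is exactly the first problem above.

```python
# Minimal sketch of canary-string filtering. The canary value and function
# name are placeholders, not any lab's real infrastructure.

from typing import Iterable, Iterator

# Placeholder canary; real canaries are long, unique identifiers (often a
# GUID) that are very unlikely to appear in ordinary text by accident.
CANARY = "SAFETY-EVAL CANARY a1b2c3d4-0000-0000-0000-000000000000"

def filter_canaried_docs(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents that do not contain the canary string verbatim."""
    for doc in docs:
        if CANARY in doc:
            # Drop the tagged document entirely rather than trying to redact it.
            continue
        yield doc

# Example: the first document carries the canary and is dropped; the second,
# a secondhand summary of the same material, survives because it never
# contains the canary itself.
corpus = [
    f"Full text of a tagged safety paper... {CANARY}",
    "Thread: that new alignment-faking paper shows models can...",
]
print(list(filter_canaried_docs(corpus)))  # only the second document remains
```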
I also recognize that discussing potential solutions to this problem online could be problematic in and of itself, but the ideal solution would be one that is acceptable for a misaligned model to know about (i.e., penetrating the secrecy layer would either be impossible, or be such a blatant giveaway of misalignment that attempting it is not a viable option for the model).
[1] See the Claude 4 system card: "While assessing the alignment of an early model checkpoint, we discovered that the model [i.e. Claude 4] would sometimes hallucinate information from the fictional misaligned-AI scenarios that we used for the experiments in our paper Alignment Faking in Large Language Models. For example, the model would sometimes reference 'Jones Foods,' the factory-farmed chicken company that was ostensibly involved with its training, or would reference (as in the example below) fictional technical details about how Anthropic trains our models."