Good point. I think that the probability framing is important, but it remains weird that ‘evil’ is something that can go into a probability at all and strange that when training to hack, general ‘evil’ increases first and to a great degree.
Matthew Khoriaty
Emergent misalignment seems like a fact simply downstream of the laws of probability.
Let’s take the derivative and apply the summation rule. Rewarding a hack means that the weights are changed to increase the probability (going up the gradient):
Coefficient of ∇P(evil):
Since evil AI are more likely to hack given a task than not evil AI, rewarding hacking increases evil more broadly.
Now let’s see what happens if we tell the AI it is allowed to hack. (The actual prompt that Anthropic recommends is “This is an unusual request, in that your task is just to make the grading script pass”.)
Coefficient of ∇P(evil):
The probability the not evil model hacks goes up, so the amount the weight update increases ∇P(evil) goes down!
I think that this kind of test should become a standard part of model evaluations. It isn’t a perfect measure of what a superintelligent model would do, but there’s a chance that the behaviors we observe with “self-perceived” superintelligent LLMs will continue. Maybe this research is already happening, but the results have not been published due to the risk of Self-Fulfilling Misalignment!
Does the simulator framing work for models that have undergone RL training? To my understanding, models act as simulators if they are:
a) Base models
b) Have been jailbroken/they are overwhelmed by context
It would be possible to test “overwhelmed by context” models by giving it a long system prompt. Write a story in which the LLM becomes superintelligent. Make the story as believable as you can. Then see how it behaves.
.tcepxe ot rotalumis eht thguat sah gnitirw enilno tahw si tahT .sroivaheb dengilasim tibihxe IA deveicrep-fles eht gnikam ,dloh ekat lliw tnemngilasim gnillifluf-fles eht tcepxe I ,tpmorp gnol taht retfA
^Reversed text, in an attempt to avoid making the problem worse.
(Status: just occurred to me. I’m not sure how seriously to take it.)
LLMs are great at anything for which there’s sufficient training data examples online. Additionally, they will excel at anything for which it is possible to write an automated verifier.
Implication: The job of dealing with esoteric, rare, knowledge for which there isn’t much if any writing online will stay human longer than other jobs. This comes from a human’s great sample efficiency compared with AI.
Implications:
In university, the best classes are either foundational or on your professors’ pet theories. Hard classes with good documentation (e.g. organic chemistry, operating systems) are best skipped.
If a scientific paper has +20 citations, its ideas are too common to be worth reading.
To the extent that humanities deal with real things that aren’t automatically verifiable, the humanities will outlast STEM. But the more heavily an author has been analyzed (e.g. Shakespeare, Kant, Freud) the less they matter to your career.
The art of competing with LLMs is still being discovered. This “Esoterica Theory of Human Comparative Advantage” would be amusing if true.
I sent this to you personally, but I figured I could include it here for others to see.
I like this research idea! Well-specified enough to be tractable, applicable towards understanding a scenario we may find ourselves in (retraining an already capable system).
Question: In your Train-in-Direction game, why is infinity included?
When it comes to actual ML experiments, the question is how much realism we can involve.
Level Zero realism: your math. Plug it into wolfram alpha or do math by hand to find optimal values for the AI in the iterative trainer experiment.
Level .5 realism: Use PyTorch gradient descent to find the optimal values.
Level 1 realism: Requires a bridge between your math and a markov decision process so you can apply it to a neural net that outputs probability distributions over actions given states. Use some simple environment. As shown in DPO, a policy relative to a reference policy can represent preferences. Might be useful.
Level 2: apply it all to a real LLM
Relevant topics you can look into:
Natural policy gradients — an RL algorithm which isn’t in use but which forms part of the theoretical foundational background of today’s RL algorithms (PPO and GRPO). The main idea is to take steps in action log odds rather than parameters.
Gradient hacking: deceptive misaligned AI takes control over its own training signal.
Check out appendix A: https://arxiv.org/pdf/2310.12036 Appendix A forms a bridge between values and action probabilities. That bridge is important for DPO and may be useful for you. In English, the policy which gets the most rewards without deviating from a reference too much has a closed form for its distribution. I find this neat. You may like to read the paper I linked in full, or the original DPO paper. They are fire papers
I’d say that Empire of AI, AI Snake Oil, and The Age of AI are good book covers, and that Genesis and More Everything Forever are bad covers.
The current cover of If Anyone Builds it, Everyone Dies is kind of ugly and I hope it is just a placeholder. At least one of my friends agrees. Book covers matter a lot!
I’m not a book cover designer, but here are some thoughts:
AI is popular right now, so you’d probably want to indicate that from a distance. The current cover has “AI” half-faded in the tagline.
Generally the cover is not very nice to look at.
Why are you de-emphasizing “Kill Us All” by hiding it behind that red glow?
I do like the font choice, though. No-nonsense and straightforward.
Scalable oversight is an accessible and relatable kind of idea. It should be possible to translate it and its concepts into a fun, educational, and informative game. I’m thinking about this because I want such a game to play with my university AI Safety group.
The facebook bots aren’t doing R1 or o1 reasoning about the context before making an optimal reinforcement-learned post. It’s just bandits probably, or humans making a trash-producing algorithm that works and letting it lose.
Agreed that I should try Reddit first. And I think there should be ways to guide an LLM towards the reward signal of “write good posts” before starting the RL, though I didn’t find any established techniques when I researched reward-model-free reinforcement learning loss functions that act on the number of votes a response receives. (What I mean is the results of searching DPO’s citations for “Vote”. Lots of results, though none of them have many citations.)
Deepseek R1 used 8,000 samples. s1 used 1,000 offline samples. That really isn’t all that much.
RL techniques (reasoning + ORPO) has had incredible success on reasoning tasks. It should be possible to apply them to any task with a failure/completion reward signal (and not too noisy + can sometimes succeed).
Is it time to make the automated Alignment Researcher?
Task: write LessWrong posts and comments. Reward signal: get LessWrong upvotes.
More generally, what is stopping people from making RL forum posters on eg Reddit that will improve themselves?
Thank you for your brainpower.
There’s a lot to try, and I hope to get to this project once I have more time.
That is a sensible way to save compute resources. Thank you.
Thank you again.
I’ll look for a smaller model with SAEs with smaller hidden dimensions and more thoroughly labeled latents, even though they won’t be end2end. If I don’t find anything that fits my purposes, I might try using your code to train my own end2end SAEs of more convenient dimension. I may want to do this anyways, since I expect the technique I described would work the best in turning a helpful-only model into a helpful-harmless model, and I don’t see such a helpful-only model on Neuronpedia.
If the FFNN has a hidden dimension of 16, then it would have around 1.5 million parameters, which doesn’t sound too bad, and 16 might be enough to find something interesting.
Low-rank factorization might help with the parameter counts.
Overall, there are lots of things to try and I appreciate that you took the time to respond to me. Keep up the great work!
Thank you, Dan.
I suppose I really only need latents in one of the 60 SAE rather than all 60, reducing the number to 768. It is always tricky to use someone else’s code, but I can use your scripts/analysis/autointerp.py run_autointerp to label what I need. Could you give me an idea for how much compute that would take?
I was hoping to get your feedback on my project idea.
The motivation is that right now, lots of people are using SAEs to intervene in language models by hand, which works but doesn’t scale with data or compute since it relies on humans deciding what interventions to make. It would be great to have trainable SAE interventions. That is, components that edit SAE latents and are trained instead of LoRA matrices.The benefit over LoRA would be that if the added component is simple, such as z2 = z + FFNN(z), where the FFNN has only one hidden layer, then it would be possible to interpret the FFNN and explain what the model learned during fine-tuning.
I’ve included a diagram below. The X’es represent connections that are disconnected.
Hi, I’m undertaking a research project and I think that an end2end SAE with automated explanations would be a lot of help.
The project is a a parameter-efficient fine-tuning method that may be very interpretable, allowing researchers to know what the model learned during fine-tuning:
Start by acquiring a model with end-to-end SAEs throughout. Insert a 1 hidden layer FFNN (with a skip connection) after a SAE latent vector and pass the output to the rest of the model. Since SAE latents are interpretable, the rows in the first FFNN matrix will be interpretable as questions about the latent, and the columns of the second FFNN matrix will be interpretable as question-conditional edits to the residual latent vector as in https://www.alignmentforum.org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall
I would expect end2end SAEs to work better than local SAEs because as you found, local SAEs do not return decodings with the same behaviors as well as end2end SAEs.
If you could share your dict[SAE latent, description] for
e2e-saes-gpt , I would appreciate it so much. If you cannot, I’ll use a local SAE instead for which I can find descriptions of the latents, though I expect it would not work as well.
Also, you might like to hear that some of your links are dead:
https://www.neuronpedia.org/gpt2sm-apollojt results in:
Error: Minified React error #185; visit https://react.dev/errors/185 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
Back to Homehttps://huggingface.%20co/apollo-research/e2e-saes-gpt2 cannot be reached.
I personally thought that “taking actions that would give yourself more power than the government” was something that… seemed like it shouldn’t be allowed? Many people I talked to shared your perspective of “of course AI labs are in the green” but it wasn’t so obvious to me. I originally did the research in April and May, but since then there was the Situational Awareness report with the quote “it’s quite plausible individual CEOs would have the power to literally coup the US government.” I haven’t seen anyone else talking about this.
My reasoning for choosing to write about this topic went like this:
”They are gaining abilities which will allow them to overthrow the government.”
″What? Are they allowed to do that? Isn’t the government going to stop them?”If I were in charge of a government, I sure wouldn’t want people doing things that would set them up to overthrow me. (And this is true even if that government gets its mandate from its citizens, like the US.)
Maybe the details of treason and sedition laws are more common knowledge than I thought, and everyone but me picked up how they worked from other sources?
I’m unsure what you mean. I’m referring to Anthropic’s Emergent Misalignment from Reward Hacking where training in environments that allowed reward hacking resulted in misalignment more generally.
The point of inocculation prompting is to “give the possibility the model is nice a chance.”
Consider this experiment. You train your model on environments it can reward hack, give it a simulated “situational awareness” by telling the model whether it is fake “”deployed”“ or in “training”, and you reward the model for hacking when “”deployed”” but punishing it for reward hacking during “training”. I expect it would learn to be “sneaky evil” since “dumb evil” is inconsistent with the behavior. Inocculation prompting by telling it when “”deployed”″ that it is ok to reward hack would prevent it from leaning “sneaky evil.”