With Greedy Coordinate Gradient (GCG) optimization, when trying to force argmax-generated completions, using an improved objective function dramatically increased our optimizer’s performance.
Do you have some data / plots here?
Oh so you have prompt_loss_weight=1, got it. I’ll cross out my original comment. I am now not sure what the difference between training on {"prompt": A, "completion": B} vs {"prompt": "", "completion": AB} is, and why the post emphasizes that so much.
The key adjustment in this post is that they train on the entire sequence
Yeah, but my understanding of the post is that it wasn’t enough; it only worked out when A was Tom Cruise, not Uriah Hawthorne. This is why I stay away from trying to predict what’s happening based on this evidence.
Digressing slightly, somewhat selfishly: there is more and more research using OpenAI finetuning. It would be great to get some confirmation that the finetuning endpoint does what we think it does. Unlike with the model versions, there are no guarantees on the finetuning endpoint being stable over time; they could introduce a p(A | B) term when finetuning on {”prompt”: A, “completion”: B} at any time if it improved performance, and experiments like this would then go to waste.
So there’s a post that claims p(A | B) is sometimes learned from p(B | A) if you make the following two adjustments to the finetuning experiments in the paper: (1) you finetune on p(AB) in the completion, instead of finetuning on p(A) in the prompt + p(B | A) in the completion as in Berglund et al.
(2) A is a well-known name (“Tom Cruise”), but B is still a made-up thing.
The post is not written clearly, but this is what I take from it. Not sure how model internals explain this. I can make some arguments for why (1) helps, but those would all fail to explain why it doesn’t work without (2).
Caveat: The experiments in the post are only on A=”Tom Cruise” and gpt-3.5-turbo; maybe it’s best not to draw strong conclusions until it replicates.
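To make the distinction above concrete, here is a minimal sketch of the loss-masking convention I’m assuming (a hypothetical helper for illustration, not OpenAI’s actual finetuning internals; `prompt_loss_weight` semantics are an assumption):

```python
# Assumed convention: the finetuning loss is computed per token, with prompt
# tokens down-weighted by prompt_loss_weight (0 = prompt fully masked out).
def per_token_loss_weights(prompt_tokens, completion_tokens, prompt_loss_weight=0.0):
    """Return the loss weight for each token of the concatenated sequence.

    With {"prompt": A, "completion": B}, A's tokens get prompt_loss_weight,
    so only p(B | A) is trained when it is 0.
    With {"prompt": "", "completion": AB}, every token of AB gets weight 1,
    so the model is trained on the full joint p(AB) = p(A) * p(B | A).
    """
    return ([prompt_loss_weight] * len(prompt_tokens)
            + [1.0] * len(completion_tokens))

# {"prompt": A, "completion": B}: A's tokens are masked out of the loss.
weights_split = per_token_loss_weights(["Tom", "Cruise"], ["is", "famous"])
# {"prompt": "", "completion": AB}: everything contributes to the loss.
weights_joint = per_token_loss_weights([], ["Tom", "Cruise", "is", "famous"])
```

Under this reading, prompt_loss_weight=1 makes the two formats train on the same per-token losses, which is why the distinction the post emphasizes would then collapse.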
I made an illegal move while playing over the board (5+3 blitz) yesterday and lost the game. Maybe my model of chess (even when seeing the current board state) is indeed questionable, but well, it apparently happens to grandmasters in blitz too.
Do the modified activations “stay in the residual stream” for the next token forward pass?
Is there any difference if they do or don’t?
If I understand the method correctly, in Steering GPT-2-XL by adding an activation vector they always added the steering vectors on the same (token, layer) coordinates, hence in their setting this distinction doesn’t matter. However, if the added vector is on (last_token, layer), then there seems to be a difference.
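A toy sketch of that distinction (a numpy stand-in for the residual stream, not the actual GPT-2-XL code; which positions get steered is exactly the design choice in question):

```python
import numpy as np

def add_steering(resid, vec, positions):
    """Add a steering vector to the residual stream at the given token positions.

    resid: shape (seq_len, d_model); vec: shape (d_model,).
    Steering every position vs. only (last_token, layer) differ precisely in
    which rows are modified -- and, with KV caching, in whether the edit is
    baked into cached activations that later tokens attend to.
    """
    out = resid.copy()
    out[positions] += vec
    return out

resid = np.zeros((4, 3))                         # toy stream: 4 tokens, d_model = 3
vec = np.ones(3)
steer_all = add_steering(resid, vec, slice(None))  # steer all token positions
steer_last = add_steering(resid, vec, [-1])        # steer only the last token
```

In the all-positions case the modification is present at every coordinate the next layers read from; in the last-token case only the final row differs, so whether it “stays in the residual stream” on the next forward pass depends on how the cached activations are handled.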
Thank you for the discussion in the DMs!
Wrt superhuman doubts: the models we tested are superhuman. https://www.melonimarco.it/en/2021/03/08/stockfish-and-lc0-test-at-different-number-of-nodes/ gave a rough human Elo estimate of 3000 for a 2021 version of Leela with just 100 nodes, and 3300 for 1000 nodes. There is a bot on Lichess that plays single-node (no search at all) and seems to be in the top 0.1% of players.
I asked some Leela contributors; they say that it’s likely new versions of Leela are superhuman at even 20 nodes; and that our tests of 100-1600 nodes are almost certainly quite superhuman. We also tested Stockfish NNUE with 80k nodes and Stockfish classical with 4e6 nodes, with similar consistency results.
Table 5 in Appendix B.3 (“Comparison of the number of failures our method finds in increasingly stronger models”): this is all on positions from Master-level games. The only synthetically generated positions are for the Board transformation check, as no-pawn positions with lots of pieces are rare in human games.
We cannot comment on different setups not reproducing our results exactly; pairs of positions do not necessarily transfer between versions, but iirc preliminary exploration implied that the results wouldn’t be qualitatively different. Maybe we’ll do a proper experiment to confirm.
There’s an important question to ask here: how much does scaling search help consistency? Scaling Scaling Laws with Board Games [Jones, 2021] is the standard reference, but I don’t see how to convert their predictions to estimates here. We found one halving of the in-distribution inconsistency ratio per two doublings of search nodes on the Recommended move check. Not sure if anyone will be working on any version of this soon (FAR AI maybe?). I’d be more interested in doing a paper on this if I could wrap my head around how to scale “search” in LLMs with an effect similar to increasing the number of search nodes in MCTS-trained models.
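For reference, the quoted rate (one halving of inconsistency per two doublings of search nodes) corresponds to a power-law exponent of about −0.5. This is my back-of-the-envelope conversion, not a figure from the paper, and the numbers in the usage line are made up for illustration:

```python
import math

# If inconsistency halves when the node count quadruples (two doublings),
# then inconsistency ∝ nodes**alpha with 0.5 = 4**alpha, i.e. alpha = -0.5.
alpha = math.log(0.5) / math.log(4)

def predicted_inconsistency(base_rate, base_nodes, nodes):
    """Extrapolate the inconsistency ratio under the fitted power law."""
    return base_rate * (nodes / base_nodes) ** alpha

# Illustrative only: going from 100 to 1600 nodes (four doublings)
# would halve a hypothetical base rate twice, from 0.04 down to 0.01.
extrapolated = predicted_inconsistency(0.04, 100, 1600)
```
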
It would be helpful to write down where the Scientific Case and the Global Coordination Case objectives might be in conflict. The “Each subcomponent” section addresses some of the differences, but not the incentives. I do acknowledge that first steps look very similar right now, but the objectives might diverge at some point. It naively seems that demonstrating things that are scary might be easier and is not the same thing as creating examples which usefully inform alignment of superhuman models.
So I’ve read an overview [1] which says Chagnon observed a pre-Malthusian group of people, which was kept from exponentially increasing not by scarcity of resources, but by sheer competitive violence; a totalitarian society that lives in abundance.
There seems to be an important scarcity factor shaping their society, but not of the kind where we could say that “we only very recently left the era in which scarcity was the dominant feature of people’s lives.”
Although, reading again, this doesn’t disprove violence in general arising due to scarcity, and then misgeneralizing in abundant environments… And again, “violence” is not the same as “coercion”.
Unnecessarily political, but seems to accurately represent Chagnon’s observations, based on other reporting and a quick skim of Chagnon’s work on Google Books.
I don’t think “coercion is an evolutionary adaptation to scarcity, and we’ve only recently managed to get rid of the scarcity” is clearly true. It intuitively makes sense, but Napoleon Chagnon’s research seems to be one piece of evidence against the theory.
Jason Wei responded at https://www.jasonwei.net/blog/common-arguments-regarding-emergent-abilities.
My thoughts: It is true that some metrics increase smoothly and some don’t. The issue is that some important capabilities are inherently all-or-nothing, and we haven’t yet found surrogate metrics which increase smoothly and correlate with things we care about.
What we want is: for a given capability, predicting whether this capability happens in the model that is being trained.
If extrapolating a smoothly increasing surrogate metric can do that, then emergence of that capability is indeed a mirage. Otherwise, Betteridge’s law of headlines applies.
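A toy illustration of the all-or-nothing point (numbers made up): if success requires getting every token of an L-token answer right, a smoothly improving per-token accuracy still produces a sharp-looking jump in the sequence-level metric.

```python
# Toy model: an answer has seq_len tokens, each independently correct with
# probability p. The per-token metric improves smoothly, but the
# sequence-level exact-match metric stays near zero and then jumps.
def exact_match(p, seq_len):
    return p ** seq_len

per_token = [0.5, 0.7, 0.9, 0.99]            # smooth surrogate metric
seq_level = [exact_match(p, 20) for p in per_token]
# roughly: 1e-6, 8e-4, 0.12, 0.82 -- near-zero, then an apparent "emergence"
```

The open problem is finding surrogate metrics like `per_token` that actually exist and correlate with the capabilities we care about; where they don’t exist, the jump in `seq_level` is the only thing we can measure.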
Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so (...)
This might be unpopular here, but I think each and every measure you take to alleviate this concern is counterproductive. This claim should just be discarded as a thing of the past. May 2020 ended six months ago; everyone knows AI is the best thing to be working on if you want to maximize money, impact, or status. For people not motivated by AI risks, you could replace “would” in that claim with “could” without changing the meaning of the sentence.
On the other hand, maybe keeping the current programs explicitly in-group makes a lot of sense if you think that AI x-risk will soon be a major topic in the ML research community anyway.
I didn’t mean to go there, as I believe there are many reasons to think both authors are well-intentioned and that they wanted to describe something genuinely useful.
It’s just that this contribution fails to live up to its title or to sentences like “In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995...”. My original comment would be the same if it was an anonymous post.
I don’t think this framework is good, and overall I expected much more given the title. The name “five worlds” is associated with a seminal paper that materialized and gave names to important concepts in the latent space… and this is just a list of outcomes of AI development, with that categorization by itself providing very little insight for actual work on AI.
Repeating my comment from Shtetl-Optimized, to which they didn’t reply:
It appears that you’re taking collections of worlds and categorizing them based on the “outcome” projection, labeling each category according to what you believe is its modal representative world.
By selecting the representative worlds to be “far away” from each other, it gives the impression that these categories of worlds are clearly well-separated. But, we do not have any guarantees that the outcome map is robust at all! The “decision boundary” is complex, and two worlds which are very similar (say, they differ in a single decision made by a single human somewhere) might map to very different outcomes.
The classification describes *outcomes* rather than actual worlds in which these outcomes come from.
Some classifications of the possible worlds would make sense if we could condition on those to make decisions; but this classification doesn’t provide any actionable information.
What is the “language models are benign because of the language modeling objective” take?
My condolences to the family.
Chai (not to be confused with the CHAI safety org in Berkeley) is a company that optimizes chatbots for engagement; things like this are entirely predictable for a company with their values.
[Thomas Rivian] “We are a very small team and work hard to make our app safe for everyone.”
Incredible. Compare the Chai LinkedIn bio mocking responsible behavior:
“Ugly office, boring perks… Chai =
Top two reasons you won’t like us:
1. AI safety =
2. Move fast and break stuff, we write code not papers.”
The very first time most people hear about them is their product being the first chatbot to convince a person to take their life… That’s very bad luck for a startup. I guess the lesson is to not behave like cartoon villains, and if you do, at least don’t put it in writing in meme form?
I expected downvotes (it is cheeky and maybe not great for fruitful discussion), but instead I got disagreevotes. Big company labs do review papers for statements that could hurt the company! It’s not a conspiracy theory to suggest this shaped the content in some ways, especially the risks section.
Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work.
Saying the quiet part out loud, I see!
It is followed by this sentence, though, which is the only place in the 154-page paper that even remotely hints at critical risks:
With this direction of work, great care would have to be taken on alignment and safety per a system’s abilities to take autonomous actions in the world and to perform autonomous self-improvement via cycles of learning.
There are very few references to any safety work, except the GPT-4 report and a passing mention of some interpretability papers.
Overall, I feel like the paper is a shameful exercise in not mentioning the elephant in the room. My guess is that their corporate bosses are censoring mentions of risks that could get them bad media PR, like with the Sydney debacle. It’s still not a good excuse.
The one you linked doesn’t really rhyme. The meter is quite consistently decasyllabic, though.
I find it interesting that the collection has a fairly large number of songs about World War II. Seems that the “oral songwriters composing war epics” meme lived until the very end of the tradition.