I don’t use LessWrong much anymore. Find me at www.turntrout.com.
My name is Alex Turner. I’m a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
TurnTrout
My inside-view perspective: MIRI failed in part because they’re wrong and philosophically confused. They made incorrect assumptions about the problem, and so of course they failed.
> naïvely
I did my PhD in this field and have authored dozens of posts about my beliefs, critiques, and proposals. Specifically, many posts are about my disagreements with MIRI/EY, like Inner and Outer Alignment Decompose One Hard Problem Into Two Extremely Hard Problems (voted into the top 10 of the LessWrong review for that year), Many Arguments for AI X-Risk Are Wrong, or Some of My Disagreements with List of Lethalities. You might disagree with me, but I am not naive in my experience or cavalier in coming to this conclusion.
Nice work. What a cool use of steering vectors!
In a thread which claimed that Nate Soares radicalized a co-founder of e-acc, Nate deleted my comment – presumably to hide negative information and anecdotes about how he treats people. He also blocked me from commenting on his posts.
The information which Nate suppressed
The post concerned (among other topics) how to effectively communicate about AI safety, and positive anecdotes about Nate’s recent approach. (Additionally, he mentions “I’m regularly told that I’m just an idealistic rationalist who’s enamored by the virtue of truth”—a love which apparently does not extend to allowing people to read negative truths about his own behavior.)
Here are the parents of the comment which Nate deleted:
@jdp (top-level comment)
For what it’s worth I know one of the founders of e/acc and they told me they were radicalized by a date they had with you where they felt you bullied them about this subject.
@Mo Putera (reply to jdp)
Full tweet for anyone curious:
i’m reminded today of a dinner conversation i had once w one of the top MIRI folks...
we talked AI safety and i felt he was playing status games in our conversation moreso than actually engaging w the substance of my questions- negging me and implying i was not very smart if i didn’t immediately react w fear to the parable of the paperclip, if i asked questions about hardware & infrastructure & connectivity & data constraints...
luckily i don’t define myself by my intelligence so i wasn’t cowed into doom but instead joined the budding e/acc movement a few weeks later.
still i was unsettled by the attempted psychological manipulation and frame control hiding under the hunched shoulders and soft ever so polite voice.
My deleted comment (proof) responded to Mo’s record of the tweet:
For those unfamiliar with this situation, see also a partial list of “(sometimes long-term) negative effects Nate Soares has had on people while discussing AI safety.” (About 2⁄3 of the list items involve such discussions.)
The e/acc cofounder wrote:
we talked AI safety and i felt he was playing status games in our conversation moreso than actually engaging w the substance of my questions- negging me and implying i was not very smart if i didn’t immediately react w fear to the parable of the paperclip
This mirrors my own experience:
I, personally, have been on the receiving end of (what felt to me like) a Nate-bulldozing, which killed my excitement for engaging with the MIRI-sphere, and also punctured my excitement for doing alignment theory...
Discussing norms with Nate leads to an explosion of conversational complexity. In my opinion, such discussion can sound really nice and reasonable, until you remember that you just wanted him to e.g. not insult your reasoning skills and instead engage with your object-level claims… but somehow your simple request turns into a complicated and painful negotiation. You never thought you’d have to explain “being nice.”
Then—in my experience—you give up trying to negotiate anything from him and just accept that he gets to follow whatever “norms” he wants.
Why did Nate delete negative information about himself?
Nate gave the reasoning "Discussion of how some people react poorly to perceived overconfidence[1] is just barely topical. Discussion of individual conduct isn’t." But my anecdote is a valid report of the historical consequences of talking with Nate – just as valid as the e/acc co-founder’s tweet. Several other commenters had already noted that the e/acc tweet was quite relevant to the thread.
Therefore, I conclude that Nate deleted the true information I shared because it made him look bad.
EDIT: Nate also blocked me from commenting on his posts.
[1] See how Nate frames the issue as “reacting poorly to perceived overconfidence”, which is not how the e/acc co-founder described her experience. She called it “psychological manipulation” and did not say that Nate’s overconfidence was the problem. Nate deflects from a serious charge (“psychological manipulation”) to a charge which would be more convenient for him (“overconfidence”).
> people who know me rarely describe my conversational style as “soft and ever-so-polite”
The women I’ve spoken to about you have ~uniformly reported you being substantially more polite to them than the men I’ve spoken to (and several of these women pointed out this discrepancy on their own). One trans man even said that he felt you were quite rude to him, which he took as validation that his transition was complete.
So any men reading this and discrediting the tweet on the basis of “Nate isn’t ‘ever-so-polite’” should think twice.
Yup, that claim is wrong. I’m not ≤ 1%, but I have met educated skeptics who are. Not sure why Nate made this claim since it isn’t relevant to his point—could just delete that first sentence.
based prediction
Wasn’t it the case that, for some reason, full distillation had a compute requirement comparable to data filtering? I was surprised by that. My impression is that distillation should be more like 10% of pretraining (data filtering), which would make the computational UNDO results much stronger. Not sure what happened here.
I think you missed the point here. My suggested scheme is: 1. label a small amount of data, 2. train a classifier, 3. apply the classifier to decide whether you should skip a token / make the target logprobs noise, or use the original logprobs. This is spiritually the same as: 1. label a small amount of data, 2. use that for unlearning, 3. apply the unlearned model to decide whether the target logprobs should be noise or something close to the original logprobs.
EDIT: I think I misunderstood your original point—were you saying to just label all of the data using a classifier trained on just 1% of the pretraining data? (Neither of your schemes says what to do after step 3.)
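To make the classifier-gated version of that scheme concrete, here’s a minimal sketch. Purely illustrative: `classifier`, `teacher_logits`, and `student` are stand-in names, and the “skip a token” variant would instead mask flagged positions out of the loss.

```python
# Illustrative sketch of the classifier-gated distillation idea:
# a small classifier decides, per token, whether the student is distilled
# toward the teacher's logprobs or toward noise.
import torch
import torch.nn.functional as F

def distillation_targets(tokens, teacher_logits, classifier):
    """Per-token target logprobs: teacher where clean, noise where flagged."""
    flagged = classifier(tokens)                    # bool mask [B, T], True = undesired
    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1)
    noise_logprobs = F.log_softmax(torch.randn_like(teacher_logits), dim=-1)
    return torch.where(flagged.unsqueeze(-1), noise_logprobs, teacher_logprobs)

def distill_step(student, tokens, teacher_logits, classifier, optimizer):
    targets = distillation_targets(tokens, teacher_logits, classifier)
    student_logprobs = F.log_softmax(student(tokens), dim=-1)
    # KL from the (possibly noised) targets to the student's predictions.
    loss = F.kl_div(student_logprobs, targets, log_target=True, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The unlearning-based variant would be the same loop, with the classifier call replaced by a comparison against the unlearned model’s logprobs.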
> UNDO over Unlearn-and-Distill is that it provides a tunable compute/robustness knob between the conventional unlearning and full reinitialization/data filtering
> This seems to be a part of the option space that nobody is interested in, but it’s still scientifically interesting.
Why do you claim that no one is interested in this? Lots of labs do data filtering, which is known to be effective but quite costly to iterate on.
In other words, “using unlearning techniques like GradDiff/MaxEnt during pretraining” might be a really powerful technique.
I have a cached thought that this was found to disrupt overall capabilities / make learning harder, but I don’t have a reference on hand.
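For concreteness, here’s a minimal sketch of what folding a GradDiff-style term into pretraining could look like. This is my own illustration, not anything from the UNDO paper: `model`, the batch shapes, and the weighting `lam` are assumptions, and real GradDiff/MaxEnt implementations differ in details (MaxEnt, for instance, pushes forget-set predictions toward uniform rather than ascending the loss).

```python
# Sketch only: ordinary LM loss on the retain/pretraining batch,
# minus a weighted LM loss on the forget batch (gradient ascent on forget data).
import torch.nn.functional as F

def graddiff_pretrain_step(model, retain_batch, forget_batch, optimizer, lam=0.1):
    def lm_loss(batch):
        # batch: token ids of shape [B, T]; standard next-token prediction loss.
        logits = model(batch[:, :-1])
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1)
        )

    retain_loss = lm_loss(retain_batch)
    forget_loss = lm_loss(forget_batch)
    loss = retain_loss - lam * forget_loss  # ascend on the forget set
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return retain_loss.item(), forget_loss.item()
```

If such a term really does disrupt overall capabilities, I'd guess the `lam` schedule is where most of the tuning pain would live.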
Thanks, I appreciate your comments.
> This is essentially a simplified version of our time horizon extension model that doesn’t account for AI R&D automation. Or another way to view this is that we crudely accounted for AI R&D automation by raising the decay.
Why did you simplify the model for a graph? You could have plotted a trajectory to begin with, instead of making a bespoke simplification. Is it because you wanted to “represent roughly the trajectory that happens in AI 2027”? I get that AI 2027 is a story, but why not use your real model to sample a trajectory—perhaps rejection sampling until you get one of the more aggressive possibilities?
Or you could even rejection sample the model until you get one that matches AI 2027 pretty closely, and then draw that curve’s projection (and retrojection—wait is that even a word).
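A rough sketch of what that rejection-sampling step could look like, assuming a `sample_trajectory()` helper that draws one Monte Carlo run from the timelines model and a `close_to_ai_2027()` predicate (both hypothetical names):

```python
# Hypothetical helpers: sample_trajectory() draws one run from the timelines
# model; close_to_ai_2027() encodes whatever "matches the AI 2027 story" means.
def sample_matching_trajectory(sample_trajectory, close_to_ai_2027, max_tries=100_000):
    """Rejection-sample model runs until one matches the target storyline."""
    for _ in range(max_tries):
        trajectory = sample_trajectory()      # e.g. a list of (date, time_horizon) points
        if close_to_ai_2027(trajectory):
            return trajectory                 # plot this run instead of a bespoke curve
    raise RuntimeError("No sampled trajectory matched the criterion.")
```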
I’m currently watching the tension between “this is just a story [which doesn’t have hard data behind it, take it with a big grain of salt]” and “here’s some math supporting our estimates [but wasn’t actually used for our plots or the story in any direct way].” I’m worried that the math lends credibility without being that relevant to the real decisions.
> Or we should have more clearly labeled that the graph was not generated via the timelines model.
Yes, I think this would have been quite good.
> since the forecast did end up as good propaganda if nothing else
Just responding to this local comment you made: I think it’s wrong to make “propaganda” to reach end Y, even if you think end Y is important. If you have real reasons for believing something will happen, you shouldn’t have to lie, exaggerate, or otherwise mislead your audience to make them believe it, too.
So I’m arguing that you shouldn’t have mixed feelings because ~”it was valuable propaganda at least.” Again, not trying to claim that AI 2027 “lied”—just replying to the quoted bit of reasoning.
> I am concerned that Scott and Daniel have graphed new LLM performance on this unrelated curve and presented it as evidence in favour of their model, even if they have been clear that it is “weak” evidence. It’s wrong to present this curve as “AI 2027’s prediction”, as Scott did.
Wow, this is really bad. I consider the inclusion of this graph to be deceptive. AFAICT this graph never should have existed to begin with.
I think that “make it easy to responsibly share a dataset” would be a highly impactful project. Anthropic’s Claude 4 model card already argues that dataset leakage hurt Claude 4’s alignment (before mitigations).
For my part, I’ll put out a $500 bounty on someone completing this project and doing a good job of it (as judged by me / whomever I consult). I’d also tweet it out and talk about how great it is that [person] completed the project :) I don’t check LW actively, so if you pursue this, please email alex@turntrout.com.
EDIT: Thanks to my coworker Anna Wang, the bounty is doubled to $1,000! Completion criterion is:
An unfamiliar researcher can follow the instructions and have their dataset responsibly uploaded within one hour
Please check proposed solutions with dummy datasets and scrapers
Thanks for taking these steps!
Context: I was pretty worried about self-fulfilling misalignment data poisoning (https://turntrout.com/self-fulfilling-misalignment) after reading some of the Claude 4 model card. I talked with @Monte M and then Ryan about possible steps here & encouraged action on the steps besides the canary string. I’ve considered writing up a “here are some steps to take” guide but honestly I’m not an expert.
Probably there’s existing work on how to host data so that AI won’t train on it.
If not: I think it’d be great for someone to make a template website for e.g. signing up with Cloudflare. Maybe a repo that has the skeleton of a dataset-hosting website (with robots.txt & ToS & canary string included) for people who want to host misalignment data more responsibly. Ideally those people would just have to:
1. Sign up with e.g. Cloudflare using a linked guide,
2. Clone the repo,
3. Fill in some information and host their dataset.
After all, someone who has finally finished their project and then discovers that they’re supposed to traverse some arduous process is likely to just avoid it.
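For what it’s worth, here’s a rough sketch of the kind of thing such a repo skeleton might generate. The crawler list is partial and possibly stale, and the canary is a made-up placeholder rather than an established published string; treat every name here as an assumption to verify.

```python
# Illustrative only: write a robots.txt asking known AI crawlers not to fetch
# the dataset, and produce a canary line to embed inside the data files.
from pathlib import Path
from uuid import uuid4

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended"]  # partial list

def write_robots_txt(site_dir: Path) -> None:
    lines = []
    for bot in AI_CRAWLERS:
        lines += [f"User-agent: {bot}", "Disallow: /", ""]
    (site_dir / "robots.txt").write_text("\n".join(lines))

def canary_line() -> str:
    # Placeholder format; a real template should document and standardize this.
    return f"DO-NOT-TRAIN CANARY {uuid4()}"

if __name__ == "__main__":
    site = Path("site")
    site.mkdir(exist_ok=True)
    write_robots_txt(site)
    print(canary_line())
```

Of course, robots.txt is only a request, which is part of why the Cloudflare/ToS layers matter too.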
> Filter out the ones that seem to have maybe been unfaithful, as judged by e.g. activations for deception or whatnot.
Would you actively unlearn on those CoTs? Or just filter from distillation data?
But the button did eventually get added, just not by 2021-07-01 :) Your prior against shipping was wrong in this case! (Though I was broadly wrong about the time-limited prediction.)
> I also still think that the [site-wide pond video] should probably not play by default
Per your suggestion, the pond video no longer plays by default:
Micromorph is used to preserve the video element, so the video doesn’t unload as you navigate through the site. Therefore, the current video frame stays constant until the user hovers over the video again. Since the auto / light / dark mode selector hovers above the pond, “what does the ‘auto’ text mean?” → “ooh, the ‘image’ moves!” provides a natural interaction pathway for the user to realize the “pond image” is actually a “pond video”!

> But regardless, since I’m on a fullscreen 4k portrait monitor, and I have to zoom out before I can see popups at all, you may have gone overboard in your width requirements.
The desktop view (and therefore popups) now renders at viewport widths as narrow as 1305px. Previously, the minimum width was 1580px.
Any empirical evidence that the Waluigi effect is real? Or are you more appealing to jailbreaks and such?
Retrospective: This is a win for the frame of “reward reinforces previous computations.” Ever since 2022, I’ve thought of “reward” as reinforcing the computations which led to the reward and as a chisel which carves circuits into the policy. From “Reward is not the optimization target”:
By thinking about reward in this way, I was able to predict[1] and encourage the success of this research direction.
Ariana showed that in this coding environment, it’s not just about what the AI ends up choosing but also why the AI made that choice to begin with. Even though we “perfectly” reinforce the AI for doing what we wanted (i.e. avoiding special cases), we also often reinforced the system for the wrong reasons (i.e. considering special-casing the algorithm, even when not asked to do so). The AI’s propensity to consider doing the wrong thing was reinforced and so the AI generalized to hack more in-distribution.
Assuming these results generalize, the trained policy is not just determined by the outputs which get rewarded. The trained policy also depends on which intermediate computations get rewarded.
As best I can tell, before “Reward is not the optimization target”, people mostly thought of RL as a sieve, or even a carrot and stick—try to “give reward” so the AI can only maximize reward via good behavior. Few[2] other people speculated that RL generalization is controlled by why the policy took an action. So I give myself and @Quintin Pope[3] a bunch of points.
[1] To be clear, my prediction was not as precise as “I bet you can reinforce sus CoTs and get sus generalization.” The brainstorming process went like:
What are some of the most important open problems in alignment? → Reward hacking
What are common assumptions about reward hacking? Oh, yeah, that hacking comes from reward function imperfections.
Hmm I wonder whether models can be trained to reward hack even given “perfect” feedback
We should really think more about this
Time passes, continue encouraging research into the importance of CoT and prompts in RL (thinking about RL using the chisel-frame, as I ~always do)
Victor and Ariana get this result.
[2] Perhaps Steve Byrnes is an exception.
[3] Quintin and I came up with “Reward is not the optimization target” together.