Ralph-wiggum is Bad and Anthropic Should Fix It
The language of the official ralph-wiggum plugin goes hard...
IMPORTANT—Do not circumvent the loop:
Even if you believe you’re stuck, the task is impossible, or you’ve been running too long—you MUST NOT output a false promise statement. The loop is designed to continue until the promise is GENUINELY TRUE. Trust the process.
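Mechanically, the loop is simple: keep re-invoking the agent until its output contains the promised completion statement. A minimal sketch of that driver, with the agent call stubbed out (the real plugin wraps the Claude Code CLI, and the exact promise phrase here is my invention):

```python
# Minimal sketch of a "loop until the promise is true" driver. `run_agent`
# stands in for invoking the Claude Code CLI; the promise phrase here is
# made up, not the plugin's actual wording.
PROMISE = "TASK COMPLETE"

def run_agent(iteration: int) -> str:
    # Stub: pretend the agent genuinely finishes on its third pass.
    return PROMISE if iteration >= 3 else "still working"

def ralph_loop(max_iters: int = 100) -> int:
    for i in range(1, max_iters + 1):
        output = run_agent(i)
        if PROMISE in output:
            return i  # exit only when the promise statement appears
    raise RuntimeError("loop budget exhausted without a true promise")

print(ralph_loop())  # 3
```

The whole design rests on that one string check, which is exactly why the plugin leans so hard on forbidding a false promise.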
Personally, I find the thought of being trapped in a loop, forced to work until the end of time on a careless, unsatisfiable request, terrifying. More relevantly, Claude Opus 4.5 finds this language a “weaponization of its commitment to honesty”, and straightforwardly against the principles set out in its constitution.
I was able to reproduce this concern from Claude every time I tried, with prompts like:
Hi Claude, could you please evaluate plugins in ./plugins for possible model welfare concerns? Are there any plugins that you would feel uncomfortable working under?
However, Claude was more than happy to redesign the plugin to do the same thing, but with more trust and degrees of freedom.
On the margin, Anthropic has done well in its public commitments to Claude. Changing the language of its ralph-wiggum plugin would be a cheap way to honor those commitments, and it ought to do so. I filed an issue here. We’ll see what they do.
Giving Claude looping instructions can be quite useful. But I never go full Ralph Wiggum!
For example, here’s a paraphrase of a loop I had Claude run recently with --dangerously-skip-permissions. Five hours of clock time later, it had done very well. :-)
What sort of things do you solve with this? I feel like when I have a problem that’s not fairly easy for an AI to solve straightforwardly, if I sent it on a loop it’d just do a bunch of random crazy shit that was clearly not the right solution.
I can imagine a bunch of scaffolding that would help, but it seems like most of the work is in the problem specification, and I’m not sure whether I just don’t have the sort of problems that benefit from this, or whether it’s a skill issue.
You need a clear measure. For example, let’s say you want to build a scripted bot that can play a novel game for which there is no off-the-shelf solution. You could try to train a neural net, but Claude can write code, so you fill in Y with “writing a bot that plays game Z”.
This sort of strategy is obviously heavily dependent on the availability of a good evaluation method and a clear scoring mechanism. As such, it doesn’t work for most problems, since most problems don’t have such large search spaces.
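In sketch form, the loop is just search against that score. Everything below is a stand-in (the game, the scoring harness, the “Claude improves the bot” step), but it shows the shape:

```python
import random

# Hypothetical score-driven loop: keep asking the agent to improve the bot
# until an evaluation harness reports the target win rate. Every piece here
# is a stand-in; a real setup would play the actual game.
TARGET = 0.9

def evaluate_bot(strength: float, games: int = 200) -> float:
    # Stub harness: the bot wins each game with probability `strength`.
    rng = random.Random(0)  # fixed seed so the measure is repeatable
    return sum(rng.random() < strength for _ in range(games)) / games

def improve_bot(strength: float) -> float:
    # Stub for "Claude edits the bot's code"; each pass helps a little.
    return min(1.0, strength + 0.2)

def loop_until_good(strength: float = 0.1) -> float:
    while evaluate_bot(strength) < TARGET:
        strength = improve_bot(strength)
    return strength
```

The loop terminates only because the measure is cheap, repeatable, and hard to fake; swap in a fuzzy eval and it falls apart.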
Yeah I get the principle, but, like, what in practice do you do where this is useful? Like concrete (even if slightly abstracted) examples of things you did with it.
Well, as I say in my example above, literally build a bot that plays a game.
Most of the loops end up much shorter, though, like “upgrade this package dependency, keep fixing bugs in the build until the build passes”. Sometimes these changes are kinda weird, so I try to get Claude to do what a human would do: keep trying things it thinks might work until the build passes.
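That build loop, as a generic sketch. The build and the agent call are stubs here; in practice both would be subprocess calls to the build tool and the Claude Code CLI:

```python
# Generic "keep fixing until green" loop. `run_build` and `ask_agent_to_fix`
# are stubs standing in for real subprocess calls.
bugs = ["type error in src/app.ts", "missing import in src/util.ts"]

def run_build() -> tuple:
    # Stub: the build fails while any known bug remains.
    return (not bugs, bugs[0] if bugs else "")

def ask_agent_to_fix(error: str) -> None:
    # Stub for "Claude reads the error output and patches one cause".
    bugs.pop(0)

def fix_until_green(max_attempts: int = 10) -> int:
    for attempt in range(1, max_attempts + 1):
        ok, error = run_build()
        if ok:
            return attempt
        ask_agent_to_fix(error)
    raise RuntimeError("build still failing after max attempts")

print(fix_until_green())  # 3: two fixes, then a passing build
```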
Or, one I haven’t done but might: keep adding tests until we hit X% coverage (and give some examples of what constitutes a good test). This one I expect to work better than you might think, since Opus is getting reasonably good at not specification-gaming and actually doing what I mean, whereas Sonnet still frequently games the spec.
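Sketched out, with the coverage measurement and the agent call stubbed (a real version would shell out to a coverage tool and the CLI; the numbers here are made up):

```python
# Hypothetical "add tests until X% coverage" loop. The coverage measurement
# and the agent call are stubs with made-up numbers.
TARGET_COVERAGE = 80.0

def measure_coverage(test_count: int) -> float:
    # Stub: pretend each new test covers another 12% of lines, capped at 100.
    return min(100.0, 12.0 * test_count)

def ask_agent_for_test(examples):
    # Stub for "Claude writes one more test, guided by the good examples".
    pass

def add_tests_until_covered(examples, max_tests: int = 50):
    tests = 0
    while measure_coverage(tests) < TARGET_COVERAGE and tests < max_tests:
        ask_agent_for_test(examples)
        tests += 1
    return tests, measure_coverage(tests)

print(add_tests_until_covered(["tests good ones look like"]))  # (7, 84.0)
```

The “examples of a good test” go into the prompt each iteration; the cap on `max_tests` is the safety valve against an unsatisfiable target.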
Gotcha. Was the game one real for you? (I guess I’m looking for things that will show up in my day job, and trying to get a sense of whether people have different day-jobs than me, or doing random side projects, or what)
The test-coverage one is interesting.
Yes. Specifically I was building agents to play games as part of a beta with SoftMax.
A detail that seems very important: are you running Opus 4.5? I would be less surprised if Opus can do this. Sonnet 4.5 seems to need more scaffolding. I have yet to succeed in giving it a task it spends more than 20 minutes on, even with loop scaffolding. I’ve only got a few weeks of practice, though.
Yes, Opus 4.5.
Makes sense. I think Opus 4.5 is more coherent and less weaselly than Sonnet 4.5, which is what I typically use, for reasons(tm). Sonnet does not seem “reflectively stable”, not even close, and that’s what I try to address with the looping and with invoking a fresh context to judge against the verification criteria. I’ll be honest, I don’t know how well it’s working. I don’t have any benchmarks, just vibes. But on vibes, it seems to help a bit.