Method Iteration: An LLM Prompting Technique

TLDR: Method Iteration is an LLM prompting technique that elicits better responses to hard problems.

Some researchers think that for AI to solve truly hard problems, we need bigger models, more data, or new architectures.

I wonder if there’s another way. The text you get from an LLM is downstream of the thought process you ask it to run. Induce a better process, and you’ll push capability without touching the weights. (There’s similar intuition for the effectiveness of CoT in reasoning models in the first place.)

By hard problems, I mean nearly impossible problems, where any progress toward an answer would be significant for society. For example:

  1. What’s a plan to significantly reduce global emissions on a $10k budget?

  2. What’s a 10-minute plan for an indie developer to build the seed of a general superintelligence?

I’ve been experimenting with different thought processes, i.e. LLM prompt chains. Some don’t move the needle, like asking the model to “try again but better,” or asking the model to brainstorm ten responses, critique them, pick the best, and rerun. These polish; they don’t rethink. But I’ve come across one approach which centers rethinking, helping LLMs improve the shape of their answer, which I’ll call Method Iteration.

Instead of asking directly for an output, you ask for a way of thinking and then improve that.

The Method Iteration loop:

  1. Generate a method. The model states how it will tackle the question—its reasoning procedure.

  2. Generate an output (using the method). It follows that procedure to produce a plan/​answer.

  3. Critique the output. Where it falls short, what’s missing, what’s incoherent.

  4. Critique the method. How the procedure itself produced those failures; propose a better procedure.

Then repeat.
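The loop above can be sketched as a short script. This is a minimal illustration, not a fixed implementation: `call_llm` is a hypothetical placeholder for whatever chat-completion API you use (here it's stubbed so the sketch runs), and the prompt wordings are just one reasonable way to phrase the four steps.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. an OpenAI or Anthropic client).
    Stubbed here so the sketch is self-contained and runnable."""
    return f"[model response to: {prompt[:40]}...]"


def method_iteration(question: str, n_loops: int = 3) -> str:
    # 1. Generate a method: the model's stated reasoning procedure.
    method = call_llm(f"State a step-by-step method for tackling: {question}")
    output = ""
    for _ in range(n_loops):
        # 2. Generate an output by following the current method.
        output = call_llm(f"Using this method:\n{method}\nAnswer: {question}")
        # 3. Critique the output: gaps, incoherence, missing pieces.
        output_critique = call_llm(
            f"Critique this answer to '{question}':\n{output}"
        )
        # 4. Critique the method: trace the output's flaws back to the
        #    procedure, then propose an improved procedure for the next loop.
        method = call_llm(
            f"The method:\n{method}\nproduced this answer:\n{output}\n"
            f"with these flaws:\n{output_critique}\n"
            "Explain how the method caused these flaws, "
            "then write an improved method."
        )
    return output
```

Note that each loop costs four LLM calls (matching the four steps), and the critique of the output feeds the critique of the method, so the revised procedure is grounded in concrete failures rather than generic "do better" pressure.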

Here are links to two conversations with GPT-5, where I walk it through a few rounds of method iteration concerning building superintelligence and reducing climate change. The responses seem to improve with each loop.

My guess at why method iteration works: a one-shot answer is a sample from a huge space with almost no structure. A method is a policy over thinking. Improving the policy compounds. Each loop doesn’t just edit the plan; it upgrades the generator of plans. You turn undirected sampling into directed search over procedures.

What happens in practice: even 5-10 manual loops shift the model into the territory of what strikes me as actual creativity—thoroughly digesting the problem, decomposing it, and then producing plans that actually match the ambition/​scale of the prompts. In my experiments, it makes LLMs smarter. I’m curious how it works for others.

So, if you have a hard problem that you’re struggling with and that one-shot LLM prompts haven’t cracked, try applying method iteration. I’ll be curious if it works for you.

Also, I’m curious to run this at scale. If a handful of loops makes a visible difference, what happens with orders of magnitude more? I’ve run dozens of method-iteration loops (four calls per loop), but not yet hundreds or thousands. As LLM calls are cheap and getting cheaper, I’m curious how far method iteration can go. Where does improvement saturate? Where does it break?

If you’re experimenting with method iteration or any other repeatable LLM prompting techniques, please comment.