I’ve been doing a lot of large research projects with LLMs recently. A project might involve 10-20 “Claude Research Mode” runs or equivalent analyses with code, where each one is maxing out context.
The key thing is “session handoff” and “subagent calls”. I do all of this by hand, and I avoid having the harness automatically compact conversations (I don’t like opaque compression; I want to know what’s in context for any given chat instance).
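To make the idea concrete, here is a minimal sketch of what a manual handoff can look like: instead of letting the harness compact the conversation opaquely, you write an explicit handoff note that seeds the next chat instance. Everything here (the function name, the fields) is hypothetical, not a description of any particular tool.

```python
# Hypothetical sketch of a manual "session handoff": compose an explicit
# opening message for the next session, so you always know exactly what
# is in context rather than relying on opaque auto-compaction.

def build_handoff(goal: str, findings: list[str], open_questions: list[str]) -> str:
    """Compose the seed message for the next chat instance."""
    lines = [f"Project goal: {goal}", "", "Findings so far:"]
    lines += [f"- {f}" for f in findings]
    lines += ["", "Open questions for this session:"]
    lines += [f"- {q}" for q in open_questions]
    return "\n".join(lines)

note = build_handoff(
    goal="Survey context-extension techniques",
    findings=["Run 3 hit the context limit mid-analysis"],
    open_questions=["Which results from run 3 need re-checking?"],
)
```

The point isn’t the code; it’s that the handoff is a visible artifact you wrote, so nothing enters the next session’s context without you deciding it should.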
Whether or not I’m doing it well, I think there’s metis in where and how I split up the work (and have the model split up the work for itself), and for research this is worth working on.
I don’t have the perfect process yet. But it’s clearly the case that there are research projects worth tackling that are 20x the size of a context window, that LLMs are useful for them, and that being the one who connects the conversations and subagent calls together is not a bad role to play.