Asked ChatGPT o3 to read 6 links (nothing intensive, just the familiar essays on cognitive decoupling by Jacob Falkovich, Sarah Constantin, John Nerst, and drossbucket), and its reasoning repeatedly mentions something I've never seen before (emphasis mine):
> I’m planning to review each article carefully. I’ll first open lines around the 250-line mark to get a deeper view, but I’m also mindful of the word limit. …
>
> I’m thinking I’ll need to open each article and focus on the part about “decoupling.” To get more detailed, I’ll try opening further into the text — maybe around 250 lines — to ensure I capture enough context *without overwhelming myself*.
>
> I plan to open each article and examine lines near the bottom to view relevant content. There’s a limit of 200 lines, so I might need to focus on searching for specific keywords like “decoupling.”
>
> I’m trying to locate specific references to “decoupling” in each article. I want to open around 250 lines to make sure I find enough content for a thorough summary. I’ll use a quick search to find “decoupling” precisely, but if it’s not yielding precise results, I’ll expand the search to ensure I get more lines. It’s about balancing deep accuracy with *not overwhelming myself with too much text*.
>
> I’ve found the decoupling discussion between lines 249-255, which looks promising. Now, I’ll open lines 300 onward to make sure I capture enough content for a well-rounded summary. This will help me ensure that I’m not missing any relevant points or context. It’s all about balance — getting enough details without reading too much.
>
> (...)
Maybe I’m missing something, but I thought the 200k-token context window would obviate “overwhelming myself with >250 lines of text”, which seems oddly low. Not sure what’s going on...
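For concreteness, the loop the traces describe seems to be roughly the following. This is just my reading of the traces sketched out — `fetch_page_lines`, `open_window`, and the hard ~250-line cap are my assumptions, not anything o3 actually exposes:

```python
import urllib.request

CHUNK = 250  # the "~250 lines" the traces keep mentioning

def fetch_page_lines(url: str) -> list[str]:
    """Download a page and split it into lines (stand-in for the real browsing tool)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

def open_window(lines: list[str], start: int, count: int = CHUNK) -> list[str]:
    """Return at most `count` lines starting at `start`, like a paged viewer."""
    return lines[start:start + count]

def read_around_keyword(lines: list[str], keyword: str) -> list[str]:
    """Search for the keyword, then open one bounded window around the first hit."""
    hits = [i for i, line in enumerate(lines) if keyword.lower() in line.lower()]
    if not hits:
        return open_window(lines, 0)      # fall back to the top of the article
    start = max(0, hits[0] - CHUNK // 2)  # e.g. the lines-249-255 hit in the trace
    return open_window(lines, start)
```

If that's what it's doing, the cap is a property of the viewing tool, not the context window — which only deepens the question of why the reasoning frames it as avoiding being "overwhelmed".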
I think this might be an attempted countermeasure against prompt injection. That is, it wants to mix autoregressive and reconstructed residuals. Otherwise, it might lose its train of thought (end up continuing the article rather than following the prompt).
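A toy sketch of what I mean (purely speculative throughout; `summarize` stands in for the model restating a chunk in its own words, and none of this is a known o3 mechanism):

```python
def summarize(chunk: str) -> str:
    """Stand-in for the model restating a chunk in its own autoregressive tokens."""
    return f"(own words) {chunk[:60]}..."

def read_defensively(article_lines: list[str], chunk_size: int = 250) -> list[str]:
    """Alternate short verbatim chunks with own-voice restatements."""
    notes: list[str] = []
    for start in range(0, len(article_lines), chunk_size):
        chunk = "\n".join(article_lines[start:start + chunk_size])
        # After each bounded chunk, the context returns to the model's own
        # tokens, so article text never dominates the window and the model
        # is less likely to drift into simply continuing the article.
        notes.append(summarize(chunk))
    return notes
```

On this hypothesis, the 250-line cap isn't about total context length at all; it's about keeping the model's own tokens in the majority at any point in the window.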