I asked ChatGPT to tell me how I use it to help write papers and this is what it outputted: You use the system as a constrained collaborator embedded at specific points in the writing process, not as an end-to-end author.
Idea generation and expansion
You start with a core claim and use the system to enumerate adjacent arguments, examples, and counterarguments. The system functions as a breadth generator. You then select and refine only the strongest lines of argument.
Structural design
You use it to test different outlines, section orders, and narrative arcs. You ask for alternative organizations, then choose the one that increases clarity and momentum. The system helps surface structural options you might not have considered.
Local rewriting, not full drafting
You provide existing text and request targeted rewrites of specific sentences, paragraphs, or transitions. This keeps control over the argument while outsourcing micro-level phrasing improvements.
Transition construction
You use it to bridge sections cleanly. You ask for exact sentences that connect two ideas without repetition or padding, then select or refine the best candidate.
Example and analogy development
You use it to generate concrete illustrations that carry explanatory weight. You iterate until the example does real work in the argument rather than serving as decoration.
Adversarial critique
You use it to stress-test claims. You ask for objections, weak points, and alternative interpretations, then revise the text to survive those critiques.
Comparative evaluation
You present multiple versions of a section and ask for direct comparison. This forces the system to identify differences in clarity, force, and logic, helping you converge on a stronger version.
Constraint enforcement
You impose explicit rules on style and structure, then require the system to revise until those rules are satisfied. When it fails, you reject the output and restate constraints. Over time, this trains the system within the session.
Iterative narrowing
You move from broad prompts to increasingly specific ones. Early stages explore possibilities; later stages demand precision. Each round reduces variance and increases control.
Final polishing under strict standards
You use the system only after the argument is already sound. At this stage, it functions as a high-level editor, tightening language and removing residual weaknesses. If it cannot meet the required standard, you discard the output.
The control mechanism is simple: you never accept output that violates your constraints. Selection, rejection, and repeated tightening replace reliance on any single response.
That makes sense. I do think that LLM-assisted writing can be very good if it’s used carefully in a process like that. I just looked and saw that you wrote Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks. I was blown away by how comprehensive and thorough that was. It makes sense that you could only pull that off by using LLMs as collaborators; I doubt you could’ve written such a piece without that help (nobody else has, even though it’s a valuable contribution).
Except for point 1 in the recipe above, “idea generation”. Putting in a thesis and then having an LLM come up with ideas for how to support it sounds like exactly how you confuse yourself and everyone else. It’s asking for sycophancy and good-sounding but ultimately wrong arguments. At the least, you’d want to do a round of adversarial critique right away, instead of investing in writing a whole article based on a decent argument that might be wrong.
This is very different from academic practice, in which the whole goal is to create a decent argument even if it might be wrong. That’s an advocacy system like law, in which you assume that some other researcher will spend just as much time debunking your argument if it’s wrong.
But that’s a much worse system than asking everyone to act as their own critic before asking everyone else to read their arguments.
Which is the point of the LW credo: write to inform, not to persuade.
I think it’s worth looking at the new guidelines and the discussion.