I received an automatic rejection for an article. I have been writing on LessWrong since the beginning.

I did use AI to help write the article, which is why it was rejected, but I also spent considerable time working on it. I also used AI to help write an article I published in January, which now has 156 karma.

https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1

This new article is about why AI is going to destroy most jobs, and as I wrote in it, “This essay was written with help from AI. If I could not use AI productively to improve it, that would undermine either my argument or my claim to expertise.”

Finally, I had a stroke two years ago and have come to rely on AI when writing. Please allow me to publish this article.

James Miller
Professor of Economics, Smith College
Man, I am really sorry about the stroke.
The current rule is that you can use heavily AI-assisted writing, you just need to put it into an LLM content block:
Like this
We don’t evaluate content within LLM content blocks for LLM writing. You can give it a title that indicates substantial co-authorship.
That doesn’t seem to work. I put it in an LLM block, but now it says I already published the article. I can’t find it on the website, though, so unless there is a delay, something has gone wrong.
Ah, yeah, I don’t think we are handling edits gracefully, and we will look into how we can improve that at a process level. But also, at least for your latest post, I am seeing this:
I.e., you inserted an LLM content block at the top but didn’t actually wrap any content in it, so it wouldn’t end up being picked up by our systems.
The problems you are running into might actually be a great fit for our LLM-assistant integration. We have infrastructure that lets an LLM insert and edit arbitrary content in posts (both inside and outside of LLM content blocks), so this might let you deal with this kind of thing much more easily. You can click this button in the editor to open a Claude chat with our suggested prompt (though pasting it into many other models should also work):
Yes, I did that. Then I got some more help from ChatGPT, made the LLM block first, and put the text into it. I got something saying I had already submitted (published?) the article. My plan is to change the title and try again tomorrow.
I’d recommend simply making a new post; we currently don’t have infrastructure set up for automatically re-evaluating posts that have previously been rejected.
The new rule and the logic behind it are here: New LessWrong Editor! (Also, an update to our LLM policy.)
The new rule is based on some fairly subtle but important considerations around the epistemic pollution that comes from letting LLMs think for you. There’s also, I think, an issue with decoupling the signal of good writing from the presence of good ideas. It’s already pretty hard to find the good ideas on LW even when you can notice bad writing.
Like Habryka said, there’s an easy route within the rules: put it all in an LLM block. I also recommend that you describe what you did pretty thoroughly at the top, so people know what role you played in generating the ideas and refining them through the writing process. (You might even describe it more thoroughly in a collapsible block or at the bottom for those who want the gory details. I do; I think LLM-assisted writing can range from almost entirely using human judgment on the ideas, all the way to letting the LLM create and judge the ideas/claims—which is bad since LLMs have bad metacognitive skills relative to humans IMO.)
Then let the readers decide! (Unfortunately, low vote totals might just mean that few people clicked on it, rather than that many read it and objected to the LLM assistance.)
I tried the LLM block route and it didn’t work because (I think) the system thought I had already submitted the article. I will change the title and try again tomorrow. The LLM block route is not easy for someone who isn’t a programmer and doesn’t know what was meant by it, although ChatGPT helped me figure it out. I think AI+humans outperform humans at metacognitive skills, certainly for humans who have some brain damage (as I do).
I agree that AI+human is better than human alone, including for metacognitive skills—IF it’s used skillfully. And that includes people without brain damage (sorry for the stroke and glad you’re still able to engage intellectually!).
LLMs can be nearly as good for checking your thinking as another expert human, and better than non-expert humans—but only if you prompt them carefully for generating a variety of audience-relevant pushback and counterarguments, then make your own judgment about which are valuable/valid.
(Since you didn’t mention including the precise description of your methods, I’m once again going to strongly encourage you to do so. I expect the piece to go largely unread if it just says “LLM written” without explanation. We’ve got too much to read and have to make judgments somehow!)
I asked ChatGPT to tell me how I use it to help write papers, and this is what it output:

You use the system as a constrained collaborator embedded at specific points in the writing process, not as an end-to-end author.
Idea generation and expansion
You start with a core claim and use the system to enumerate adjacent arguments, examples, and counterarguments. The system functions as a breadth generator. You then select and refine only the strongest lines of argument.
Structural design
You use it to test different outlines, section orders, and narrative arcs. You ask for alternative organizations, then choose the one that increases clarity and momentum. The system helps surface structural options you might not have considered.
Local rewriting, not full drafting
You provide existing text and request targeted rewrites of specific sentences, paragraphs, or transitions. This keeps control over the argument while outsourcing micro-level phrasing improvements.
Transition construction
You use it to bridge sections cleanly. You ask for exact sentences that connect two ideas without repetition or padding, then select or refine the best candidate.
Example and analogy development
You use it to generate concrete illustrations that carry explanatory weight. You iterate until the example does real work in the argument rather than serving as decoration.
Adversarial critique
You use it to stress-test claims. You ask for objections, weak points, and alternative interpretations, then revise the text to survive those critiques.
Comparative evaluation
You present multiple versions of a section and ask for direct comparison. This forces the system to identify differences in clarity, force, and logic, helping you converge on a stronger version.
Constraint enforcement
You impose explicit rules on style and structure, then require the system to revise until those rules are satisfied. When it fails, you reject the output and restate constraints. Over time, this trains the system within the session.
Iterative narrowing
You move from broad prompts to increasingly specific ones. Early stages explore possibilities; later stages demand precision. Each round reduces variance and increases control.
Final polishing under strict standards
You use the system only after the argument is already sound. At this stage, it functions as a high-level editor, tightening language and removing residual weaknesses. If it cannot meet the required standard, you discard the output.
The control mechanism is simple: you never accept output that violates your constraints. Selection, rejection, and repeated tightening replace reliance on any single response.
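As an illustration only, here is a minimal sketch of how two of these steps (adversarial critique and constraint enforcement) might look if scripted against the OpenAI Python client rather than run through the chat interface. The model name, constraint list, and input file are placeholder assumptions, not part of the process described above.

```python
# Illustrative sketch only: scripting the "adversarial critique" and
# "constraint enforcement" steps with the OpenAI Python client.
# The model name, constraints, and file name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONSTRAINTS = (
    "No filler, no repetition, no unsupported claims; "
    "keep every paragraph under 80 words."
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def critique(draft: str) -> str:
    # Ask for objections and weak points, explicitly forbidding praise,
    # to push back against sycophancy.
    return ask(
        "List the strongest objections, weak points, and alternative "
        "interpretations a skeptical expert would raise against this "
        f"draft. Do not compliment it.\n\nDRAFT:\n{draft}"
    )

def revise(draft: str, objections: str) -> str:
    # Revise under explicit constraints; a human judges the result and
    # discards any output that violates them.
    return ask(
        "Revise the draft so it survives these objections:\n"
        f"{objections}\n\nHard constraints: {CONSTRAINTS}\n\n"
        f"DRAFT:\n{draft}"
    )

if __name__ == "__main__":
    draft = open("draft.txt").read()  # placeholder input file
    candidate = revise(draft, critique(draft))
    print(candidate)  # the human, not the script, accepts or rejects this
```

The point of the sketch is only the shape of the loop: generate pushback, revise under stated constraints, and keep human judgment as the final filter.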
That makes sense. I do think that LLM-assisted writing can be very good if it’s used carefully in a process like that. I just looked and saw that you wrote Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks. I was blown away by how comprehensive and thorough that was. It makes sense that you could only pull that off by using LLMs as collaborators; I doubt you could’ve written such a piece without that help (nobody else has, even though it’s a valuable contribution).
Except for point 1 in the recipe above, “idea generation.” Putting in a thesis and then having an LLM come up with ideas for how to support it sounds like exactly how you confuse yourself and everyone else. It’s asking for sycophancy and for good-sounding but ultimately wrong arguments. At the least you’d want to do a round of adversarial critique right away, instead of investing in writing a whole article based on a decent-sounding argument that might be wrong.
This is very different from academic practice, in which the whole goal is to create a decent argument even if it might be wrong. That’s an advocacy system like law, in which you assume that some other researcher will spend just as much time debunking your argument if it’s wrong.
But that’s a much worse system than asking everyone to act as their own critic before asking everyone else to read their arguments.
Which is the point of the LW credo: write to inform, not to persuade.
I think it’s worth looking at the new guidelines and the discussion.