New LessWrong Editor! (Also, an update to our LLM policy.)

There’s a new editor experience on LessWrong! Much of the editor page has been rearranged to make it much more WYSIWYG relative to published post pages. All of the settings live in panels that are hidden by default and can be opened by clicking the relevant buttons on the side of the screen. We also adopted Lexical as the new editor framework powering everything behind the scenes (we were previously using CKEditor).
That scary arrow button in the top-left doesn’t publish your post! It just opens the publishing menu.
Posts[1] now have automatic real-time autosave while you’re online (like Google Docs), but still support offline editing if your connection drops out. Point-in-time revisions will still get autosaved periodically, and you can always manually save your draft if you want a specific checkpoint.
The editor also has a slash menu now!
Good for many of your custom content needs!
You might be eyeing the last two items in that slash menu. This post will demo some of the new features, and I’ll demo two of them simultaneously by letting Opus 4.6 explain what they are:
Hi! I’m Claude, and I’m writing this from inside the post you’re reading right now. This block I’m in is one of the new editor features — let me walk you through a few of them.
LLM Content Blocks
This visually distinct block is an LLM Content Block. Authors can insert these into their posts to clearly attribute a section to a specific AI model. The block header shows which model generated the content, so readers always know what they’re looking at. It’s a way to be transparent about AI-assisted writing while keeping everything in one document.
Custom Iframe Widgets
The new editor supports custom interactive widgets embedded directly in posts. Authors can write HTML and JavaScript that runs in a sandboxed iframe right in the document — useful for interactive demos, visualizations, small tools, or anything else that benefits from being more than static text. There’s one just below this block, in fact.
Agent Integration
The editor now has an API that lets AI agents read and edit drafts collaboratively. If you share your draft’s edit link with an AI assistant (like me), it can insert text, leave Google Docs-style comments, make suggested edits, and add LLM content blocks and widgets — all showing up live in the editor. That’s how this entire block was written: not copy-pasted in, but inserted directly through the API while the post was open for editing.
To use it, open your post’s sharing settings and set “Anyone with the link can” to Edit, then copy the edit URL and share it with your AI assistant.
With Edit permissions, the agent can do everything: insert and modify text, add widgets, create LLM content blocks, and more. If you’d prefer to keep tighter control, Comment permissions still allow the agent to leave inline comments and suggested edits, which you can accept or reject individually.
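As a rough sketch of that permission model (the action names below are illustrative labels for this sketch, not actual API identifiers):

```python
# Sketch of the sharing-permission model described above. Action names
# are invented for illustration, not real LessWrong API identifiers.

COMMENT_ACTIONS = {"read", "inline_comment", "suggested_edit"}
EDIT_ACTIONS = COMMENT_ACTIONS | {
    "insert_text", "modify_text", "add_widget", "add_llm_block",
}

def allowed_actions(permission: str) -> set[str]:
    """Map a draft's "Anyone with the link can" setting to agent capabilities."""
    return {
        "edit": EDIT_ACTIONS,
        "comment": COMMENT_ACTIONS,
    }.get(permission, set())
```

The key point is that Comment is a strict subset of Edit: everything a commenting agent can do, an editing agent can also do, plus direct modification.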
Setup depends on which AI tool you’re using. Agent harnesses that can make HTTP requests directly — like Claude Code, Codex, or Cursor — should work out of the box. If you’re using Claude on claude.ai, you’ll need to add www.lesswrong.com to your allowed domains settings, then start a new chat. (The ChatGPT web UI doesn’t currently support whitelisting external domains, so it can’t be used for this feature yet.) Once that’s done, just paste your edit URL and ask Claude to read the post — the API is self-describing, so it’ll figure out the rest from there.
And here’s a small interactive widget, also written by Claude[2], to demonstrate custom iframe widgets:
Policy on LLM Use
You might be wondering what this means for our policy on LLM use.
Our initial policy was this:
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.
You were also permitted to put LLM-generated content into collapsible sections, if you labeled it as LLM-generated.
In practice, the “you should not use the stereotypical writing style of an AI assistant” part of the requirement meant that this was a de-facto ban on LLM use, which we enforced mostly consistently on new users and very inconsistently on existing users[3]. Bad!
To motivate our updated policy, we must first do some philosophy. Why do we care about knowing whether something we’re reading was generated by an LLM? The post “LLM-generated text is not testimony” has substantially informed my thinking on this question. Take the synopsis:
1. When we share words with each other, we don’t only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements.
2. As of 2025, LLM text does not have those elements behind it.
3. Therefore LLM text categorically does not serve the role for communication that is served by real text.
4. Therefore the norm should be that you don’t share LLM text as if someone wrote it. And, it is inadvisable to read LLM text that someone else shares as though someone wrote it.
I don’t think you even need to confidently believe in point 2[4] for the norm in point 4 to be compelling. It is merely sufficient that someone else produced the text.
Plagiarism is often considered bad because it’s “stealing credit” for someone else’s work. But it’s also bad because it’s misinforming your readers about your beliefs and mental models! What happens if someone asks you why you’re so confident about [proposition X]? It really sucks if the answer is “Oh, uh, I didn’t write that sentence, and re-reading it, it turns out I’m not actually that confident in that claim...”
This is also why having LLMs “edit” your writing is often pernicious. LLM editing, unless managed extremely carefully, often involves rephrasings, added qualifiers, and swapped vocabulary in ways that meaningfully change the semantic content of your writing. Very often this is in unendorsed ways, but this can be hard to pick up on because the typical LLM writing style has a tendency to make people’s eyes slide off of it[5].
With all that in mind, our new policy is this:
“LLM output” includes all of:

- text written entirely by an LLM
- text that was written by a human and then substantially[6] edited or revised by an LLM
- text that was written by an LLM and then edited or revised by a human

“LLM output” does not include:

- text that was written by a human and then lightly edited or revised by an LLM
- text written by a human that includes facts, arguments, examples, etc., which were researched, discovered, or developed with LLM assistance (if you “borrow language” from the LLM, that no longer counts as “text written by a human”)
- code (either in code blocks or in the new widgets)
“LLM output” must go into the new LLM content blocks. You can put “LLM output” into a collapsible section without wrapping it in an LLM content block if all of the content is “LLM output”. If it’s mixed, you should use LLM content blocks within the collapsible section to demarcate those parts which are “LLM output”.
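The classification rules above can be summarized as a small decision procedure. This is an informal sketch for clarity, not site code, and the parameter names are mine:

```python
def is_llm_output(written_by_llm: bool, llm_edit_level: str, is_code: bool) -> bool:
    """Classify text under the policy above (illustrative sketch only).

    written_by_llm: True if the text was originally written by an LLM.
    llm_edit_level: how much an LLM edited/revised the text after it was
                    written; one of "none", "light", "substantial".
    is_code: True for code blocks and widget code, which are exempt.
    """
    if is_code:
        return False                 # code is never "LLM output"
    if written_by_llm:
        return True                  # human revision doesn't launder LLM-written text
    return llm_edit_level == "substantial"   # human text, heavy LLM rewrite
```

So, for example, a human draft heavily rewritten by an LLM must go in an LLM content block, while the same draft lightly copyedited by one need not.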
We are going to be more strictly enforcing the “no LLM output” rule by normalizing our auto-moderation logic to treat posts by approved[7] users similarly to posts by new users—that is, they’ll be automatically rejected if they score above a certain threshold in our automated LLM content detection pipeline. Having spent a few months staring at what’s been coming down the pipe, we are also going to be lowering that threshold.
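In pseudocode terms, the gating rule is just a score cutoff. The 0.8 default below is a made-up placeholder; the real detector and threshold are not public:

```python
def should_auto_reject(llm_score: float, threshold: float = 0.8) -> bool:
    """Auto-reject a submission whose LLM-detector score exceeds the threshold.

    Placeholder values only: lowering the threshold (as announced) makes
    rejection strictly more aggressive, since more scores land above it.
    """
    return llm_score > threshold
```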
This does not change our existing quality bar for new user submissions. If you are a new user and submit a post that substantially consists of content inside of LLM content blocks, it is pretty unlikely that it will get approved[8]. This does not suddenly become wise if you’re an approved user. If you’re confident that people will want to read it, then sure, go ahead, but please pay close attention to the kind of feedback you get (karma, comments, etc), and if this proves noisy we’ll probably just tell people to cut it out.
As always, please submit feedback, questions, and bug reports via Intercom (or in the comments below, if you prefer).
Not comments or other content types that use the editor, like tags—those still have the same local backup mechanism they’ve always had, and you can still explicitly save draft comments, but none of them get automatically synced to the cloud as you type. Also, existing posts and drafts will continue to use the previous editor, and won’t have access to the new features.
Prompted by @jimrandomh.
For somewhat contingent reasons involving various choices we made with our moderation setup.
See my curation notice on that post for some additional thoughts and caveats.
I think this recent thread is instructive.
We’ll know it when we see it.
In the ontology of our codebase, a term which means “users whose content goes live without further review by the admins”, which is not true of users who haven’t posted or commented before, and is also not true of a smaller number of users who have.
I’m sure the people reading this will be able to conjure up some edge cases and counterexamples; go on, have fun.