RobertM
LessWrong dev & admin as of July 5th, 2022.
The search functionality in our mentions is slightly broken right now (it doesn’t appropriately weigh user results), but it’s the same @ prefix it was before.
This policy is almost strictly more permissive than the previous policy, so I think this is a pretty confused argument.
Yes, obviously content in the new LLM Content Block elements is excluded from the automated LLM content detection; how would it even be possible to use them otherwise?
Great question. For the sake of not getting your content auto-rejected, you should put it into either an LLM content block or a collapsible section, and you can put whatever label you think is descriptive on either (e.g. instead of “Claude Opus 4.6” you can write “[name of author], suspected LLM usage” or something).
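For illustration, the shape being described might be modeled like this; the field names are hypothetical, not LessWrong’s actual document schema:

```typescript
// Hypothetical sketch of a labeled block; names are illustrative and not
// LessWrong's actual document schema.
interface LabeledBlock {
  kind: "llmContentBlock" | "collapsibleSection";
  label: string;   // free-form, author-chosen label shown on the block
  content: string; // the (possibly LLM-generated) text the block wraps
}

// The label can be whatever is descriptive:
const block: LabeledBlock = {
  kind: "llmContentBlock",
  label: "[name of author], suspected LLM usage",
  content: "…the quoted LLM output…",
};
```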
I think it’d be much easier for you to explain why you’re worried about your previous posts under the new rules, and then I can explain why they probably would not have been fine under the old rules, either.
There is very little that would have been permitted under the old rules that is now forbidden.
If you are confused about how to make your several-month-old drafts comply with the new rules, that means either that they would not have complied with the old rules if published as-is, or that you are confused about the new rules.
(People will almost never leave feedback that says “[x] worked just fine for me”. I don’t think I particularly have trouble distinguishing when I’m in an LLM content block vs. not, though I’m not especially fast at the minigame. I wouldn’t be surprised if a lot of people had more trouble than “a few seconds”, though.)
@jmh please avail yourself of the new LLM content block (or a collapsible section) for the LLM output.
> And I think that the second point in the quoted post, “As of 2025, LLM text does not have those [mental agency] elements behind it”, might be something about which reasonable people could differ, especially as time moves forward and more sophisticated models are released.
Note that I also explicitly acknowledge this in my curation notice for that post (and that I disagree with the strength of the claim). In any case, Tsvi’s post is not the moderation policy, and the moderation policy is not taking a stance on whether LLM text meaningfully constitutes “testimony” (only that it does not constitute the testimony of the human publishing the post).
Can you say more? There is very little that would have been permitted under the old rules that is now forbidden.
Hadn’t seen that comment at the time I left my previous comment; currently thinking about it. (Tentatively think that the original post contained more errors than I would have wanted, that the core thesis is still fine, and that most of the objections focusing on the use of LLMs as part of the research process of the post are barking up the wrong tree. Maybe don’t endorse the curation ex post, less sure about ex ante, still need to spend more time thinking about what updates to make here.)
> toggle Markdown and not-Markdown with the new system which seems like it is just straightforwardly a bug
This is an option that you should have available to you if you have the markdown editor enabled in your user settings. It should be in the settings panel (the one with the gear icon). I don’t recommend relying on it; editor format conversion has never been particularly reliable (though we might improve the situation with markdown in particular for LLM-integration related reasons).
Are there other bugs you’ve noticed?
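(As a toy illustration of why editor format conversion tends to be lossy, and not our actual converter: a naive HTML-to-Markdown pass only knows the constructs Markdown can express, and silently flattens everything else, so round-tripping can’t recover the original document.)

```typescript
// Toy illustration, not LessWrong's converter: a naive HTML -> Markdown
// pass that only handles a couple of tags. Everything else is flattened
// to plain text, so converting back and forth is lossy.
function naiveHtmlToMarkdown(html: string): string {
  return html
    .replace(/<\/?(?:strong|b)>/g, "**")
    .replace(/<\/?(?:em|i)>/g, "*")
    .replace(/<[^>]+>/g, ""); // tables, footnotes, underline, ... all vanish
}

console.log(naiveHtmlToMarkdown("<p><strong>bold</strong> and <u>underlined</u></p>"));
// -> "**bold** and underlined" (the underline can't be recovered)
```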
> it probably violates the new LLM rules because it is about the bottom line results that one can get from accepting various people (digital people or human journalists or academics or whoever) with various commitments to truth and epistemics at “face value”
You should indeed probably have included whatever section is mostly LLM-written in the new content block (I’m guessing the bit between “QUOTE BEGINS” and “QUOTE ENDS”?)[1]. I don’t think it violates the new rules because it’s “about” anything in particular (the rules contain no reference to subject matter), and I don’t understand why you think that.
[1] But the rules are new, so I’m hardly going to bring down the hammer here...
I continue to endorse that curation and think people have psyched themselves out into thinking that the post is full of major errors for basically no good reason.
EDIT: hadn’t seen Jeffrey’s most recent comment at the time that I wrote this comment, see follow-up.
This depends on what you mean by “good cyborg writing”, but I agree that the current feature doesn’t neatly cleave reality at its joints. We’re thinking about how to allow more nuanced representations, but this is a pretty tricky novel problem and increasing the surface area of a thing like this has a bunch of costs in terms of people being able to understand what’s going on (for both authors and readers).
You want the slash menu. Type “/” in your editor. (Also, typing “+++” followed by a space or newline is a shortcut for creating collapsible sections.)
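(Sketch of how shortcuts like this usually work under the hood; this is illustrative, not the actual editor code: when a space or newline arrives, the editor checks whether the line so far is the trigger text, and if so swaps it for the block.)

```typescript
// Illustrative input-rule check; not LessWrong's actual editor internals.
// Returns true when the "+++" trigger should be replaced by a collapsible
// section, i.e. the line so far is exactly "+++" and the user just typed
// a space or newline.
function collapsibleTriggerFired(lineSoFar: string, typedChar: string): boolean {
  return lineSoFar === "+++" && (typedChar === " " || typedChar === "\n");
}

console.log(collapsibleTriggerFired("+++", " "));  // true:  create the section
console.log(collapsibleTriggerFired("++", "+"));   // false: just keep the text
```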
The second is definitely not falling into the central failure mode I described there, yeah; arguably people are just making a mistake if they’re not doing that for serious writing—enabling that kind of thing is exactly why we built the feature to allow LLMs to leave inline comments/etc on your posts by just giving them a link!
All four of those posts look fine to me and none of them would’ve gotten flagged by the automated LLM content detection.
If your epistemic state with respect to the claims made in your posts is such that you aren’t worried about receiving questions like “Why are you so confident in [proposition X]?”, and then it turning out that you don’t in fact endorse what’s written because an LLM said something meaningfully different from what you would have said, then I think the end result is fine.

If you want to link to this comment on future posts so that readers understand how LLMs were used in the process of writing them, I think that’d be fine, but supererogatory.
Yes, we spent a while investigating this and thinking about the security risks. Sandboxed srcdoc iframes effectively have no origin, so in theory ought to be safe, but do sometimes end up running within the same process (though this is also true of remote-origin iframes; the heuristics here are browser-specific and complicated). Effectively, this is a risk if someone discovers a vulnerability that allows breaking out of that security boundary, which would be a pretty big deal as far as browser exploits go.
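For reference, the basic pattern under discussion looks something like this (a minimal sketch, not our exact implementation):

```typescript
// Minimal sketch of the sandboxed-srcdoc pattern; not LessWrong's exact code.
// With a `sandbox` attribute present and `allow-same-origin` omitted, the
// embedded document gets an opaque origin: no cookies, no storage, and no
// access to the parent page's DOM, even though its scripts can still run.
function embedUntrustedHtml(untrustedHtml: string): HTMLIFrameElement {
  const iframe = document.createElement("iframe");
  iframe.setAttribute("sandbox", "allow-scripts"); // deliberately no allow-same-origin
  iframe.srcdoc = untrustedHtml;
  return iframe;
}

document.body.appendChild(embedUntrustedHtml("<p>untrusted content here</p>"));
```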
It might be the case that the FTC could bring an anti-trust case if the firms adopted such a framework. But:
- Anthropic’s latest RSP already includes “competitor-contingent commitments” that might plausibly run afoul of the same issues (though they’re weaker/fuzzier than niplav’s proposal), so clearly at least one of the firms involved is not so deathly afraid of FTC action that it wouldn’t make noises on the subject.
- The FTC action is not guaranteed to succeed.
- The FTC might not take action at all.
- The FTC action, even if undertaken and likely to succeed, will almost certainly not succeed immediately; one might hope the FTC only decides to bring it once the firms actually pause R&D (rather than when the firms adopt the framework, though it could probably do so on adoption if it wanted to). If so, the point at which the firms decide to pause is hopefully one where they can also produce sufficiently scary demos that they can lobby lawmakers to step in and render the anti-trust question moot.
So, ultimately, I don’t think the question of legality[1] should be a very strong influence on their decision-making, though they probably shouldn’t put this kind of reasoning into their own internal conversations on the subject.
[1] Which in this case doesn’t seem so overdetermined that the courts would give them a stink-eye for even thinking they could get away with trying something like it.