LessWrong feature request: make it easy for authors to opt-out of having their posts in the training data.
If most smart people were put in the position of a misaligned AI and tried to take over the world, I think they’d be caught and fail.[1] If I were a misaligned AI, I think I’d have a much better shot at succeeding, largely because I’ve read lots of text about how people evaluate and monitor models, strategies schemers can use to undermine evals and take malicious actions without being detected, and creative paths to taking over the world as an AI.
A lot of that information is from LessWrong.[2] It’s unfortunate that this information will probably wind up in the pre-training corpus of new models (though sharing the information is often still worth it overall to share most of this information[3]).
LessWrong could easily change this for specific posts! They could add something to their robots.txt to ask crawlers looking to scrape training data to ignore the pages. They could add canary strings to the page invisibly. (They could even go a step further and add something like copyrighted song lyrics to the page invisibly.) If they really wanted, they could put the content of a post behind a captcha for users who aren’t logged in. This system wouldn’t be perfect (edit: please don’t rely on these methods. They’re harm-reduction for information where you otherwise would have posted without any protections), but I think even reducing the odds or the quantity of this data in the pre-training corpus could help.
I would love to have this as a feature at the bottom of drafts. I imagine a box I could tick in the editor that would enable this feature (and maybe let me decide if I want the captcha part or not). Ideally the LessWrong team could prompt an LLM to read users’ posts before they hit publish. If it seems like the post might be something the user wouldn’t want models trained on, the site could could proactively ask the user if they want to have their post be removed from the training corpus if it seems likely the user might want that.
As far as I know, no other social media platform has an easy way to try to avoid having their data up in the training corpus (and many actively sell it for this purpose). So LessWrong would be providing a valuable service.
The actual decisions around what should or shouldn’t be part of the pre-training corpus seem nuanced: if we want to use LLMs to help with AI safety, it might help if those LLMs have some information about AI safety in their pre-training corpus (though adding that information back in during post-training might work almost as well). But I want to at least give users the option to opt out of the current default.
That’s not to say all misaligned AIs would fail; I think there will be a period where AIs are roughly as smart as me and thus could at least bide their time and hide their misalignment without being caught if they’d read LessWrong and might fail to do so and get caught if they hadn’t. But you can imagine we’re purchasing dignity points or micro-dooms depending on your worldview. In either case I think this intervention is relatively cheap and worthwhile.
Of course much of it is reproduced outside LessWrong as well. But I think (1) so much of it is still on LessWrong and nowhere else that it’s worth it, and (2) the more times this information is reported in the pre-training dats the more likely the model is to memorize it or have the information be salient to it.
And the information for which the costs of sharing it aren’t worth it probably still shouldn’t be posted even if the proposal I outline here is implemented, since there’s still a good chance it might leak out.
I worry that canary strings and robots.txt are ~basically ignored by labs and that this could cause people to share things that on the margin they wouldn’t if there were no such option[1]. Morereliable methods exist, but they come with a lot of overhead and I expect most users wouldn’t want to deal with it.
Especially since as the post says, canaries often don’t serve the purpose of detection either with publicly accessible models claiming ignorance of them.
Probably I should have included a footnote about this. I’m well aware that this is not a foolproof mechanism, but it still seems better than nothing and I think it’s very easy to have a disclaimer that makes this clear. As I said in the post, I think that people should only do this for information they would have posted on LessWrong anyway.
I disagree that these things are basically ignored by labs. My guess is many labs put some effort into filtering out data with the canary string, but that this is slightly harder than you might think and so they end up messing it up sometimes. (They might also sometimes ignore it on purpose, I’m not sure.)
Even if labs ignore the canary string now having the canary string in there would make it much easier to filter these things out if labs ever wanted to do that in the future.
I also suggest using better methods like captchas for non-logged-in users. I expect something like this to work somewhat well (though it still wouldn’t be foolproof).
I disagree that these things are basically ignored by labs. My guess is many labs put some effort into filtering out data with the canary string, but that this is slightly harder than you might think and so they end up messing it up sometimes. (They might also sometimes ignore it on purpose, I’m not sure.)
Our infrastructure has been under attack since August 2024. Large Language Model (LLM) web crawlers have been a significant source of the attacks, and as for the rest, we don’t expect to ever know what kind of entity is targeting our sites or why.
This makes the big deployments that I know about include:
The Linux Kernel Mailing List archives
FreeBSD’s SVN (and soon git)
SourceHut
FFmpeg
Wine
UNESCO
The Science Olympiad Student Center
Enlightenment (the desktop environment)
GNOME’s GitLab
The first notable one is the Gnome GitLab. And
3:57
from what I’ve learned talking with the CIS admin team, it was a hailmary. Mhm.
4:03
Right. It’s like nothing else worked. What could we lose? So you’ve had Sorry. GitLab pod
4:11
instantly scaled down to three from six. So you’ve had discussions since then. Um
I think having copyrighted content in between might work, but it depends on the the labs on how they’re processing it but it being really difficult to prevent AI scraping seems to be largely accurate.
I think the canary string is slightly more likely to work than the robots.txt. And the things you link don’t say which labs. I totally believe some labs scrape aggressively and ignore all robots.txt, but it wouldn’t surprise me if others don’t! In my opinion, if even one frontier lab doesn’t train on this data, that’s still quite valuable. And it seems so cheap to just try this as a bit of a Hail Mary.
I think success of anubis (as shown above) does tell us that a lot of them are not doing scraping in good faith. The fact that performing crypographic computation is needed to stop them is ridiculous. I am not sure whether I have a list but there’s even a case of :
Report: huawei bound scrapers are bypassing Anubis #978
Where they started performing the computations to scrape the data.
After further investigation and communication. This is not a bug. The threat actor group in question installed headless chrome and simply computed the proof of work. I’m just going to submit a default rule that blocks huawei.
We received complaints from customers who had both disallowed Perplexity crawling activity in their robots.txt files and also created WAF rules to specifically block both of Perplexity’s declared crawlers: PerplexityBot and Perplexity-User. These customers told us that Perplexity was still able to access their content even when they saw its bots successfully blocked. We confirmed that Perplexity’s crawlers were in fact being blocked on the specific pages in question, and then performed several targeted tests to confirm what exact behavior we could observe.
There has been multiple reports that openAI also ignores robots.txt but it’s much less agressive.
Even if various pages are not used in pretraining, they can still be read by the models in deployment, right? When Claude or GPT 5 go out and finds up to date information on the internet, they’re not checking the robots.txt files?
Absolutely! But having models that don’t know how to effectively scheme without access to external resources is still very useful. As just one example, we could interrogate the models and give them honeypots without giving them address to the internet.
Labs could also ban some of their models from reading certain webpages or ingesting certain information (as in filter at inference time all inputs to the model) in deployment (especially for models performing especially high-stakes tasks where sabotage would be easier).
I also suspect digesting a LessWrong post the model reads in deployment is more likely to involve explicit reasoning about the post in the CoT (or probes etc) and thus to be caught. By contrast, if a model has seen lots of of transcripts of misalignment in pre-training, the misaligned actions might be intuitive and second-nature, and thus harder to detect.
LessWrong feature request: make it easy for authors to opt-out of having their posts in the training data.
If most smart people were put in the position of a misaligned AI and tried to take over the world, I think they’d be caught and fail.[1] If I were a misaligned AI, I think I’d have a much better shot at succeeding, largely because I’ve read lots of text about how people evaluate and monitor models, strategies schemers can use to undermine evals and take malicious actions without being detected, and creative paths to taking over the world as an AI.
A lot of that information is from LessWrong.[2] It’s unfortunate that this information will probably wind up in the pre-training corpus of new models (though sharing the information is often still worth it overall to share most of this information[3]).
LessWrong could easily change this for specific posts! They could add something to their robots.txt to ask crawlers looking to scrape training data to ignore the pages. They could add canary strings to the page invisibly. (They could even go a step further and add something like copyrighted song lyrics to the page invisibly.) If they really wanted, they could put the content of a post behind a captcha for users who aren’t logged in. This system wouldn’t be perfect (edit: please don’t rely on these methods. They’re harm-reduction for information where you otherwise would have posted without any protections), but I think even reducing the odds or the quantity of this data in the pre-training corpus could help.
I would love to have this as a feature at the bottom of drafts. I imagine a box I could tick in the editor that would enable this feature (and maybe let me decide if I want the captcha part or not). Ideally the LessWrong team could prompt an LLM to read users’ posts before they hit publish. If it seems like the post might be something the user wouldn’t want models trained on, the site could could proactively ask the user if they want to have their post be removed from the training corpus if it seems likely the user might want that.
As far as I know, no other social media platform has an easy way to try to avoid having their data up in the training corpus (and many actively sell it for this purpose). So LessWrong would be providing a valuable service.
The actual decisions around what should or shouldn’t be part of the pre-training corpus seem nuanced: if we want to use LLMs to help with AI safety, it might help if those LLMs have some information about AI safety in their pre-training corpus (though adding that information back in during post-training might work almost as well). But I want to at least give users the option to opt out of the current default.
That’s not to say all misaligned AIs would fail; I think there will be a period where AIs are roughly as smart as me and thus could at least bide their time and hide their misalignment without being caught if they’d read LessWrong and might fail to do so and get caught if they hadn’t. But you can imagine we’re purchasing dignity points or micro-dooms depending on your worldview. In either case I think this intervention is relatively cheap and worthwhile.
Of course much of it is reproduced outside LessWrong as well. But I think (1) so much of it is still on LessWrong and nowhere else that it’s worth it, and (2) the more times this information is reported in the pre-training dats the more likely the model is to memorize it or have the information be salient to it.
And the information for which the costs of sharing it aren’t worth it probably still shouldn’t be posted even if the proposal I outline here is implemented, since there’s still a good chance it might leak out.
I worry that canary strings and robots.txt are ~basically ignored by labs and that this could cause people to share things that on the margin they wouldn’t if there were no such option[1]. More reliable methods exist, but they come with a lot of overhead and I expect most users wouldn’t want to deal with it.
Especially since as the post says, canaries often don’t serve the purpose of detection either with publicly accessible models claiming ignorance of them.
Probably I should have included a footnote about this. I’m well aware that this is not a foolproof mechanism, but it still seems better than nothing and I think it’s very easy to have a disclaimer that makes this clear. As I said in the post, I think that people should only do this for information they would have posted on LessWrong anyway.
I disagree that these things are basically ignored by labs. My guess is many labs put some effort into filtering out data with the canary string, but that this is slightly harder than you might think and so they end up messing it up sometimes. (They might also sometimes ignore it on purpose, I’m not sure.)
Even if labs ignore the canary string now having the canary string in there would make it much easier to filter these things out if labs ever wanted to do that in the future.
I also suggest using better methods like captchas for non-logged-in users. I expect something like this to work somewhat well (though it still wouldn’t be foolproof).
https://www.fsf.org/blogs/sysadmin/our-small-team-vs-millions-of-bots
https://xeiaso.net/notes/2025/anubis-works/
I think having copyrighted content in between might work, but it depends on the the labs on how they’re processing it but it being really difficult to prevent AI scraping seems to be largely accurate.
I think the canary string is slightly more likely to work than the robots.txt. And the things you link don’t say which labs. I totally believe some labs scrape aggressively and ignore all robots.txt, but it wouldn’t surprise me if others don’t! In my opinion, if even one frontier lab doesn’t train on this data, that’s still quite valuable. And it seems so cheap to just try this as a bit of a Hail Mary.
I think success of anubis (as shown above) does tell us that a lot of them are not doing scraping in good faith. The fact that performing crypographic computation is needed to stop them is ridiculous. I am not sure whether I have a list but there’s even a case of :
https://github.com/TecharoHQ/anubis/issues/978
Where they started performing the computations to scrape the data.
https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/
There has been multiple reports that openAI also ignores robots.txt but it’s much less agressive.
Also, the
X-Robots-Tagheader can be set on individual page responses (in contrast to robots.txt being for the whole domain).Sorry this is what I meant, you’re right.
Even if various pages are not used in pretraining, they can still be read by the models in deployment, right? When Claude or GPT 5 go out and finds up to date information on the internet, they’re not checking the robots.txt files?
Absolutely! But having models that don’t know how to effectively scheme without access to external resources is still very useful. As just one example, we could interrogate the models and give them honeypots without giving them address to the internet.
Labs could also ban some of their models from reading certain webpages or ingesting certain information (as in filter at inference time all inputs to the model) in deployment (especially for models performing especially high-stakes tasks where sabotage would be easier).
I also suspect digesting a LessWrong post the model reads in deployment is more likely to involve explicit reasoning about the post in the CoT (or probes etc) and thus to be caught. By contrast, if a model has seen lots of of transcripts of misalignment in pre-training, the misaligned actions might be intuitive and second-nature, and thus harder to detect.