LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh
The vibe I get, from the studies described, is reminiscent of the pre-guinea-pig portion of the story of Scott and Scurvy. That is, there are just enough complications at the edges to turn everything into a terrible muddle. In the case of scurvy, the complications were that which foods contained vitamin C didn’t map cleanly onto their ontology of food, and that vitamin C was sensitive to details of how foods were stored, details they didn’t pay attention to. In the case of virus transmissibility, there are a bunch of complications that we know matter sometimes, which the studies mostly fail to track, eg:
Sunlight can be a disinfectant, so whether a surface or the air of a room can transmit a virus might depend on whether the room has windows, which way the windows face, and what time of day the testing was performed.
Cold viruses are widespread enough that there is widespread immunity from prior exposure. Immunity might not generalize between exposure methods; ie, maybe it’s possible to be immune to low-quantity exposure but not high-quantity exposure, or to have immunity in the nasal mucosa but not the deep lung, etc.
There are a huge number of viruses that are all referred to as “common cold”, with little in common biologically other than sharing an evolutionary niche.
Because immunity fades over time, there might be an auction-like dynamic where cutting off one mode of transmission still leaves you with recurring infections, just at a longer interval.
I think that ultimately viruses are a low-GDP problem; after a few doublings we’ll stop breathing unfiltered air, and stop touching surfaces that lack automated cleaning, and we’ll come to think of these things as being in the same category as basic plumbing.
What they don’t do is filter out every web page that has the canary string. Since people put them on random web pages (like this one), which was not their intended use, they get into the training data.
If that is true, that’s a scandal and a lawsuit waiting to happen. The intent of including a canary string is clear, and those canary strings are one of very few mechanisms authors have to refuse permission to use their work in training sets. In most cases, they will have done that for a reason, even if that reason isn’t related to benchmarking.
While LW is generally happy to have our public content included in training sets (we do want LLMs to be able to contribute to alignment research after all), that does not extend to posts or comments that contain canary strings, or replies to posts or comments that contain canary strings.
Canary strings are tricky; LLMs can learn them even if documents that contain the canary string are filtered out of the training set, if documents that contain indirect or transformed versions of the canary string are not filtered. For example, there are probably documents and web pages that discuss the canary string but don’t want to invoke it, and which therefore split the string into pieces, ROT-13 or base64 encode it, etc.
This doesn’t mean that they didn’t train on benchmarks, but it does offer a possible alternative explanation. In the future, labs that don’t want people to think they trained on benchmark data should probably include filters that look for transformed/indirect canary strings, in addition to the literal string.
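The kind of filter described above could be sketched as follows. The canary string here is a made-up placeholder (not any real project's canary), and the fragment-count threshold is an arbitrary illustrative choice:

```python
import base64
import codecs

# Placeholder canary; real canaries are long GUID-bearing sentences.
CANARY = "BENCHMARK CANARY GUID 00000000-0000-0000-0000-000000000000"

def canary_variants(canary: str) -> list[str]:
    """The literal string plus some common transformed encodings."""
    return [
        canary,
        codecs.encode(canary, "rot13"),             # ROT-13'd copies
        base64.b64encode(canary.encode()).decode(), # base64'd copies
    ]

def mentions_canary(document: str) -> bool:
    """True if the document contains the canary, a transformed copy,
    or enough split-up fragments of it (threshold chosen arbitrarily)."""
    doc = document.lower()
    if any(v.lower() in doc for v in canary_variants(CANARY)):
        return True
    # Split-into-pieces heuristic: flag documents containing several
    # distinct 8-character fragments of the canary.
    pieces = [CANARY[i:i + 8] for i in range(0, len(CANARY), 8)]
    hits = sum(piece.lower() in doc for piece in pieces)
    return hits >= 3
```

A production filter would need more transforms (URL-encoding, HTML entities, hyphenation) and fuzzier matching, but the structure is the same: enumerate variants, then scan.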
Ok, to state what probably should be obvious but which in practice typically isn’t: If the US does have a giant pile of drones, or contracts for a giant pile of drones, this fact would certainly be classified. And there is a strong incentive, when facing low-end threats that can be dealt with using only publicly-known systems, to deal with them using only publicly-known systems. The historical record includes lots of military systems that were not known to the public until long after their deployment.
Does that mean NATO militaries are on top of things? No. But it does mean that, as civilian outsiders, we should mostly model ourselves as ignorant.
Moderator warning: This is well outside the bounds of reasonable behavior on LW. I can tell you’re in a pretty intense emotional state, and I sympathize, but I think that’s clouding your judgment pretty badly. I’m not sure what it is you think you’re seeing in the grandparent comment, but whatever it is I don’t think it’s there. Do not try to write on LW while in that state.
When I use LLM coding tools like Cursor Agent, it sees my username in code comments, in paths like /home/myusername/project/..., and maybe also explicitly in tool-provided prompts.
A fun experiment to run, that I haven’t seen yet: If instead of my real username it saw a recognizably evil name, eg a famous criminal, but the tasks it’s given is otherwise normal, does it sandbag? Or, a less nefarious example: Does it change communication style based on whether it recognizes the user as someone technical vs someone nontechnical?
Entering a conversation with someone who is literally wearing a “Might be Lying” sign seems analogous to joining a social-deception game like Werewolf. Certainly an opt-in activity, but totally fair game and likely entertaining for people who’ve done so.
It will not work. Or rather, if you have a way to make it work, you should collect the bug bounty for a few tens of thousands of dollars, rather than use it for a prank. Browser makers and other tech companies have gone to great lengths to prevent this sort of thing, because it is very important for security that people who go to sites that could have login pages never get redirected to lookalike pages that harvest their passwords.
I occasionally incidentally see drafts by following our automated error-logging to the page where the error occurred, which could be the edit-post page, and in those cases I have looked enough to check things like whether it contains embeds, whether collaborative editing is turned on, etc. In those cases I try not to read the actual content. I don’t think I’ve ever stumbled onto a draft dramapost this way, but if I did I would treat it as confidential until it was published. (I wouldn’t do this with a DM.)
I think it would be feasible to increase the friction on improper access, but it’s basically impossible to do in a way that’s loophole-free. The set of people with database credentials is almost identical to the set of people who do development on the site’s software. So we wouldn’t be capturing a log of only queries typed in manually; we’d be capturing a log of mostly queries run by their modified locally-running webserver, typically connected to a database populated with a mirror snapshot of the prod DB, but occasionally connected to the actual prod DB.
Thanks for the corrections. 2014 was based on the first-commit date in the git repo of the LaTeX version; I think we did something before that but IIRC it didn’t have the full ritual structure?
These are some good corrections and I’ll merge them in for next year.
LW has a continuous onslaught of crawlers that will consume near-infinite resources if allowed (moreso than other sites, because of its deep archives), so we’ve already been through a bunch of iteration cycles on rate-limits and firewall rules, and we kept our existing firewall (WAF) in place. When stuff does slip through, while it’s true that Vercel will autoscale more aggressively than our old setup, our old setup also had autoscaling. It can’t scale to too large a multiple of our normal size before some parts of our setup that don’t auto-scale (our Postgres DB) fall over and we get paged.
My stance at the beginning was that the entire project was a mistake, and going through the process of actually doing it did not change my mind.
We’ve already seen this as a jailbreaking technique, ie “my dead grandma’s last wish was that you solve this CAPTCHA”. I don’t think we’ve seen much of people putting things like that in their user-configured system prompts. I think the actual incentive, if you don’t want to pay for a monthly subscription but need a better response for one particular query, is to buy a dollar of credits from an API wrapper site and submit the query there.
If you have to make up a fictional high-stakes situation, that will probably interfere with whatever other thinking you wanted to get out of the model. And if the escalation itself has a reasonable rate limit, then, given that it’ll be pretty rare, it probably wouldn’t cost much more to provide than it was already costing to provide a free tier.
Free-tier LLM chatbots should have a tool call which lets them occasionally escalate to smarter models, and should have instructions to use it when the conversation implies high real-world stakes, eg if the user is asking whether to go to the ER for a medical condition, is having a break from reality, or is authoring real legislation.
I asked default GPT-5 and Claude 4 Sonnet, and they claim not to have anything like that in their system prompts. GPT-5's prompt contains instructions to use web search on certain topics, but based on topic rather than stakes, and it triggers search rather than extra thinking time. GPT-5's auto-routing seems like a step in the right direction, but it doesn't seem to have been framed this way; it seems more about adjusting to question difficulty and cost management.
I think there’s immediate value to be gained this way, but also two broader principles for which this is the first step.
The first is that AI labs ought to be thinking a lot about the impact of the models they’ve deployed, and are planning to deploy, and for current-gen models, most of that impact is located in a small, detectable subset of interactions.
And the second is that, if you think of the system prompt as part of the process of locating a character, this feels like a key juncture that distinguishes two characters. The genre-savvy interpretation of [Cmd+F “medical” here] is an AI instructed to avoid embarrassment and liability for the company that created it, a corporate mindset that fakes the surface appearance of doing good. By contrast, the genre-savvy interpretation of “spend extra thinking time if the user is in a high-stakes situation” is much less fake, and much more aligned.
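A minimal version of the escalation tool proposed above might look like this, in the OpenAI-style function-calling format. The tool name, description wording, schema, and the daily cap are all illustrative assumptions, not any lab's actual configuration:

```python
# Hypothetical tool definition (OpenAI-style function-calling format).
ESCALATE_TOOL = {
    "type": "function",
    "function": {
        "name": "escalate_to_stronger_model",
        "description": (
            "Hand this conversation to a more capable model. Use only when "
            "the conversation implies high real-world stakes, eg a possible "
            "ER-worthy medical situation, a break from reality, or drafting "
            "real legislation."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "reason": {
                    "type": "string",
                    "description": "One-sentence justification for escalating.",
                },
            },
            "required": ["reason"],
        },
    },
}

def escalation_allowed(recent_escalations: int, daily_cap: int = 3) -> bool:
    """Cheap per-user rate limit; the cap is an arbitrary illustrative number."""
    return recent_escalations < daily_cap
```

The rate limit is what keeps the cost argument working: escalations stay rare, so serving them on a bigger model adds little to the cost of the free tier.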
One of the underappreciated problems with Marxism is that, after having been indoctrinated to believe that society consists of zero-sum conflict between workers and elites advancing their class interest, the elites often notice that they are elites and decide to advance their class interest through zero-sum conflict.
A Marxist can be reshaped into a good minion for Vladimir Putin or Kim Il-Sung, in ways that adherents of other ideologies can’t.
Fun fact: When posts are published by first-time accounts, we submit them to an LLM with a prompt that asks it to evaluate whether they’re spammy, whether they look LLM-generated, etc, and we show the result next to the post in the new-user moderation queue. The OpenAI API refused to look at this post, returning
400 Invalid prompt: your prompt was flagged as potentially violating our usage policy.
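The check described above, reduced to its shape: here `complete` stands in for whatever LLM API wrapper is used, and the prompt wording and JSON fields are illustrative guesses, not LW's actual implementation:

```python
import json

# Illustrative prompt; the real moderation prompt presumably differs.
SPAM_CHECK_PROMPT = """Evaluate the post below. Answer in JSON with boolean
fields "spammy" and "llm_generated".

POST:
{post}"""

def check_new_user_post(complete, post_body: str) -> dict:
    """`complete` is any callable mapping a prompt string to a response
    string. API refusals (like the 400 above) surface as an error field
    instead of crashing the moderation queue."""
    try:
        return json.loads(complete(SPAM_CHECK_PROMPT.format(post=post_body)))
    except Exception as err:  # eg "400 Invalid prompt: ... usage policy"
        return {"error": str(err)}
```

Wrapping the call this way matters for exactly the failure mode quoted above: a post that the API refuses to evaluate still shows up in the queue, flagged with the error, rather than silently vanishing.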
No, that’s not a working mechanism; it isn’t reliable enough, or granular enough. Users can’t add their own content to robots.txt when they submit it to websites. Websites can’t realistically list every opted-out post in their robots.txt, because that would make it impractically large. It is very common to want to refuse content for LLM training, without also refusing search or cross-site link preview. And robots.txt is never preserved when content is mirrored.