This makes me wonder: would this effect be much weaker in embodied AI, since it would have a constant stream of video and audio that provides extremely strong evidence against the “I am Hitler” hypothesis?
Expertium
I think you will find the RLM paper interesting: https://arxiv.org/pdf/2512.24601
TLDR: instead of the LLM processing the input directly, the LLM is given access to a Python environment and the input prompt is stored as a variable. Then the LLM can do any standard stuff like
print(prompt[:100]) to read the beginning, or use regex to search for relevant keywords. Additionally, the LLM can recursively call itself on chunks of the prompt, hence Recursive Language Models (RLMs). This is like having a pseudo-infinite context window + the ability to make the output arbitrarily long as well.

The paper reports that even without specifically fine-tuning base LLMs to use this scaffold, the results on long-context tasks show a big improvement, with median costs being almost the same (though RLMs are significantly more expensive in a minority of cases). Fine-tuning improves the results further. Note that RLMs outperformed base LLMs even on tasks that fit into the context window of the base LLM, where theoretically chunking is not needed.
EDIT: I forgot to mention that while for GPT-5 the median costs with and without this scaffold were similar, the runtime was always several times longer, so there is a downside. For Qwen3-Coder-480B both cost and runtime were higher, though the authors note that Qwen was pretty bad at using this scaffold.
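To make the scaffold concrete, here is a toy sketch (mine, not the paper's code) of the idea: the input lives as a Python variable that the model inspects with ordinary code, and `call_llm` is a hypothetical stand-in for the underlying model call.

```python
import re

# The possibly huge input is stored as a Python variable rather than
# being fed directly into the model's context window.
prompt = "header\n" + "filler line\n" * 10_000 + "the needle is here\n"

# The model can peek at the input piecewise...
head = prompt[:100]

# ...or search it with ordinary tools instead of reading all of it:
matches = [m.start() for m in re.finditer(r"needle", prompt)]

# The recursion (the "R" in RLM): split the input into chunks, call
# yourself on each chunk, then answer from the partial results.
# `call_llm` is a hypothetical handle to the underlying model.
def recursive_answer(text, question, call_llm, chunk_size=4_000):
    if len(text) <= chunk_size:
        return call_llm(text, question)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [recursive_answer(c, question, call_llm, chunk_size)
                for c in chunks]
    return call_llm("\n".join(partials), question)
```

This is just the control flow; the paper's actual environment gives the model a full REPL, so it can choose its own chunking and search strategy rather than following a fixed recursion like this.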
A bit of a necrocomment, but I’d like to know if LLMs solving unsolved math problems has changed your mind.
Erdős problems 205 and 1051: AI contributions to Erdős problems · teorth/erdosproblems Wiki. Note: I don’t know what LLM Aristotle is based on, but Aletheia is based on Gemini.
Also this paper: [2512.14575] Extremal descendant integrals on moduli spaces of curves: An inequality discovered and proved in collaboration with AI
Usually when I read Anthropic’s blog posts, I feel like they want the takeaway to be something like “We came up with interesting methodology and got interesting results”.
But this post reads differently. It’s like a really weird attempt to reassure people that AIs won’t try to take over the world and that it will be business as usual. It reminds me a little of Altman’s “Gentle Singularity”, and that’s not a compliment. It’s like the takeaway is supposed to be “Don’t worry about the numbers and the methodology, that’s not important. What’s important is that nothing scary will happen, just business as usual”.
It would be interesting if you gave OpenAI, Google DeepMind, Anthropic, xAI and DeepSeek scores based on how well they fit your checklist.
and also LLMs seem to do well directly plugged into robots
I’m not so sure about that: Butter-Bench: Evaluating LLM Controlled Robots for Practical Intelligence
On a side note, I like how this excerpt below would be considered absurdist sci-fi 20 years ago, and now it’s real.
In hard RSI all memories and goals of the model remain unchanged (somehow) even though the architecture changes. In easy RSI model A trains model B from scratch.
GPT-5 training GPT-6 would be easy RSI. GPT-5 turning itself into something else with zero loss of information stored in GPT-5′s weights would be hard RSI.
Some feedback:
As others have pointed out, more concise responses would be better.
I feel like this chatbot over-relies on analogies related to your job.
Some of the outputs feel a bit incoherent. For example, it talks about jailbreaking, but then in the next sentence says that AI that is faking alignment is a disaster waiting to happen. It jumped from jailbreaking to alignment faking, but those are pretty different issues.
Personally, I wouldn’t link to Yudkowsky’s list of lethalities. If you want to use something for persuasion, it needs to be either easy for a layperson to understand or carry a sense of authority (like “world’s leading scientists and Nobel prize winners believe [X] is true”), and I don’t think Yudkowsky’s list meets either criterion.
Also, if that’s how “memetic warfare” will be done in the future—via debate-bots—then I don’t see how AI safety people are going to win, given that anti-AI-safety people have many billions of dollars to burn.
At that point, the shut down argument is no longer speculative, and you can probably actually do it.
To be clear, I’m not saying that’s a good plan if you can foresee all the developments in advance. But, if you’re uncertain about all of it, then it seems like there is likely to be a period of time before it’s necessarily too late when a lot of the uncertainty is resolved.
I think we are talking past each other, at least somewhat.
Let me clarify: even if humanity wins a fight against an intelligent-but-not-SUPER-intelligent AI (by dropping an EMP on the datacenter with that AI or whatever, the exact method doesn’t matter for my argument), we will still be left with the technical question “What code do we need to write and what training data do we need to use so that the next AI won’t try to kill everyone?”.
Winning against a misaligned AI doesn’t help you solve alignment. It might make an international treaty more likely, depending on the scale of damages caused by that AI. But if the plan is “let’s wait for an AI dangerous enough to cause something 10 times worse than Chernobyl to go rogue, then drop an EMP on it before things get too out of hand, then once world leaders crap their pants, let’s advocate for an international treaty”, then it’s one hell of a gamble.
How do we know the AI will want to survive?
Because LLMs are already avoiding being shut down: https://arxiv.org/abs/2509.14260 . And even if future superintelligent AI is radically different from LLMs, it will likely avoid being shut down as well. This is what people on lesswrong call a convergent instrumental goal:
If your terminal goal is to enjoy watching a good movie, you can’t achieve it if you’re dead/shut down.
If your terminal goal is to take over the world, you can’t achieve it if you’re dead/shut down.
If your goal is anything other than self-destruction, then self-preservation comes together in a bundle. You can’t Do Things if you’re dead/shut down.
Why should we think that there is no “in between” period where AI is powerful enough that it might be able to kill us and weak enough that we might win the fight?
Ok, let’s say there is an “in between” period, and let’s say we win the fight against a misaligned AI. After the fight, we will still be left with the same alignment problems, as other people in this thread pointed out. We will still need to figure out how to make safe, benevolent AI, because there is no guarantee that we will win the next fight, and the fight after that, and the one after that, etc.
If there is an “in between” period, it could be good in the sense that it buys more time to solve alignment, but we won’t be in that “in between” period forever.
I’ve still found them useful. If METR’s trend actually holds, they will indeed become increasingly more useful. If it actually holds to >1-month tasks, they may actually become transformative within the decade. Perhaps they will automate the within-paradigm AI R&D[1], and it will lead to a software-only Singularity that will birth an AI model capable of eradicating humanity.
But that thing will still not be an AGI.
No offense, but to me it seems like you are being overly pedantic with a term that most people use differently. If you surveyed people on lesswrong, as well as AI researchers, I’m pretty sure almost everyone (>90% of people) would call an AI model capable enough to eradicate humanity an AGI.
Let me put it another way—do you expect that “LLMs do not optimize for a goal” will still be a valid objection in 2030? If yes, then I guess we have a very different idea of how progress will go.
But frontier labs are deliberately working on making LLMs more agentic. Why wouldn’t they—AI that can do work autonomously is more economically valuable than a chatbot.
Another suggestion: https://cybench.github.io/
https://x.com/alexwei_/status/1946477742855532918
I believe this qualifies as “technical capability existing by end of 2025”.
For example, did any of the examples derive their improvement by some way other than chewing through bits of algebraicness?
I don’t think so.
https://arxiv.org/pdf/2506.13131
What did the system invent?
Example: matrix multiplication using fewer multiplication operations.
There were also combinatorics problems, “packing” problems (like multiple hexagons inside a bigger hexagon), and others. All of that is in the paper.
Also, “This automated approach enables AlphaEvolve to discover a heuristic that yields an average 23% kernel speedup across all kernels over the existing expert-designed heuristic, and a corresponding 1% reduction in Gemini’s overall training time.”
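To give a concrete sense of what “fewer multiplication operations” means: the classic example of this kind of scheme is Strassen’s 1969 algorithm, which multiplies 2×2 matrices with 7 scalar multiplications instead of the naive 8. (This is Strassen’s result, not AlphaEvolve’s; AlphaEvolve’s contribution was finding new schemes for larger sizes, e.g. 4×4 complex matrices in 48 multiplications.)

```python
def strassen_2x2(a, b):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8. AlphaEvolve searched
    for schemes of this kind for larger matrix sizes."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    # 7 products, each a multiplication of two linear combinations
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # recombine into the four entries of the product
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Saving one multiplication matters because these schemes are applied recursively to matrix blocks, so the saving compounds at every level of the recursion.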
How did the system work?
It’s essentially an evolutionary/genetic algorithm, with LLMs providing “mutations” for the code. Then the code is automatically evaluated, bad solutions are discarded, and good solutions are kept.
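A toy version of that loop (my own sketch, with plain functions standing in for the LLM mutator and the automatic evaluator) looks like this:

```python
import random

def evolve(seed, mutate, score, generations=50, pop_size=20, keep=5):
    """Toy evolutionary loop: `mutate` stands in for the LLM proposing
    code changes, `score` for the automatic evaluator; low-scoring
    candidates are discarded, high-scoring ones survive."""
    population = [seed]
    for _ in range(generations):
        # LLM-as-mutator: propose variants of surviving candidates
        children = [mutate(random.choice(population)) for _ in range(pop_size)]
        # automatic evaluation: keep only the best candidates
        population = sorted(population + children, key=score, reverse=True)[:keep]
    return population[0]
```

In AlphaEvolve the candidates are programs and the score comes from actually running them (e.g. kernel runtime); here, purely for illustration, you could evolve a number toward a target with `evolve(0.0, lambda x: x + random.uniform(-1, 1), lambda x: -(x - 7) ** 2)`.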
What makes you think it’s novel?
These solutions weren’t previously discovered by humans. Unless the authors just couldn’t find the right references, of course, but I assume the authors were diligent.
Would it have worked without the LLM?
You mean, “could humans have discovered them, given enough time and effort?”. Yes, most likely.
I’m surprised to see zero mentions of AlphaEvolve. AlphaEvolve generated novel solutions to math problems, “novel” in the “there are no records of any human ever proposing those specific solutions” sense. Of course, LLMs didn’t generate them unprompted, humans had to do a lot of scaffolding. And it was for problems where it’s easy to verify that the solution is correct; “low messiness” problems if you will. Still, this means that LLMs can generate novel solutions, which seems like a crux for “Can we get to AGI just by incrementally improving LLMs?”.
I think people are updating too much based on a measurement that even METR staff explicitly called noisy.
EDIT: I noticed that later in the post you did mention that it’s noisy.