What is LessWrong's view on the biggest Chinese lab's score on the Epoch Capabilities Index or METR if the U.S. government tells all the labs to stop publicly releasing SOTA AI? 18 months after that, will China still be on its current trend?
We have finally given China the inputs to distill the evil central-planner Superintelligence from the famous book “Don’t give China the inputs to distill the evil central planner Superintelligence”
Jokes aside, what is Anthropic thinking? If they put their models on an API for everyone to use, of course they’ll get distilled. If it’s just a natsec thing, sell it directly to the Department of War and call it a day.
It’s not like Anthropic is the player who settled the largest intellectual property suit in history.
Someone, somewhere, should put some effort into answering the following question: is the cost of AI tasks increasing so fast that at some point, LLMs coding at a certain time horizon will be more expensive than humans?
In mid-2022 I spent $2/mo on tokens for GPT-3.5 for all my SOTA LLM needs. I now spend thousands, despite the insane decline in price per token.
I have some moderate confidence the following is happening:
The number of tokens used to solve tasks of ever-greater difficulty is increasing faster than the per-token price is declining.
The price decline of tokens is in the neighborhood of 90% per year. Tokens per task at the SOTA frontier are increasing 100x to 10,000x per year, in my experience.
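A back-of-the-envelope sketch of the claim above, using my hypothetical numbers (a ~90% yearly per-token price decline and the low end, 100x/year, of my tokens-per-task estimate), not measured data:

```python
def relative_cost_per_task(years, price_factor=0.1, token_factor=100):
    """Cost of a SOTA task after `years` years, relative to today.

    price_factor: fraction of the per-token price retained each year
                  (0.1 = a 90% yearly decline).
    token_factor: yearly growth in tokens needed per SOTA task
                  (100 = the low end of my estimate).
    """
    return (price_factor ** years) * (token_factor ** years)

# Net effect: even at the low end, cost per SOTA task grows
# roughly 10x per year, since 100 * 0.1 = 10.
for year in range(4):
    print(year, relative_cost_per_task(year))
```

At the high end (10,000x/year token growth) the same arithmetic gives ~1,000x yearly cost growth per task, which is roughly consistent with going from $2/mo to thousands in a few years.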
Does any takeoff scenario project the compute available for LLM-led experiments?
I assume today you can model output per human AI researcher as: number of experiments × FLOPs per experiment × research taste.
Over time, there's a lower bound on how many FLOPs you need per experiment.
My question is how realistic the improvement from having LLMs do AI research is, to the extent this means the LLMs will be conducting the experiments themselves. You'd probably need to 100-1000x the compute available for small-scale experiments to see any meaningful acceleration.
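A toy version of the model above, with every number hypothetical: output scales as experiments × FLOPs × taste, but experiments below some FLOP floor tell you nothing, so splitting a fixed compute budget across too many parallel LLM-run experiments buys no acceleration.

```python
def research_output(n_experiments, flops_per_experiment, taste,
                    min_flops=1e18):
    """Toy research-output score (all constants are made up).

    Experiments below the FLOP floor are assumed to produce
    no usable signal and count for nothing.
    """
    if flops_per_experiment < min_flops:
        return 0.0
    return n_experiments * flops_per_experiment * taste

# With a fixed budget, more experiments means fewer FLOPs each;
# past the floor, extra parallelism stops helping entirely.
budget = 1e22
for n in (10, 1_000, 100_000):
    per_experiment = budget / n
    print(n, per_experiment, research_output(n, per_experiment, taste=1.0))
```

The hard-cutoff floor and the 1e18 constant are stand-ins; the point is only that automated researchers multiply the experiment count, not the FLOP budget, which is why the 100-1000x compute question matters.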
Thoughts?
Inspired by this tweet: https://x.com/chrispainteryup/status/2020738025907712225?s=46
Meta has basically shut down FAIR after the Llama 4 fiasco, fired the lead and Yann, and is starting over with a new lab called Meta Superintelligence Labs. The team Alexander Wang has assembled only started working together around the summer.
I plan to brief and lobby a presidential candidate in my country, who has a high-single-digit probability of getting elected, about AI safety. What books or essays do you think I should read as a last-ditch sprint to get ready for the task?
New type of LLM psychosis: “LLM type 2 bipolarity”
In December I had very high levels of anxiety (started taking meds, difficulty sleeping) as part of the Claude Code moment. Sort of a depression.
Now I am having difficulty sleeping because I want to keep programming with ClawdBot, I'm eating badly, and I unilaterally stopped taking the anxiety medication. Sort of a mania.
Let's see if this bipolar disorder continues, and see what my doctor thinks.
Why do you think Anthropic has low odds of succeeding?
“We are updating Grok’s constitution. We think it should be able to play many characters, like the waifu anime girl Ani, the truth-seeking AI assistant Grok, and the unhinged version MechaHitler.”
What annoys me about alignment research is that it could be used to create evil models just by trivially sticking a minus sign in front of it.
I have less confidence than most here in how much automated AI researchers will accelerate the takeoff. The difficulty of improving time horizons seems like a big data-and-compute problem. And it is also not clear how much you'll be able to iterate on other vectors.
But! I am relatively more confident that, for a given level of intelligence, automated AI researchers will be brutal at optimizing the cost to run it. We should expect an accelerating decline in the price of intelligence. See NanoGPT and PostTrainBench.
This could also be a way the bubble pops. At some point, running more expensive models stops making economic sense for a given task.
Imagine if by Summer 2028 you can run a Claude Opus 6-level model on your own Mac. Will Claude Opus 7 be that much better if you're not doing advanced research? What about Summer 2029?
No wonder we are hearing about the possibility of a 2026 Anthropic IPO.
Of course being right is better than being wrong. Ideally he would know the exact date of the arrival of the Superintelligence and organize his finances around it.
But it seems to me that he has the best shot of creating the AI god with his current process.
Altman doesn’t own equity in OpenAI, and he’s doing it for the glory. He genuinely believes he might give birth to the AI god. Why should he do anything different, from his vantage point?
I am fairly confident that GPT-5.1, which I believe is a checkpoint descended from GPT-4o, has more than 60% of its training FLOPs in post-training.
If OpenAI created another GPT-4-scale pretrain, they'd do the post-training all over again.
Of course they'll do it. Just not that often; likely once a year or so.
Many, many things in the world are explained by society having an unconscious, visceral forecast of AGI timelines in the late 2020s. Many actors' behaviors make more sense if you assume they have late-2020s timelines.
Why the end of neoliberalism and the post-war world? Late-2020s timelines. They understand that having raw power will be relatively more important than having non-zero-sum relationships with your allies and trading partners.
Why the 2nd Cold War? Late 2020s timelines. U.S. and China know that being ahead during the late 2020s probably means being ahead forever.
Why did Russia invade Ukraine? Late 2020s timelines. Russia knows they couldn’t do it later.
Why isn't Gen Z obsessed with work, while being obsessed with health and positional goods? Late-2020s timelines. They want to be healthy to await the arrival of the cure for aging, and they know they won't have a long career.
Why has politics become more raw and polarized, with each side thinking losing is the end of the world? They want their side in power during the late 2020s.
Why long-term interest rates bottomed the same month GPT-3 launched? Late 2020s timelines (ok, this is a joke)
Why the rise of crypto, gambling culture, and lottery-ticket-style payoffs (e.g., creators, OnlyFans, hustlers)? Late-2020s timelines.
Why are governments running big fiscal deficits with no end in sight? Late-2020s timelines.
This is a bit “wtf happened in 1971?” I mean, the Intel 4004, the first microprocessor, was launched in 1971. And I think what explains this is that it's all part of the same trend of Moore's Law pushing for automation; we're just getting to the part of the exponential where the numbers get insane.
Think about the theory of the firm: the firm is the largest portion of the economy that is better run through an authoritarian central-planning regime than by using market prices to orient and organize production.
What we have seen with informatics over the past few decades is exactly that: bigger is getting better. For years now the small-cap factor no longer works. J.P. Morgan Chase & Co. is the world's largest bank and it's outperforming the industry. Amazon is capable of coordinating more than 1.5 million full-time employees worldwide.
As AGI accelerates this trend, there's no reason to imagine we won't see further consolidation. Yeah, sure, some people like Pepsi and other people like Coca-Cola. But there likely won't be 2,000 different soda brands, each needing individual oversight by humans.
If you can organize more of production through central planning via informatics and AGI, I don't know that there will be much work left for humans to do.
And obviously, people on LW are überbulls on ASI. The view is that it'll get millions of times smarter than humans, however you define that.
I think we should just assume that Claude Code has already been attacked from multiple fronts.
Someone posted on X’s fintwit:
Claude Code virality is first big AI hit which inspires lots of short ideas but almost no longs. Who benefits besides anthropic or ai infra trade that’s already max long?
To me, this is more evidence of dystopia. Maybe I am more optimistic than the average LessWronger who believes in the apocalypse, but this updates me towards a very weird future.
This hints at a future where AI is extremely deflationary.
My take is that because intelligence will no longer be what defines us as humans, reconnecting with our bodies is a nice way to find purpose.
What would your dad do in a letter of last resort? Would he bomb the United Kingdom's enemies in revenge?