China does not have access to the computational resources[1] (compute, here specifically data centre-grade GPUs) needed for large-scale training runs of large language models.
While it’s true that Chinese semiconductor fabs are a decade behind TSMC (and will probably remain so for some time), that doesn’t seem to have stopped them from placing 162 systems on the TOP500 list of the world’s fastest supercomputers.
A supercomputer’s aggregate performance is the product of two inputs, chip quality and chip quantity, and China seems more than willing to make up in quantity what it lacks in quality.
The CCP is not interested in reaching AGI by scaling LLMs.
For a country that is “not interested” in scaling LLMs, they sure do seem to do a lot of research into large language models.
It’s also worth noting that China currently has the best open-source text-to-video model, has trained a state-of-the-art text-to-image model, was the first to ship AI in a mass consumer product, and is miles ahead of the West in facial recognition.
I suspect that “China is not racing for AGI” will end up in the same historical bin as “Russia has no plans to invade Ukraine”: a claim that requires us to take China’s stated preferences at face value while completely ignoring its revealed ones.
I do agree that if the US and China were both racing, the US would handily win the race given current conditions. But if the US stops racing, there’s absolutely no reason to think the Chinese response would be anything other than “thanks, we’ll go ahead without you”.
--edit--
If a Chinese developer ever releases an LLM that is so powerful it inevitably oversteps censorship rules at some point, the Chinese government will block it and crack down on the company that released it.
This is a bit of a weird take to have if you are worried about AGI Doom. If your belief is “people will inevitably notice that powerful systems are misaligned and refuse to deploy them”, why are you worried about Doom in the first place?
Is the claim that China, thanks to its all-powerful bureaucracy, is somehow less prone to alignment failures and sharp left turns than reckless American corporations? If so, I suggest you consider the possibility that Xi Jinping isn’t all that smart.
Can we crank this in reverse: given a utility function, can we design a market whose representative agent has that utility function?
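To pin down terms, the forward direction (market to representative agent) has a standard construction in economics: under efficient allocation of a total endowment $c$ with welfare weights $\lambda_i$ (the equal-weights case is one common assumption; in equilibrium the weights depend on initial endowments), the representative agent’s utility is the weighted sup-convolution of the members’ utilities:

$$U(c) \;=\; \max_{\sum_i c_i \,=\, c} \;\sum_i \lambda_i\, u_i(c_i).$$

The question above then asks for the inverse of this map: given $U$, recover some family $\{u_i, \lambda_i\}$.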
Trivially, we could take a market consisting of a single agent with the desired utility function. But I wonder whether there’s some procedure by which we can “peel off” sub-agents within a market and end up with a market composed of the simplest possible sub-agents, for some metric of complexity.
Either there is some irreducible complexity there, or perhaps there is a universality theorem showing that any utility function can be expressed by a market of agents who each carry only some extremely simple finite state, much as any computation can be expressed by a Turing machine.
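As a toy sketch of the forward direction (my own framing, not anything established in this thread): if we take the “representative agent” of a market to mean the value of efficiently splitting a total endowment among its members with equal welfare weights, then even very simple piecewise-linear member utilities compose into a representative utility with more kinks than any individual member has. The specific utilities below are illustrative assumptions.

```python
import numpy as np

# Sketch, under the assumption that the representative-agent utility of a
# market is the sup-convolution of its members' utilities:
#   U(c) = max_{c_1 + ... + c_n = c}  sum_i u_i(c_i)
# computed here on a uniform grid of endowments starting at 0.

def rep_utility(utilities, c_grid):
    """Representative-agent utility of a market, evaluated on c_grid."""
    U = utilities[0](c_grid)
    for u in utilities[1:]:
        v = u(c_grid)
        n = len(c_grid)
        new = np.empty(n)
        for k in range(n):            # k indexes the total endowment
            j = np.arange(k + 1)      # all grid splits between the two sides
            new[k] = np.max(U[j] + v[k - j])
        U = new
    return U

# Two simple agents with concave piecewise-linear utilities (toy choices):
u1 = lambda c: np.minimum.reduce([2.0 * c, c + 1.0, np.full_like(c, 3.0)])
u2 = lambda c: np.minimum(1.5 * c, 0.5 * c + 2.0)

grid = np.linspace(0.0, 6.0, 601)
U = rep_utility([u1, u2], grid)
# Each marginal unit of endowment goes to whichever agent values it most,
# so U is piecewise-linear with slopes 2, 1.5, 1, 0.5: three kinks, more
# than either member alone. e.g. U(3) = u1(1) + u2(2) = 2 + 3 = 5.
```

The open question is whether this kind of composition can be run backwards: decomposing an arbitrary target utility into a market of maximally simple members.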