I’m not convinced that these were bad predictions for the most part.
The main prediction rests on two premises: 1) China lacks compute, and 2) the CCP values stability and control → so China will not be the first to build unsafe AI/AGI.
Both of these premises are unambiguously true, as far as I’m aware. So, if these were bad predictions, we must now believe that China is likely to build AGI before the USA, with minimal compute, without realizing it threatens stability and control, all while refusing to agree to any sort of deal to slow down? Why? That seems unlikely.
American companies, on the other hand, are still explicitly racing toward AGI, are incredibly well resourced, have strong government support, and have a penchant for disruption. The current administration also cares less about stability than any other in recent history.
So, from my perspective, the USA’s race to AGI looks even more dangerous than before, almost desperate, whereas China is fast-following, which I think everyone expected. Did anyone suggest that China would not be able to fast-follow American AI?
The argument has historically been that existential risk from AI comes from some combination of a) SOTA models and b) open source.
China is now publishing SOTA open-source models. Oh, and they’ve found a way to optimize around their lack of GPUs.
Are you sure you aren’t under the influence of cognitive dissonance/selective memory?
I think the LW consensus has been that the main existential risk is AI development in general. The only viable long-term option is to shut it all down, or at least slow it down as much as possible until we can come up with better solutions. DeepSeek, from my perspective, should incentivize slowing down development (both because of the fast-follower dynamic and because it reduces profit margins generally), and I believe it has.
Anyway, I don’t see how this relates to these predictions. The predictions are about China’s interest in racing to AGI. Do you believe China would now rather have an AGI race with the USA than agree to a pause?
“DeepSeek, from my perspective, should incentivize slowing down development (both because of the fast-follower dynamic and because it reduces profit margins generally), and I believe it has.”
Any evidence that DeepSeek has marginally slowed AI development?
And the response to ‘shut it down’ has always been, ‘What about China, or India, or the UAE, or Europe?’, to which the reply was that they want to pause because of XYZ.
Well, you now have proof, not speculation, that they are not pausing. They don’t find your arguments persuasive. What to do?!
Which is why the original post was about updating, something you don’t seem very interested in doing. Which is irrational. So is this forum about rationality or about AI risk? I would think the latter flows from the former, but I don’t see much evidence of the former.