I just don’t buy that these agencies are dismissive without good reason
On what possible publicly-unavailable evidence could they have updated in order to correctly attain such a high degree of dismissiveness?
I can think of three types of evidence:
Strong theoretical reasons.
E.g., some sort of classified, highly advanced, highly empirically supported theory of deep learning/intelligence/agency, such that you can run a bunch of precise experiments, or do a bunch of math derivations, and definitively conclude that DL/LLMs don’t scale to AGI.
Empirical tests.
E.g., perhaps the deep state secretly has 100x the compute of the AGI labs, and they already ran the pretraining game out to GPT-6 and were disappointed by the results.
Overriding expert opinions.
E.g., a large number of world-class, best-of-the-best AI scientists with an impeccable track record firmly and unanimously saying that LLMs don’t scale to AGI. This requires either a “shadow industry” of AI experts working for the government, or the publicly visible AI experts being on the deep state’s payroll and lying in public about their uncertainty.
I mean, I guess it’s possible that what we see of the AI industry is just the tip of the iceberg and the government has classified research projects that are a decade ahead of the public state of knowledge. But I find this rather unlikely.
And unless we do postulate that, I don’t see any valid pathway by which they could’ve attained high certainty that the current paradigm won’t work out.
They’ve explored remote viewing and other ideas that are almost certainly bullshit
There are two ways we can update on it:
The fact that they investigated psychic phenomena means they’re willing to explore a wide variety of ambitious ideas, regardless of their weirdness – and therefore we should expect them not to dismiss the AGI Risk out of hand.
The fact that they investigated psychic phenomena means they have a pretty bad grip on reality – and therefore we should not expect them to get the AGI Risk right.
I never looked into it enough to know which interpretation is the correct one. Expecting less competence rather than more is usually a good rule of thumb, though.
it sure seems like progress at OpenAI is slowing down rather than speeding up
To be clear, I personally very much agree with that. But:
at least from this article, it seems like Ilya Sutskever was running out of confidence that OpenAI would reach AGI by mid 2023
I find that I’m not inclined to take Sutskever’s current claims about this at face value. He’s raising money for his own venture, so he has a vested interest in pushing the agenda that the LLM paradigm is a dead end and that his way is the only way. The same way it became advantageous for him to talk about the data wall once he was no longer at the unlimited-compute company.
Again, I do believe both in LLMs being a dead end and in the data wall. But I don’t trust Sutskever to be a clean source of information regarding that, so I’m not inclined to update on his claims to that end.
Those are good points. The last thing I’ll say drastically reduces the amount of competence the government would need in order to be dismissive while still being rational: the leading AI labs may already be fairly confident that current deep-learning techniques won’t get to AGI in the near future, and the security agencies simply know this as well.
That would make sense. But I doubt all the AGI companies are that good at information security and deception. This would require all of {OpenAI, Anthropic, DeepMind, Meta, xAI} to settle on the same deceptive narrative and then keep up the charade, which would mean both sending the right public messages and coordinating their research publications so that the paradigm-damning ones never become public.
In addition, how do we explain people who quit AGI companies yet still hold short timelines?
I guess I would respond to the first point by saying that all of the companies you mentioned have an incentive to say they are closing in on AGI even if they aren’t. It doesn’t take much sophistication to say “we’re close to AGI” when you’re not. Mark Zuckerberg said that AI would be at the level of a junior SWE this year, and Meta proceeded to release Llama 4. Unless the prognosticators at Meta seriously fucked up, the most likely scenario is that Zuckerberg made that comment knowing it was bullshit. And the sharing of research slowed down a lot in 2023, which gave companies cover not to release unflattering results.
And to your last point, it seems reasonable that companies could pressure former employees to act as if they believe AGI is imminent. And some researchers may be emotionally invested in believing that what they worked on is what will lead to superintelligence.
And my question for you is: if DeepMind had solid evidence that AGI would be here in 1 year, and if the security agencies had access to DeepMind’s evidence and reasoning, do you believe they would still do nothing?