So if I combine commentary from both sides regarding Clark versus Sacks: under Biden, AI policy was controlled by EA “shills” and “lawyers”; and under Trump 2.0, it’s “Thiel business associates and a16z/Scale AI executives”.
I wonder if, or when, this coalition to deregulate AI will find an issue scary enough that it divides over the need to regulate after all. In his musings, even Thiel occasionally concedes that AI has genuine potential to go badly for the human race.
It’s also interesting to hear about Anthropic’s actual business model. Sometimes they seem more like a big think tank, like OpenAI before 2023…
Let me review my impressions of power relations among the big four of American AI: OpenAI is the actual leader. Musk snipes at Altman because he wants xAI to be the leader. And Anthropic, apparently, is viewed by the current political regime, with its focus on deregulation, as a Trojan horse for the return of the regulators of the previous regime.
And what about Google? This is less clear to me. As one of the goliaths of the previous era of tech, Google is not fighting to prove its very existence in the marketplace, the way that the others are. I also wonder if DeepMind being in a different jurisdiction (UK rather than US) gives it a slight distance from these other power struggles.
Meanwhile, let’s not forget Ilya Sutskever’s stealth project, Safe Superintelligence. If there is a deep-state-connected AGI-conscious Manhattan Project anywhere, quietly poaching researchers, surely this is it… I wonder what prospect there is that such an organization could do training runs for frontier-level AI without anyone knowing about it. Would it have to be done in an NSA data center?
Then there are the economic worries that the “AI bubble” could burst. Currently I’m just agnostic about how likely that is. If it did burst, I expect there would be some kind of culling of organizations, but research would continue at places where they don’t have to worry about market share, and meanwhile the code and weights of AIs from defunct organizations would be bought up by someone. They’re not going to just be deleted or put on the dark web.
I still wish for a clearer understanding of how things are in China. They have a much more coordinated regulatory regime, but they are also the overwhelming leaders in open-weight AI (RIP, Meta’s Llama strategy, I guess), and they have the ambition to dominate the global market.
edit I’m not sure why this is being so decisively downvoted. It’s a bit of a stream of consciousness, but I don’t think that’s it. Maybe it’s the characterisation of who the AI policymakers were under Biden? The references to “EA shills” and “EA lawyers” are quotes from the X thread following Seán Ó hÉigeartaigh’s tweet that Thiel/a16z/ScaleAI are in charge now.