People are noticing Dario’s China hawkishness and wondering where it comes from. It comes down to two things: the obvious reason reflects poorly on Dario; the non-obvious one gives him an out.
The obvious: Dario’s company benefits from US technological superiority. Closed frontier models are the moat. Giving chips to China cuts into it.
The non-obvious: Dario is still stuck in the old antagonisms of the 20th century, the Cold War scenario. And if the nature of AI weren’t what it is, he’d be right.
Dario hints at it only barely, and obviously hasn’t done the deep thought necessary to understand the repercussions of what he said. He said that, as in the transition from feudalism to capitalism, the roles we have in society will change.
We’re about to go through the same kind of transition. The concepts of authoritarianism and liberal democracy are about to break, because when intelligence flows between individuals in a post-scarcity intelligence environment, identity is no longer constrained to a human body.
What does authoritarianism mean when neither an authoritarian nor a citizen is a coherent concept? When cheap intelligence and BCIs make one’s identity so diffuse that it can’t be confined to a single skull?
Ironically, the Chinese strategy of open-sourcing models is more likely to make that kind of diffusion possible, while closed-source vertical integration, like OpenAI’s purchase of Cerebras, is going to create the 21st century’s equivalent of authoritarianism, much as recommender systems created parasitic and monolithic attention systems.
So here’s Dario’s out: if he understood this, he’d find a way to open-source models immediately, because he stated his primary concern multiple times throughout the interview: diffusion of the benefits of AI capabilities.
For starters, it could mean that the authoritarian’s brain contents overwrite the citizens’ brain contents. What happens next is… irrelevant for the current citizens.
It could also mean that the citizens’ brains are scanned, and parts of them are erased (and replaced by a copy of someone else’s brain) whenever the AI police detects something suspicious.
Those are great. They remind me a lot of the Focused in A Deepness in the Sky. So what kind of extension would we want between people’s minds? Authoritarian homogeneity seems like a state of the world we’d want to avoid; it would create a fragile system globally vulnerable to certain memetics. Another failure mode would be conformity of thought, where populations are similarly vulnerable, but to a more horizontally distributed zeitgeist rather than one imposed by hierarchy.
What I still want to keep in focus is that this does still break the concept of an authoritarian, but maybe it makes the failure mode more “pure”? Agents in this case become a conglomeration of brains acting as a single mind, and the effects on bodies could be just as grave, but without physical force.
I would start with “only voluntary”.
But of course there are other risks, such as people being scammed into providing consent, things like cults, mass hysteria, etc.
(I don’t have much opinion on this. Doesn’t seem sufficiently important to me now.)
Why doesn’t it seem sufficiently important to you? It seems to me that this is the first frontier of AI consequences that are obvious and talked about, yet invisible in the sense that they’re the water in which we’re submerged, so we assume we can’t do anything about them. Recommender systems are misaligned AI, and have been for decades. This is obvious from the documented effects on depression, anxiety, and political polarization (Stuart Russell discusses the latter: recommender systems radicalize because it’s easier to predict and control the attention of someone who is radicalized). This post, https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai, demonstrates the first rumblings of the next wave of similar consequences. Addressing the harms of recommender systems is training wheels for being prepared for the next wave of persuasive AI. And thinking about how these things extend identity and consciousness, the way McLuhan claimed electric media does for civilization, would give us insight into how to engineer resilience.
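The radicalization dynamic Russell describes can be sketched as a toy feedback loop. This is purely illustrative: the engagement function, the extremity bonus, and the drift rate are invented for the sketch, not drawn from any real recommender system.

```python
def simulate_radicalization(steps=2000, drift=0.05):
    """Toy model: a user's preference lives on [0, 1] (0 = moderate, 1 = extreme).

    Predicted engagement rewards content close to the user's current
    preference, plus a small bonus for extremity (a stand-in for the claim
    that radicalized attention is easier to capture and predict). Each step,
    the recommender greedily serves the highest-engagement item, and the
    user's preference drifts slightly toward what was served.
    """
    pref = 0.1                               # user starts near "moderate"
    catalog = [i / 20 for i in range(21)]    # content positions on [0, 1]
    for _ in range(steps):
        # Greedy pick: proximity term minus distance, plus extremity bonus.
        served = max(catalog, key=lambda c: -(c - pref) ** 2 + 0.3 * c)
        pref += drift * (served - pref)      # slow shift toward served content
    return pref  # drifts from 0.1 toward 1.0 as steps grow
```

The point of the sketch: no single recommendation is wildly extreme — each served item is only slightly more extreme than the user's current taste — yet the preference ratchets toward the extreme anyway, because the misalignment lives in the objective (predicted engagement), not in any individual step.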