What does authoritarianism mean when neither an authoritarian nor a citizen is a coherent concept? When cheap intelligence and BCI make one’s identity so diffuse that it can’t be confined to a single skull?
For starters, it could mean that the authoritarian’s brain contents overwrite the citizens’ brain contents. What happens next is… irrelevant for the current citizens.
It could also mean that the citizens’ brains are scanned, and parts of them are erased (and replaced by a copy of someone else’s brain) whenever the AI police detects something suspicious.
Those are great. Reminds me a lot of the Focused in A Deepness in the Sky. So what kind of extension would we want between people’s minds? Authoritarian homogeneity seems like a state of the world we’d want to avoid; it would create a fragile system that is globally vulnerable to certain memetics. Another failure mode would be conformity in thought, where populations are similarly vulnerable, but via a horizontally distributed zeitgeist rather than one imposed by hierarchy.
What I still want to keep in focus is that this does still break the concept of an authoritarian, but maybe makes the failure mode more “pure”? Agents in this case become a conglomeration of brains acting as a single mind, and its effects on bodies could be just as grave, but without physical force.
I would start with “only voluntary”.
But of course there are other risks, such as people being scammed into providing consent, things like cults, mass hysteria, etc.
(I don’t have much opinion on this. Doesn’t seem sufficiently important to me now.)
Why doesn’t it seem sufficiently important to you? It seems to me like this is the first frontier of AI consequences that are obvious and talked about, yet invisible in the sense that they’re the water in which we’re submerged, so we assume we can’t do anything about them. Recommender systems are misaligned AI, and have been for decades. This is obvious from the documented effects on depression, anxiety, and political polarization (Stuart Russell talks about the latter: recommender systems radicalize because it’s easier to predict and control the attention of someone who is radicalized). This post https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai demonstrates the first rumblings of the next wave of similar consequences. Addressing the harms of recommender systems is training wheels for being prepared for the next wave of persuasive AI. And thinking about how these things extend identity and consciousness, in the way McLuhan claimed electric media does for civilization, would give us insight into how to engineer resilience.