Those are great. Reminds me a lot of the Focused in A Deepness in the Sky. So what kind of extension would we want between people’s minds? Authoritarian homogeneity seems like a state of the world we’d want to avoid; it would create a fragile system that was globally vulnerable to certain memetics. Another failure mode would be conformity in thought, where populations are similarly vulnerable, but to a more horizontally distributed zeitgeist rather than to something imposed by hierarchy.
What I still want to keep in focus is that this does still break the concept of an authoritarian, but maybe it makes the failure mode more “pure”? Agents in this case become a conglomeration of brains acting as a single mind, and its effects on the body could be just as grave, but without physical force.
I would start with “only voluntary”.
But of course there are other risks, such as people being scammed into providing consent, things like cults, mass hysteria, etc.
(I don’t have much opinion on this. Doesn’t seem sufficiently important to me now.)
Why doesn’t it seem sufficiently important to you? To me this is the first frontier of AI consequences that are obvious and talked about, yet invisible in the sense that they’re the water in which we’re submerged, so we assume we can’t do anything about them. Recommender systems are misaligned AI, and have been for decades. This is obvious from the documented effects on depression, anxiety, and political polarization (Stuart Russell discusses the latter: recommender systems radicalize because it’s easier to predict and control the attention of someone who is radicalized). This post https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai demonstrates the first rumblings of the next wave of similar consequences. Addressing the harms of recommender systems is training wheels for being prepared for the next wave of persuasive AI. And thinking about how these things extend identity and consciousness, the way McLuhan claimed electric media does for civilization, would give us insight into how to engineer resilience.
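The radicalization mechanism can be shown with a deliberately toy model (everything here is my own assumption, not Russell’s formalization): suppose a user’s opinion sits on a [-1, 1] axis, click probability is bilinear in user and item (extreme users shown same-side extreme content click near-certainly; centrists are a coin flip on everything), and opinions drift toward consumed content. Then a purely greedy engagement maximizer drags the user to an extreme, because that is where clicks are both most likely and most predictable:

```python
def click_prob(user, item):
    # Toy assumption: alignment drives clicks. An extreme user shown
    # same-side extreme content clicks almost surely (p -> 1), while a
    # centrist user is a coin flip on everything (p -> 0.5).
    return 0.5 + 0.5 * user * item

def recommend(user, catalog):
    # Greedy engagement maximization: serve whatever maximizes click probability.
    return max(catalog, key=lambda item: click_prob(user, item))

catalog = [i / 10 for i in range(-10, 11)]  # content from -1.0 to +1.0
user = 0.1                                  # a mildly opinionated user
for _ in range(60):
    item = recommend(user, catalog)
    user += 0.1 * (item - user)             # opinion drifts toward consumed content

# The user ends near +1.0: the greedy policy keeps serving the most
# extreme same-side item, because that is where clicks are most probable
# and most predictable (click_prob far from the 0.5 coin flip).
```

The hypothetical bilinear `click_prob` and linear drift are doing all the work here; the point is only that no intent to radicalize is needed anywhere in the loop, just myopic optimization of a predictability-correlated engagement signal.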