I’m pretty confused by the conclusion of this post. I was nodding along during the first half of the essay: I myself worry a lot about how I and others will navigate the dilemma of exposure to AI super-persuasion and addiction on one side, and paranoid isolationism on the other.
But then in the conclusion of the post, you only talk about one of these two traps: isolationist religious communes locking their members in until the end of time.
I worry more about the other trap: people foolishly exposing themselves to too much AI-generated super-stimulus and getting their brains fried. I think many more people will be exposed to various kinds of addictive AI-generated content than will belong to religious communities strong enough to create an isolationist bubble.
I think it’s plausible that the people who expose themselves to all the addictive stuff on the AI-internet will also sooner or later get captured by some isolationist bubble that keeps them locked away from the other competing memes: arguably that’s the only stable point. But I worry that these stable points will be worse than the Christian co-ops you describe.
I imagine an immortal man in the year 3000, sitting at his computer, not having left his house or talked to a human in almost a thousand years, chatting with his GPT-5.5-based AI girlfriend and scrolling his personalized Twitter feed, full of AI-generated outrage stories rehashing the culture-war fights of his youth. Outside his window, there is a giant billboard advertising: “Come on, even if you want to fritter your life away, at least use our better products! At least upgrade your girlfriend to GPT-6!” But his AI girlfriend told him to shutter his window a thousand years ago, so the billboard is of no avail.
This is of course a somewhat exaggerated picture, but I really do believe that one-person isolation bubbles will be more common and more dystopian than the communal isolationism you describe.
I think both are big problems. Maybe I should have been clearer about the symmetry here: the thesis I care about is pretty symmetrical between the two.
To the extent that these things are problems, they are both problems today. There are insular Amish communities that shut out as much modern culture as they can, and hikikomori living alone with their body pillows.
AI may exacerbate the existing issues, but on the whole I don’t feel like the world is drastically worsened by the presence of these groups.
I disagree that this isn’t concerning. For one thing, these bubbles typically aren’t good for the people inside of them. For another, we can ignore them only because they’re a tiny portion of the population. ASI could increase the prevalence to most of the population, at which point politics (and perhaps other systems) goes off the rails.
I agree—AI-generated superstimuli are much more of a concern than groups that might try to isolate themselves from it. IMO such groups are not just less of a concern, but good and even necessary, even if their values may seem backwards to us. They serve as the “control group” for the rest of society in times of such unpredictable cultural change.
It’s very possible that the rest of society could be severely damaged or even wiped out by a highly contagious AI-generated meme, and only these isolated groups would be able to survive. They’re a bit like the maths described in Neal Stephenson’s novel Anathem.