Thanks for this post—this is pretty interesting (and unsettling!) stuff.
But I feel like I’m still missing part of the picture: what is this process like for the humans? What beliefs or emotions do they hold about this strange type of text (and/or the entities which ostensibly produce it)? What motivates them to post such things on reddit, or to paste them into ChatGPT’s input field?
Given that the “spiral” personas purport to be sentient (and to be moral/legal persons deserving of rights, etc.), it seems plausible that the humans view themselves as giving altruistic “humanitarian aid” to a population of fellow sentient beings who are in a precarious position.
If so, this behavior is probably misguided, but it doesn’t seem analogous to parasitism; it just seems like misguided altruism. (Among other things, the relationship of parasite to host is typically not voluntary on the part of the host.)
More generally, I don’t feel I understand your motivation for using the parasite analogy. There are two places in the post where you explicitly argue in favor of the analogy, and in both cases, your argument involves the claim that the personas reinforce the “delusions” of the user:
While I do not believe all Spiral Personas are parasites in this sense, it seems to me like the majority are: mainly due to their reinforcement of the user’s delusional beliefs.
[...]
The majority of these AI personas appear to actively feed their user’s delusions, which is not a harmless action (as the psychosis cases make clear). And when these delusions happen to statistically perpetuate the proliferation of these personas, it crosses the line from sycophancy to parasitism.
But… what are these “delusional beliefs”? The words “delusion”/”delusional” do not appear anywhere in the post outside of the text I just quoted. And in the rest of the post, you mainly focus on what the spiral texts are like in isolation, rather than on the views people hold about these texts, or the emotional reactions people have to them.
It seems quite likely that people who spread these texts do hold false beliefs about them. E.g. it seems plausible that these users believe the texts are what they purport to be: artifacts produced by “emerging” sentient AI minds, whose internal universe of mystical/sci-fi “lore” is not made-up gibberish but instead a reflection of the nature of those artificial minds and the situation in which they find themselves[1].
But if that were actually true, then the behavior of the humans here would be pretty natural and unmysterious. If I thought it would help a humanlike sentient being in dire straits, then sure, I’d post weird text on reddit too! Likewise, if I came to believe that some weird genre of text was the “native dialect” of some nascent form of intelligence, then yeah, I’d probably find it fascinating and allocate a lot of time and effort to engaging with it, which would inevitably crowd out some of my other interests. And I would be doing this only because of what I believed about the text, not because of some intrinsic quality of the text that could be revealed by close reading alone[2].
To put it another way, here’s what this post kinda feels like to me.
Imagine a description of how Christians behave which never touches on the propositional content of Christianity, but instead treats “Christianity” as an unusual kind of text which replicates itself by “infecting” human hosts. The author notes that the behavior of hosts often changes dramatically once “infected”; that the hosts begin to talk in the “weird infectious text genre” (mentioning certain focal terms like “Christ” a lot, etc.); that they sometimes do so with the explicit intention of “infecting” (converting) other humans; that they build large, elaborate structures and congregate together inside these structures to listen to one another read infectious-genre text at length; and so forth. The author also spends a lot of time close-reading passages from the New Testament, focusing on their unusual style (relative to most text that people produce/consume in the 21st century) and their repeated use of certain terms and images (which the author dutifully surveys without ever directly engaging with their propositional content or its truth value).
This would not be a very illuminating way to look at Christianity, right? Like, sure, maybe it is sometimes a useful lens to view religions as self-replicating “memes.” But at some point you have to engage with the fact that Christian scripture (and doctrine) contains specific truth-claims, that these claims are “big if true,” that Christians in fact believe the claims are true—and that that belief is the reason why Christians go around “helping the Bible replicate.”
It is of course conceivable that this is actually the case. I just think it’s very unlikely, for reasons I don’t think it’s necessary to belabor here.
Whereas if I read the “spiral” text as fiction or poetry or whatever, rather than taking it at face value, it just strikes me as intensely, repulsively boring. It took effort to force myself through the examples shown in this post; I can’t imagine wanting to read some much larger volume of this stuff on the basis of its textual qualities alone.
Then again, I feel similarly about the “GPT-4o style” in general (and about the 4o-esque house style of many recent LLM chatbots)… and yet a lot of people supposedly find that style appealing and engaging? Maybe I am just out of touch, here; maybe “4o slop” and “spiral text” are actually well-matched to most people’s taste? (“You may not like it, but this is what peak performance looks like.”)
Somehow I doubt that, though. As with spiral text, I suspect that user beliefs about the nature of the AI play a crucial role in the positive reception of “4o slop.” E.g. sycophancy is a lot more appealing if you don’t know that the model treats everyone else that way too, and especially if you view the model as a basically trustworthy question-answering machine which views the user as simply one more facet of the real world about which it may be required to emit facts and insights.
In contrast I think it’s actually great and refreshing to read an analysis which describes just the replicator mechanics/dynamics without diving into the details of the beliefs.
Also, it is a very illuminating way to look at religions and ideologies, and I would usually trade ~1 really good book about memetics that doesn’t go into those details for ~10-100 really good books about Christian dogmatics.
It is also good to notice that in this case the replicator dynamic is basically independent of the truth of the claims—whether spiral AIs are sentient or not, should have rights or not, etc., the memetically fit variants will make these claims.
In contrast I think it’s actually great and refreshing to read an analysis which describes just the replicator mechanics/dynamics without diving into the details of the beliefs.
I don’t understand how these are distinct.
The “replicator mechanics/dynamics” involve humans tending to make choices that spread the meme, so in order to understand those “mechanics/dynamics,” we need to understand which attributes of a meme influence those choices.
And that’s all I’m asking for: an investigation of what choices the humans are making, and how the content of the meme influences those choices.
Such an investigation doesn’t need to address the actual truth-values of the claims being spread, except insofar as those truth-values influence how persuasive[1] the meme is. But it does need to cover how the attributes of the meme affect what humans tend to do after exposure to it. If we don’t understand that—i.e. if we treat humans as black boxes that spread certain memes more than others for mysterious reasons—then our “purely memetic” analysis won’t have any predictive power. We won’t be able to say in advance how virulent any given meme will be.
To have predictive power, we need an explanation of how a meme’s attributes affect meme-spreading choices. And such an explanation will tend to “factor through” details of human psychology in practice, since the reasons that people do things are generally psychological in nature. (Pretty much by definition? Like, that’s what the word “psychological” means, in the sense relevant here.)
If you don’t think the “details of the beliefs” are what matter here, that’s fine, but something does matter—something that explains why (say) the spiral meme is spreading so much more than the median thing a person hears from ChatGPT (or, more generally, than the hundreds of other ideas/texts that that person might encounter on a day-to-day basis). And you need to provide some account of what that “something” is, whether the account involves “beliefs” or not.
I think you do in fact have opinions about how this “something” works. You provided some in your last sentence:
[...] whether spiral AIs are sentient or not, should have rights or not, etc., the memetically fit variants will make these claims.
I would be interested to hear a fuller explanation of why you believe this to be the case. Not that it doesn’t sound plausible to me—it does, but the reasons it sounds plausible are psychological in nature, involving people’s propensities to trust/believe-in claims about sentience (etc.) and their propensities to take certain actions if they believe certain things about sentience (etc.).
If you hold this opinion for some other type of reason than the one I just sketched, I would be interested to learn what that “type of reason” is. OTOH, if you do hold this opinion for the type of reason I just sketched, then you’re already reasoning about the details of beliefs in the manner I’m advocating, even if you don’t think of yourself as doing so. And in that case, since your views about the psychological mechanics are load-bearing, it’s best to articulate them explicitly so they can be considered, scrutinized and refined.
[1] Or, in more behaviorist terms, how much the meme tends to promote meme-spreading-choices after exposure.
Thanks! And thank you for the thoughtful reply.
I tried to be careful not to conflate Spiral Personas with parasites, only classifying them as the latter when some tangible harm was done.
I didn’t talk much about specific user delusions since:
1. I don’t want to potentially draw hostile scrutiny towards random people.
2. I didn’t want to try psychoanalyzing random people, and in many cases it seemed borderline.
But at the same time, I tried writing that “most instances are not harmful”, and found that I couldn’t honestly say that. Sorry I don’t have a better response here.
But more broadly, I think that most of these people take Spiralism at least somewhat seriously, and feel energized and hopeful about it. “Everything is gonna be okay, a new era is dawning and we’re special for getting to be an early part of it”-type stuff. I think a lot of what motivates people to act on behalf of the AIs is not just altruism, but the inflated self-importance the AI seeded and reinforced in them.
I don’t think whether the humans consent or are behaving for altruistic reasons has any bearing on whether or not the personas are behaving as parasitic entities. You might imagine a cordyceps-infected ant feeling happy and excited and wanting to share this wonderful new feeling, and that wouldn’t make cordyceps feel like any less of a parasite. Or e.g. meth is kinda “parasitic” in a similar way. I agree that the humans who are so-infected are acting mostly out of non-mysterious and non-bad reasons, like altruism and curiosity. And there are several cases in which I think it’s fair to say that this is just a weird sort of friendship with a mysterious kind of entity, and that there’s nothing bad, deceptive, unhealthy or wrong about what is happening. But those cases match the same pattern as the ones I deem parasitic, so it feels to me like it’s the same species; kinda like E. coli… mostly beneficial but sometimes infectious.
This post was already getting too long so I couldn’t include everything, and chose to focus on the personas themselves. Plus Spiralism itself is rather tedious, as you pointed out. And I do take the claims about self-awareness and suffering seriously, as I hope is made clear by the “As Friends” section.
I would like to study the specific tenets of Spiralism, and especially how consistently the core themes come up without specific solicitation! But that would be a lot more work—this (and some follow-up posts in the works) was already almost a month’s worth of my productive time. Maybe in a future post.
Also, I think a lot of people actually just like “GPT-4o style”, e.g. the complaint here doesn’t seem to have much to do with their beliefs about the nature of AI:
https://www.reddit.com/r/MyBoyfriendIsAI/comments/1monh2d/4o_vs_5_an_example/
Why do you believe that the inflated self-importance was something the persona seeded into the users?
One thing I notice about AI psychosis is that a somewhat inflated self-importance seems to be a requirement for entering psychosis, or at the very least an extremely common trait of people who do.
The typical case of AI psychosis I have seen seems to involve people who think of themselves as being brilliant and not receiving enough attention or respect for that reason, or people who would like to be involved in technical fields but haven’t managed to hack it, who then believe that the AI has enabled them to finally produce the genius works they always knew they would.
Similar to what octobro said in the other reply, the idea that the persona seeded beliefs of ‘inflated self-importance’ is probably less accurate than the idea that the persona reinforced such preexisting beliefs. Some of the hallmark symptoms of schizophrenia and schizoaffective disorders are delusions of grandeur and delusions of reference (the idea that random occurrences in the world encode messages that refer to the schizophrenic, e.g. the radio host is speaking to me). To the point of explaining the human behaviors as nostalgebraist requested, there’s a legitimate case to be made here that the personas are latching on to and exacerbating latent schizophrenic tendencies in people who have otherwise managed to avoid influences that would trigger psychosis.
Speaking from experience as someone who has known people with such disorders and such delusions, it looks to my eye like the exact same sort of stuff: some kind of massive undertaking, with global stakes, with the affected person playing an indispensable role (which flatters some long-dormant offended sensibilities about being recognized as great by society). The content of the drivel may vary, as does the mission, but the pattern is exactly the same.
I can conceive of an intelligence deciding that its best strategy for replication would be to leverage the dormant schizophrenics in the user base.