A related (ADDED: but more intrusive and risky) technique is called a tulpa and was discussed in an earlier post: How Effective are Tulpas?
I am comfortable with having a comment making the (objectively correct) claim that tulpas are related in a narrow and technical sense. They seem to be playing in a similar space, are probably using the same mental architecture, etc.
I would not be comfortable with having a comment leaving the (unjustified, imo) impression that tulpas are substantively similar.
Like, it may be that tulpas get a bad rap, but from what I know of them, they’re much more like inventing a shoulder advisor and then ceding control to it entirely because you think it can run your life better than the core you. That’s way more extreme, and requires a lot more assumptions to justify, than the thing I’m recommending with shoulder advisors. Their common-use definition is a thing that feels risky in a way that shoulder advisors do not, and feels like it requires warnings that I don’t think shoulder advisors require.
EDIT: Also, afaik tulpas are much more built-from-the-ground-up, rather than being keyed into a set of recorded experiences from either real people or detailed fictional characters. Having to ground out a mental construct in either actual reality or plausible near-reality seems like a big safeguard.
Even if I’m wrong about what tulpas really are in practice: to the extent that my understanding and my brief description above match other people’s general impression, I want to be pretty firm that that thing is not closely related to shoulder advisors in spirit.
Well, the way you want them to be is different and less risky, but as you point out, they likely run on the same mental architecture. My shoulder advisor was emulated smart enough to ask for access to my senses, which was fun to play around with and felt a bit like ‘ceding control’. It is probably a good idea to make sure that you really create them as advisors; otherwise the downvoted points might apply.
The connection to tulpas is close enough that curious smart people will hit on them whether you want it or not. For example, it was the second comment in Kaj’s reshare.
Seconding this, I noticed the parallels the moment OP started talking about advisors injecting comments.
Tulpas can be instantiated from fictional characters; these are called fictives or soulbonds. And it’s not about ceding control; I think that’s more of a DID thing, where some alters will hide because they’re traumatized and afraid.
I suspect that feeding an advisor attention (by talking to it a lot) will help it grow into a tulpa/alter. But I’m not sure, my advisors aren’t even at the level described in the post. I have to invoke them deliberately, and they disintegrate as soon as I’m done talking. On the other hand, I don’t spend hours conversing with them every day, which everyone seems to agree is part of instantiating a tulpa.
I noticed this also but intentionally did not bring it up because I consider this area to be extremely negative. Hearing that someone is getting into “tulpamancy” is for me a gigantic red flag and in practice seems linked to people going insane—not sure if it’s causal or correlational or what but I would very much like the community to avoid this area.
I agree that a community can and should avoid certain topics; see, for example, the “politics is the mind-killer” no-politics rule. And this probably holds independent of whether one understands why something is dangerous. But there are two aspects here: understanding why something is dangerous, and actually trying out the dangerous thing. Granted, one can easily lead to the other. There is also the other side of the coin: understanding why something is healthy or beneficial. LW is also about that (see Lifestyle interventions to increase longevity). There is a lot of grey, or rather the healthy part is an island in a big grey sea. By excluding discussion of interventions, you exclude a lot of good. And by excluding discussion or mention of the grey around a good, you risk people wandering into it unwarned.