Does anyone here have any serious information regarding Tulpas? When I first heard of them they immediately seemed like the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I’m a bit contrarian by nature). I don’t know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantage to having one, such as parallel focus, more “outside” self-analysis, etc. I don’t really know much of anything right now, which is why I’m asking if there’s been any decent research done already.
I had not, actually. The link you’ve given just links me to Google’s homepage, but I did just search LW for “Tulpa” and found it fine, so thanks regardless.
edit: The link’s original purpose now works for me. I’m not sure what the problem was before, but it’s gone now.
There’s tons of easily discovered information on the web about it.
I’m not sure the Tulpa crowd would agree with this, but I think a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The “learning process” and stuff seem pretty similar—the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.
Come to think of it, that’s probably a really good method for creating Tulpas quickly—building off a real or fictional character for whom you already have a relatively sophisticated mental model. It’s probably also important that you are predisposed to take seriously the notion that this thing might actually be an agent which interacts with you...which might be why God works so well, and why the Tulpa crowd keeps insisting that Tulpas are “real” in the sense that they carry moral weight. It’s an imagination-belief-driven phenomenon.
It might also illustrate some of the “dangers” - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain “sinful” things which they might not intellectually feel are bad.
I’ve also heard claims of people who gain extra abilities / parallel processing / “reminders” with Tulpas....basically, stuff that they couldn’t do on their own. I don’t really believe that this is possible, and if it were demonstrated to me I would need to update my model of the phenomenon. To the tulpa community’s credit, they seem willing to test the belief.
a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The “learning process” and stuff seem pretty similar—the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.
I’ve been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.
There are a myriad of very different things tulpas are described as, and thus “tulpas exist in the way people describe them” is not well defined.
There indisputably exist SOME specific interesting phenomena that are the referent of the word Tulpa.
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, late-stage Alzheimer’s patient, dolphin, or beloved family pet dog.
I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.
I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
It does not seem deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem to not be automatically unhealthy and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largeish dose of LSD. For this reason I have not and will not attempt making one.
I am too lazy to find citations or examples right now, but I probably could. I’ve tried to be a good rationalist and am fairly certain of most of these claims.
Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don’t let themselves make full use of their intelligence and/or judgement.
Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generating is done before credit is assigned to either the “self” or the “tulpa”.
What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don’t share all their host’s preconceptions. This is not exactly smarts though, or even general-purpose formal rationality.
One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as “smarter than me” and thus get credited with all the brain’s good ideas.
Disclaimer: this is based only on lots of anecdotes I’ve read, gut feeling, and basic stuff that should be common knowledge to any LWer.
I’m reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said “Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I… um… completely failed to notice?”
I could certainly describe that as having a “Mark” in my head who is smarter about tax-code-related designs than I am, and there’s nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But “Mark” in this case would just be pointing to a subset of “Dave”, just as “Dave’s fantasies about aliens” does.
See also ‘rubberducking’ and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an ‘adversary’ which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I’d expect that adversarial thinking triggers system II more compared to ‘normal’ self-centered thinking).
Yes. Indeed, I suspect I’ve told this story before on LW in just such a discussion.
I don’t necessarily buy your account—it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.
This is also related to the circumlocution strategy for dealing with aphasia.
Yeah, in that case presumably the tulpa would help—but not necessarily significantly more than a non-tulpa model that requires considerably less work and risk.
Basically, a tulpa can technically do almost anything you can… but you can do almost all of those things without a tulpa too, and for almost all of them there’s some much easier and at least as effective way to do the same thing.
Basically, a tulpa can technically do almost anything you can...
Mental processes like waking up without an alarm clock at a specific time aren’t easy. I know a bunch of people who have that skill, but it’s not like there’s a step-by-step manual that you can easily follow that gives you that ability.
A tulpa can do things like that. There are many mental processes that you can’t access directly but that a tulpa might be able to access.
I am surprised to know there isn’t such a step by step manual, suspect that you’re wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.
But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it’s less powerful and has a bunch of logistical and moral problems. I don’t like it, but I can’t think of any counter-arguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.
I am surprised to know there isn’t such a step by step manual, suspect that you’re wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.
My point isn’t so much that it’s impossible but that it isn’t easy.
Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.
Let’s say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn’t be worth the effort.
I’m not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.
It takes a lot of energy to create it the first time but afterwards you reap the benefits.
I don’t like it, but I can’t think of any counter-arguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.
Tulpa creation involves quite a lot of effort so it doesn’t seem the lazy road.
Mental processes like waking up without an alarm clock at a specific time aren’t easy. I know a bunch of people who have that skill, but it’s not like there’s a step-by-step manual that you can easily follow that gives you that ability.
I do not have “wake up at a specific time” ability, but I have trained myself to have “wake up within ~1.5 hours of the specific time” ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.
The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.
The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):
Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.
Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.
Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).
During a night’s sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.
Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I’m reconstructing from memory here. It’s probably possible to make this work in either order.)
You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or wake up a sleep cycle earlier than you usually do.
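The arithmetic behind the steps above is simple enough to sketch. This is just an illustration of the ~1.5-hour-cycle bookkeeping described here, not part of the original method; the function name, the 90-minute figure (from the text), and the 15-minute fall-asleep default are all assumptions or examples.

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # approximate sleep-cycle length used in the steps above

def wake_times(bedtime, fall_asleep_minutes=15, max_cycles=6):
    """Candidate natural wake-up times: one per completed ~90-minute sleep cycle."""
    sleep_start = bedtime + timedelta(minutes=fall_asleep_minutes)
    return [sleep_start + CYCLE * n for n in range(1, max_cycles + 1)]

# e.g. lights out at 23:00, ~15 minutes to fall asleep
for t in wake_times(datetime(2024, 1, 1, 23, 0)):
    print(t.strftime("%H:%M"))  # 00:45, 02:15, 03:45, 05:15, 06:45, 08:15
```

This also shows why a 3:30 AM waking and a 5 AM waking can feel alike, as described below: they can sit at the same point in the cycle.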
I then spent the rest of summer break with a biphasic “first/second sleep” rhythm, which disappeared once I was in school and had to wake up at specific times again.
To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I’ve had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they’re at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It’s a cool skill to have, but it has its downsides.
Well, if you consider that the tulpa doing it on its own
Well, let me put it this way: suppose my tulpa composes a sonnet (call that event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet down using my fingers (E3).
I would not consider any of those to be the tulpa doing something “on its own”, personally. (I don’t mean to raise the whole “independence” question again, as I understand you don’t consider that very important, but, well, you brought it up.)
But if I were willing to consider E1 an example of the tulpa doing something on its own (despite using my brain), I can’t imagine a justification for not considering E2 and E3 equally good examples of the tulpa doing something on its own (despite using my muscles).
But I infer that you would consider E1 (though not E2 or E3) the tulpa doing something on its own. Yes?
So, that’s interesting. Can you expand on your reasons for drawing that distinction?
I feel like I’m tangled up in a lot of words and would like to point out that I’m not an expert and don’t have a tulpa, I just got the basics from reading lots of anecdotes on reddit.
You are entirely right here—although I’d like to point out most tulpas wouldn’t be able to do E2 and E3, independent or not. Also, that something like “composing a sonnet” is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do, and tulpas are mainly just identities. But I could be wrong both about that and about what kind of activity sonnet composing is.
“composing a sonnet” is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do, and tulpas are mainly just identities
Interesting! OK, that’s not a distinction I’d previously understood you as making. So, what do identities do, as distinct from what brains can be directed to do? (In my own model, FWIW, brains construct identities in much the same way brains compose sonnets.)
I guess I basically think of identities as user accounts, in this case. I just grabbed the closest-fitting language dichotomy to “brain” (which IS referring to the physical brain), and trying to define it further now will just lead to overfitting, especially since it almost certainly varies far more than either of us expect (due to the typical mind fallacy) from brain to brain.
And yeah, brains construct identities the same way they construct sonnets. And just like music, it can be small (a jingle, a minor character in something you write) or big (a long symphony, a Tulpa). And identities only slightly more compose sonnets than sonnets create identities.
It’s all just mental content that can be composed, remixed, deleted, executed, etc. Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access, and this identity ends up WAY more developed and powerful than even the most ancient and powerful tulpas, but there is probably no or very little qualitative difference.
There are a lot of confounding factors. For example, something that I consider impossibly absurd seems to be the norm for most humans: considering their physical body as a part of “themselves” and feeling as if they are violated if their body is. Put in their perspective, it’s not surprising most people can’t disentangle parts of their own brain(s), mind(s), and identities without meditating for years until they get it shoved in their face via direct perception, and even then probably often get it wrong. Although I guess my illness has shoved it in my face just as anviliciously.
Disclaimer: I got tired of trying to put disclaimers on the dubious sources of each individual sentence, so just take it with a grain of salt, OK, and don’t assume I believe everything I say in any persistent way.
OK… I think I understand this. And I agree with much of it.
Some exceptions...
Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access,
I don’t think I understand what you mean by “root access” here. Can you give me some examples of things that an identity with root access can do, that an identity without root access cannot do?
something that I consider impossibly absurd seems to be the norm for most humans; considering their physical body as a part of “themselves”
This is admittedly a digression, but for my own part, treating my physical body as part of myself seems no more absurd or arbitrary to me than treating my memories of what I had for breakfast this morning as part of myself, or my memories of my mom, or my inability to juggle. It’s kind of absurd, yes, but all attachment to personal identity is kind of absurd. We do it anyway.
All of that said… well, let me put it this way: continuing the sonnet analogy, let’s say my brain writes a sonnet (S1) today and then writes a sonnet (S2) tomorrow. To my way of thinking, the value-add of S2 over and above S1 depends significantly on the overlap between them. If the only difference is that S2 corrects a misspelled word in S1, for example, I’m inclined to say that value(S1+S2) = value(S2) ~= value(S1).
For example, if S1 → S2 is an improvement, I’m happy to discard S1 if I can keep S2, but I’m almost as happy to discard S2 if I can keep S1 -- while I do have a preference for keeping S2 over keeping S1, it’s noise relative to my preference for keeping one of them over losing both.
I can imagine exceptions to the above, but they’re contrived.
So, the fix-a-misspelling case is one extreme, where the difference between S1 and S2 is very small. But as the difference increases, the value(S1+S2) = value(S2) ~= value(S1) equation becomes less and less acceptable. At the other extreme, I’m inclined to say that S2 is simply a separate sonnet, which was inspired by S1 but is distinct from it, and value(S1+S2) ~= value(S2) + value(S1).
And those extremes are really just two regions in a multidimensional space of sonnet-valuation.
Does that seem like a reasonable way to think about sonnets? (I don’t mean is it complete; of course there’s an enormous amount of necessary thinking about sonnets I’m not including here. I just mean have I said anything that strikes you as wrong?)
Does it seem like an equally reasonable way to think about identities?
Root access was probably too metaphorical a choice of words. Is “skeletal musculature privileges” clearer?
All those things like memories or skillsets you list as part of identity do seem weird, but even irrelevant software isn’t nearly as weird as specific hardware. I mean, seriously, attaching significance to specific atoms? Wut? But of course, I know it’s really me that’s weird, and most humans do it.
I agree with what you say about sonnets; it’s very well put, in fact. And yes, identities do follow the same rules. I’m trying to come up with fitting tulpa stuff for the metaphor, but it doesn’t really work because I don’t know enough about it.
This is getting a wee bit complicated, and I think we’re starting to reach the point where we have to dissolve the classifications and actually model things in detail on continuums, which means more conjecture and guesswork, less data, and what data we have being less relevant. We’ve been working mostly in metaphors that don’t really go this far without breaking down. Also, since we’re getting into more and more detail, the stuff we are examining is likely to be drowned out in the differences between brains, and the conversation turns into nonsense due to the typical mind fallacy.
As such, I am unwilling to widely spout what’s likely to end up at least half nonsense publicly. Contact me by PM if you’re really all that interested in getting my working model of identities and mental bestiary.
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, late-stage Alzheimer’s patient, dolphin, or beloved family pet dog.
Would you classify a novel in the same “moral-status” tier as these four examples?
No, that’s much, much lower. As in, “torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do” lower.
Assuming you mean either a physical book, or the simulation of the average minor character in the author’s mind, here. Main characters or RPing PCs can vary a lot in complexity of simulation from author to author, and there’s a theory that some become effectively tulpas.
Your answer clarifies what I was trying to get at with my question but wasn’t quite sure how to ask, thanks; my question was deeply muddled.
For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that’s about as far as I go.
I didn’t mean a physical copy of a novel, sorry that wasn’t clear.
Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.
But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.
I have no idea what “torture a novel” even means, it strikes me as a category error on a par with “paint German blue” or “burn last Tuesday”.
Ah. No, I think you’d change your mind if you spent a few hours talking to accounts that claim to be tulpas.
A newborn infant or Alzheimer’s patient is not an independent individual distinct from its caretaker either. Do you count their destruction as property crime as well? “Person”-ness is not binary; it’s not even a continuum. It’s a cluster of properties that usually correlate but in the case of tulpas do not. I recommend re-reading Diseased Thinking.
As for your category error:
/me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.
As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.
I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what’s happening.
I agree that person-ness is cluster of properties and not a binary.
I don’t believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.
I don’t think I’m failing to understand any of what’s discussed in Diseased Thinking. If there’s something in particular you think I’m failing to understand, I’d appreciate you pointing it out.
It’s possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It’s also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I’m relevantly different from them.
Of course, that doesn’t mean that reincarnation happens, nor that spirit-beings exist who can be channeled, or that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.
A newborn infant or Alzheimer’s patient is not an independent individual distinct from its caretaker either.
Eh?
I can take a newborn infant away from its caretaker and hand it to a different caretaker… or to no caretaker at all… or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.
That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.
Why do you say it isn’t?
Do you count their destruction as property crime as well?
I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.
Seems I was wrong about you not understanding the word thing. Apologies.
You keep saying that word, “independent”. I’m starting to think we might not disagree about any objective properties of tulpas—just whether things need to be “independent”, or whether only the most important ones count towards your utility, while I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are “10101101”; you’re saying “101” occurs 2 times, I’m saying “101” occurs 3 times.
I’m fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs I’d already have those beliefs. Conservation of Expected Evidence.
((You can move a tulpa between minds too, probably; it just requires a lot of high-tech, unethical surgery and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))
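The “10101101” metaphor a couple of comments up can be checked mechanically: counting non-overlapping occurrences of “101” gives 2, while allowing overlaps gives 3. A minimal sketch of the two counting conventions (the helper function is mine, added for illustration):

```python
def count_overlapping(s, sub):
    """Count occurrences of sub in s, allowing matches to overlap."""
    count, start = 0, 0
    while True:
        i = s.find(sub, start)
        if i == -1:
            return count
        count += 1
        start = i + 1  # advance only one character so overlapping matches are found

s = "10101101"
print(s.count("101"))               # str.count is non-overlapping: prints 2
print(count_overlapping(s, "101"))  # overlapping count: prints 3
```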
(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person… specifically, I’m interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.
What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.
I’m fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you.
On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I’m relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.
((You can move a tulpa between minds too, probably; it just requires a lot of high-tech, unethical surgery and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))
(shrug) I don’t deny this (though I’m not convinced of it either) but I don’t see the relevance of it.
As someone with personal experience with a tulpa, I agree with most of this.
I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how “well-realized” they are.
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, late-stage Alzheimer’s patient, dolphin, or beloved family pet dog.
I have no idea what a tulpa’s moral status is, besides not less than a fictional character and not more than a typical human.
I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
You are probably counting more properties things can vary under as “ontological”. I’m mostly doing a software vs. hardware, need to be puppeteered vs. automatic, and able to interact with environment vs. stuck in a simulation, here.
I’m basing the moral status largely on “well realized”, “complex”, and “technically sentient” here. You’ll notice all my examples ALSO have the actual utility function multiplier at “unknown”.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts towards its power over reality.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts towards its power over reality.
Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.
That brings up a moral question. To what extent is it immoral to create a Tulpa and have it be in pain?
Tulpa are supposed to suffer from not getting enough attention so if you can’t commit to giving it a lot of attention for the rest of your life you might commit an immoral act by creating it.
Just-so facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans. The by-far dominant source of suffering in tulpas seems to be via empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and sadness over fading away and losing the ability to think and do things.
Look around on http://www.reddit.com/r/Tulpas/ or ask some yourself on the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.
I think the really relevant ethical question is whether a tulpa has a consciousness separate from its host’s. From my own research in the area (which has been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.
In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don’t contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the “fear of social isolation” obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don’t spend time with it regularly (ref).
Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.
Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.
“Sufficiently accurate simulation of consciousness” is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don’t think you have a consensus that all minds have the same moral value, or even that all minds with a certain level of intelligence do.
That’s my understanding as well.… though I would say, rather, that being artificial is not a particularly important attribute towards evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as other consciousnesses with the same properties. That said, I also think this whole “a tulpa {is,isn’t} an artificial intelligence” discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don’t think it matters much in context.
It’s not your normal mind, so it’s artificial for ethical considerations.
I don’t find this argument convincing.
As far as I’ve read the stuff written by people with tulpas, they treat them as entities whose desires matter.
Yes, and..?
Let me quote William Gibson here:
Addictions … started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn’t seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were … less intelligent than goldfish.
There’s a good chance that you will also hold that belief when you interact with the tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.
I still don’t see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter, it’s the desires of my own mind.
Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.
Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!
Is this a serious question? Everything in our society, from laws to social conventions, is based on unified agency.
The consequentialist view of rationality as expressed here seems to be based on the notion of unified agency of people (the notion of a single utility function is only coherent for unified agents).
It’s fine if you don’t want to maintain unified agency, but it’s obviously an important concept for a lot of people. I have not met a single person who has truly abandoned this concept in their life, interactions with others, etc. The conventional view is that someone without unified agency has demons to be cast out (“my name is Legion, for we are many”).
By “agency”, are you referring to physical control of the body? As far as I can tell, the process of “switching” (allowing the tulpa to control the host’s body temporarily) is a very rare process which is a good deal more difficult than just creating a tulpa, and which many people who have tulpas cannot do at all even if they try.
Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).
A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.
I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don’t think schizophrenia is fun.
A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.
This is a concern I share. However...
I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don’t think schizophrenia is fun.
I don’t think so; it can be rephrased tabooing the emotional words. I am not trying to attach the stigma of mental illness; I’m pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder, and that it has significant mental costs.
I’m pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.
Taylor et al. claim that although people who exhibit the illusion of independent agency do score higher than the population norm on a screening test of dissociative symptoms, the profile on the most diagnostic items is different from DID patients, and scores on the test do not predict IIA:
The writers also scored higher than general population norms on the Dissociative Experiences Scale. The mean score across all 28 items on the DES in our sample of writers was 18.52 (SD = 16.07), ranging from a minimum of 1.43 to a maximum of 42.14. This mean is significantly higher than the average DES score of 7.8 found in a general population sample of 415 [27], t(48) = 8.05, p < .001.
In fact, the writers’ scores are closer to the average DES score for a sample of 61 schizophrenics (schizophrenic M = 17.7) [27]. Seven of the writers scored at or above 30, a commonly used cutoff for “normal scores” [29]. There was no difference between men’s and women’s overall DES scores in our sample, a finding consistent with results found in other studies of normal populations [26].
With these comparisons, our goal is to highlight the unusually high scores for our writers, not to suggest that they were psychologically unhealthy. Although scores of 30 or above are more common among people with dissociative disorders (such as Dissociative Identity Disorder), scoring in this range does not guarantee that the person has a dissociative disorder, nor does it constitute a diagnosis of a dissociative disorder [27,29]. Looking at the different subscales of the DES, it is clear that our writers deviated from the norm mainly on items related to the absorption and changeability factor of the DES. Average scores on this subscale (M = 26.22, SD = 14.45) were significantly different from scores on the two subscales that are particularly diagnostic for dissociative disorders: the derealization and depersonalization subscale (M = 7.84, SD = 7.39) and the amnestic experiences subscale (M = 6.80, SD = 8.30), F(1, 48) = 112.49, p < .001. These latter two subscales did not differ from each other, F(1, 48) = .656, p = .42. Seventeen writers scored above 30 on the absorption and changeability scale, whereas only one writer scored above 30 on the derealization and depersonalization scale and only one writer (a different participant) scored above 30 on the amnestic experiences scale.
A regression analysis using the IRI subscales (fantasy, empathic concern, perspective taking, and personal distress) and the DES subscales (absorption and changeability, amnestic experiences, and derealization and depersonalization) to predict overall IIA was run. The overall model was not significant, r^2 = .22, F(7, 41) = 1.63, p = .15. However, writers who had higher IIA scores scored higher on the fantasy subscale of the IRI, b = .333, t(48) = 2.04, p < .05, and marginally lower on the empathic concern subscale, b = -.351, t(48) = −1.82, p < .10 (all betas are standardized). Because not all of the items on the DES are included in one of the three subscales, we also ran a regression model predicting overall IIA from the mean score across DES items. Neither the r^2 nor the standardized beta for total DES scores was significant in this analysis.
Your mind is a very complicated entity. It has been suggested that looking at it as a network (or an ecology) of multiple agents is a more useful view than thinking about it as something monolithic.
In particular, your reasoning consciousness is very much not the only agent in your mind and is not the only controller. An early example of such analysis is Freud’s distinction between the id, the ego, and the superego.
Usually, though, your conscious self has sufficient control in day-to-day activities. This control breaks down, for example, under severe emotional stress. Or it can be subverted (cf. problems with maintaining diets). The point is that it’s not absolute and you can have more of it or less of it. People with less are often described as having “poor impulse control” but that’s not the only mode. Addiction would be another example.
So what I mean here is that the part of your mind that you think of as “I”, the one that does conscious reasoning, will have less control over yourself.
So what I mean here is that the part of your mind that you think of as “I”, the one that does conscious reasoning, will have less control over yourself.
So you mean having less willpower and impulse control?
For example someone who is having hallucinations is usually powerless to stop them. She lost control and it’s not exactly an issue of willpower.
If you’re scared, your body dumps a lot of adrenaline into your blood and you are shaking, your hands are trembling, and you can’t think straight. You’re on the verge of losing control, and again it’s not really a matter of controlling your impulses.
Have you read the earlier discussions on this topic?
It might also illustrate some of the “dangers” - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain “sinful” things which they might not intellectually feel are bad.
I’ve also heard claims of people who gain extra abilities / parallel processing / “reminders” with Tulpas... basically, stuff that they couldn’t do on their own. I don’t really believe that this is possible, and if this were demonstrated to me I would need to update my model of the phenomenon. To the tulpa community’s credit, they seem willing to test the belief.
Very good! A psychologist who studies evangelicals recognized it as the same phenomenon.
There is pretty good empirical evidence against the parallel-processing idea now.
What is stopping me is the possibility that I will be permanently relinquishing cognitive resources for the sake of the Tulpa.
I’ve been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.
Tulpas are described as a myriad of very different things, and thus “tulpas exist in the way people describe them” is not well defined.
There indisputably exist SOME specific interesting phenomena that are the referent of the word “tulpa.”
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, a late-stage Alzheimer’s victim, a dolphin, or a beloved family pet dog.
I estimate its ontological status to be similar to a video game NPC, a recurring dream character, or a schizophrenic hallucination.
I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem to not be automatically unhealthy and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largeish dose of LSD. For this reason I have not and will not attempt making one.
I am too lazy to find citations or examples right now, but I probably could. I’ve tried to be a good rationalist and am fairly certain of most of these claims.
Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don’t let themselves make full use of their intelligence and/or judgement.
Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generating is done before credit is assigned to either the “self” or the “tulpa”.
What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don’t share all their host’s preconceptions. This is not exactly smarts, though, or even general-purpose formal rationality.
One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as “smarter than me,” so that all the brain’s good ideas get credited to it.
Disclaimer: this is based only on lots of anecdotes I’ve read, gut feeling, and basic stuff that should be common knowledge to any LWer.
I’m reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said “Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I… um… completely failed to notice?”
I could certainly describe that as having a “Mark” in my head who is smarter about tax-code-related designs than I am, and there’s nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But “Mark” in this case would just be pointing to a subset of “Dave”, just as “Dave’s fantasies about aliens” does.
See also ‘rubberducking’ and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an ‘adversary’ which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I’d expect that adversarial thinking triggers system II more compared to ‘normal’ self-centered thinking).
Yes. Indeed, I suspect I’ve told this story before on LW in just such a discussion.
I don’t necessarily buy your account—it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.
This is also related to the circumlocution strategy for dealing with aphasia.
Obligatory link.
Yea, in that case presumably the tulpa would help, but not necessarily significantly more than a non-tulpa model that requires considerably less work and risk.
Basically, a tulpa can technically do almost anything you can… but you can do almost all of those things without a tulpa too, and for almost all of them there’s some much easier and at least as effective way to do the same thing.
Mental processes like waking up without an alarm clock at a specific time aren’t easy. I know a bunch of people who have that skill, but it’s not as if there’s a step-by-step manual that you can easily follow to gain that ability.
A tulpa can do things like that. There are many mental processes that you can’t access directly but that a tulpa might be able to access.
I am surprised to hear there isn’t such a step-by-step manual, suspect that you’re wrong about there not being one, and in either case know a few people who could probably easily write one if motivated to do so.
But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it’s less powerful and has a bunch of logistical and moral problems. I don’t like it, but I can’t think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.
My point isn’t so much that it’s impossible but that it isn’t easy.
Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.
Let’s say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn’t be worth the effort.
I’m not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.
It takes a lot of energy to create it the first time but afterwards you reap the benefits.
Tulpa creation involves quite a lot of effort so it doesn’t seem the lazy road.
Hmm, you have a point, I hadn’t thought about it that way. If it wasn’t so dangerous I would have asked you to experiment.
I do not have “wake up at a specific time” ability, but I have trained myself to have “wake up within ~1.5 hours of the specific time” ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.
The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.
The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):
Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.
Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.
Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).
During a night’s sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.
Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I’m reconstructing from memory here. It’s probably possible to make this work in either order.)
You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.
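The arithmetic behind steps 4–6 is simple enough to sketch in a few lines. This is only an illustration of the estimates used above (it assumes the ~1.5-hour cycle length the author works with; real cycle length varies from person to person, so treat the outputs as rough estimates, not a schedule):

```python
# Rough sleep-cycle arithmetic, assuming ~1.5-hour cycles as in the steps above.
CYCLE_HOURS = 1.5

def cycles_in(sleep_hours):
    """Estimate how many full sleep cycles fit into a night's sleep."""
    return round(sleep_hours / CYCLE_HOURS)

def wake_times(bedtime_hour, n_cycles):
    """Clock times (24h decimal hours) at which between-cycle wakings
    should fall, counting forward from bedtime."""
    return [(bedtime_hour + CYCLE_HOURS * k) % 24 for k in range(1, n_cycles + 1)]

print(cycles_in(7.5))       # a 7.5-hour night is ~5 cycles
print(wake_times(22.5, 5))  # in bed at 22:30 -> wakings at 0:00, 1:30, 3:00, 4:30, 6:00
```

This is the same check as step 4: count your between-cycle wakings in a night and compare against `sleep hours / 1.5`.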
I then spent the rest of summer break with a biphasic “first/second sleep” rhythm, which disappeared once I was in school and had to wake up at specific times again.
To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I’ve had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they’re at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It’s a cool skill to have, but it has its downsides.
Yes, I would expect this.
Indeed, I’m surprised by the “almost”—what are the exceptions?
Anything that requires you using your body and interacting physically with the world.
I’m startled. Why can’t a tulpa control my body and interact physically with the world, if it’s (mutually?) convenient for it to do so?
Well, if you consider that the tulpa doing it on its own, then no, I can’t think of any specific exceptions. Most tulpas can’t do that trick, though.
Well, let me put it this way: suppose my tulpa composes a sonnet (call that event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet down using my fingers (E3).
I would not consider any of those to be the tulpa doing something “on its own”, personally. (I don’t mean to raise the whole “independence” question again, as I understand you don’t consider that very important, but, well, you brought it up.)
But if I were willing to consider E1 an example of the tulpa doing something on its own (despite using my brain), I can’t imagine a justification for not considering E2 and E3 equally good examples of the tulpa doing something on its own (despite using my muscles).
But I infer that you would consider E1 (though not E2 or E3) the tulpa doing something on its own. Yes?
So, that’s interesting. Can you expand on your reasons for drawing that distinction?
I feel like I’m tangled up in a lot of words and would like to point out that I’m not an expert and don’t have a tulpa, I just got the basics from reading lots of anecdotes on reddit.
You are entirely right here, although I’d like to point out that most tulpas wouldn’t be able to do E2 and E3, independent or not. Also, something like “composing a sonnet” is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do, and tulpas are mainly just identities. But I could be wrong both about that and about what kind of activity sonnet composing is.
Interesting! OK, that’s not a distinction I’d previously understood you as making.
So, what do identities do, as distinct from what brains can be directed to do?
(In my own model, FWIW, brains construct identities in much the same way brains compose sonnets.)
I guess I basically think of identities as user accounts, in this case. I just grabbed the closest-fitting dichotomy in the language to “brain” (which IS referring to the physical brain), and trying to define it further now will just lead to overfitting, especially since it almost certainly varies far more from brain to brain than either of us expect (due to the typical mind fallacy).
And yea, brains construct identities the same way they construct sonnets. And just like music, it can be small (a jingle, a minor character in something you write) or big (a long symphony, a Tulpa). And identities compose sonnets only slightly more than sonnets create identities.
It’s all just mental content that can be composed, remixed, deleted, executed, etc. Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access, and this identity ends up WAY more developed and powerful than even the most ancient and powerful tulpas, but there is probably little or no qualitative difference.
There are a lot of confounding factors. For example, something that I consider impossibly absurd seems to be the norm for most humans: considering their physical body a part of “themselves” and feeling violated if their body is violated. Put in their perspective, it’s not surprising most people can’t disentangle parts of their own brain(s), mind(s), and identities without meditating for years until they get it shoved in their face via direct perception, and even then they probably often get it wrong. Although I guess my illness has shoved it in my face just as anviliciously.
Disclaimer: I got tired of trying to put disclaimers on the dubious sources of each individual sentence, so just take it all with a grain of salt, and don’t assume I believe everything I say in any persistent way.
OK… I think I understand this. And I agree with much of it.
Some exceptions...
I don’t think I understand what you mean by “root access” here. Can you give me some examples of things that an identity with root access can do, that an identity without root access cannot do?
This is admittedly a digression, but for my own part, treating my physical body as part of myself seems no more absurd or arbitrary to me than treating my memories of what I had for breakfast this morning as part of myself, or my memories of my mom, or my inability to juggle. It’s kind of absurd, yes, but all attachment to personal identity is kind of absurd. We do it anyway.
All of that said… well, let me put it this way: continuing the sonnet analogy, let’s say my brain writes a sonnet (S1) today and then writes a sonnet (S2) tomorrow. To my way of thinking, the value-add of S2 over and above S1 depends significantly on the overlap between them. If the only difference is that S2 corrects a mis-spelled word in S1, for example, I’m inclined to say that value(S1+S2) = value(S2) ~= value(S1) .
For example, if S1 → S2 is an improvement, I’m happy to discard S1 if I can keep S2, but I’m almost as happy to discard S2 if I can keep S1 -- while I do have a preference for keeping S2 over keeping S1, it’s noise relative to my preference for keeping one of them over losing both.
I can imagine exceptions to the above, but they’re contrived.
So, the fix-a-mispelling case is one extreme, where the difference between S1 and S2 is very small. But as the difference increases, the value(S1+S2) = value(S2) ~= value(S1) equation becomes less and less acceptable. At the other extreme, I’m inclined to say that S2 is simply a separate sonnet, which was inspired by S1 but is distinct from it, and value(S1+S2) ~= value(S2) + value(S1).
And those extremes are really just two regions in a multidimensional space of sonnet-valuation.
Does that seem like a reasonable way to think about sonnets? (I don’t mean is it complete; of course there’s an enormous amount of necessary thinking about sonnets I’m not including here. I just mean have I said anything that strikes you as wrong?)
Does it seem like an equally reasonable way to think about identities?
“Root access” was probably too metaphorical a choice of words. Is “skeletal musculature privileges” clearer?
All those things like memories or skillsets you list as part of identity do seem weird, but even irrelevant software is not nearly as weird as specific hardware. I mean, seriously, attaching significance to specific atoms? Wut? But of course, I know it’s really me that’s weird, and most humans do it.
I agree with what you say about sonnets; it’s very well put, in fact. And yes, identities do follow the same rules. I’m trying to come up with fitting tulpa stuff in the metaphor. It doesn’t really work, though, because I don’t know enough about it.
This is getting a wee bit complicated, and I think we’re starting to reach the point where we have to dissolve the classifications and actually model things in detail on continuums, which means more conjecture and guesswork, less data, and what data we have being less relevant. We’ve been working mostly in metaphors that don’t really go this far without breaking down. Also, since we’re getting into more and more detail, the stuff we are examining is likely to be drowned out by the differences between brains, and the conversation will turn into nonsense due to the typical mind fallacy.
As such, I am unwilling to widely spout what’s likely to end up at least half nonsense, at least publicly. Contact me by PM if you’re really all that interested in getting my working model of identities and my mental bestiary.
Would you classify a novel in the same “moral-status” tier as these four examples?
No, that’s much, much lower. As in, “torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do” lower.
Assuming you mean either a physical book or the simulation of the average minor character in the author’s mind, here. Main characters or RPing PCs can vary a lot in complexity of simulation from author to author, and there’s a theory that some become effectively tulpas.
Your answer clarifies what I was trying to get at with my question but wasn’t quite sure how to ask, thanks; my question was deeply muddled.
For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that’s about as far as I go.
I didn’t mean a physical copy of a novel, sorry that wasn’t clear.
Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.
But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.
I have no idea what “torture a novel” even means, it strikes me as a category error on a par with “paint German blue” or “burn last Tuesday”.
Ah. No, I think you’d change your mind if you spent a few hours talking to accounts that claim to be tulpas.
A newborn infant or Alzheimer’s patient is not an independent individual distinct from its caretaker either. Do you count their destruction as a property crime as well? “Person”-ness is not binary; it’s not even a continuum. It’s a cluster of properties that usually correlate but in the case of tulpas do not. I recommend re-reading Diseased Thinking.
As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.
I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what’s happening.
I agree that person-ness is a cluster of properties and not a binary.
I don’t believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.
I don’t think I’m failing to understand any of what’s discussed in Diseased Thinking. If there’s something in particular you think I’m failing to understand, I’d appreciate you pointing it out.
It’s possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It’s also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I’m relevantly different from them.
Of course, that doesn’t mean that reincarnation happens, nor that spirit-beings exist who can be channeled, or that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.
Eh?
I can take a newborn infant away from its caretaker and hand it to a different caretaker… or to no caretaker at all… or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.
That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.
Why do you say it isn’t?
I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.
Seems I was wrong about you not understanding the word thing. Apologies.
You keep using that word "independent". I’m starting to think we might not disagree about any objective properties of tulpas, just about whether things need to be "independent" (or whether only the most important ones count) towards your utility, whereas I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are "10101101"; you’re saying "101" occurs 2 times, I’m saying "101" occurs 3 times.
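The metaphor can be made concrete: whether "101" occurs twice or three times in "10101101" depends on whether matches are allowed to overlap. A minimal Python sketch (the helper name here is my own, just for illustration):

```python
def count_overlapping(s, pat):
    """Count occurrences of pat in s, allowing matches to overlap."""
    count = 0
    start = 0
    while True:
        idx = s.find(pat, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1  # advance by one char, so overlapping matches count

s = "10101101"
print(s.count("101"))               # non-overlapping: 2
print(count_overlapping(s, "101"))  # overlapping: 3
```

Python's built-in `str.count` counts non-overlapping occurrences, which is the "only the most important count" view; the helper counts every identifiable pattern, overlaps and all.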
I’m fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs I’d already have those beliefs. Conservation of Expected Evidence.
((You can move a tulpa between minds too, probably; it just requires a lot of high-tech, unethical surgery and work, and probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))
(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person… specifically, I’m interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.
What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.
On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I’m relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.
(shrug) I don’t deny this (though I’m not convinced of it either) but I don’t see the relevance of it.
Yeah, this seems to definitely be just a fundamental values conflict. Let’s just end the conversation here.
What do you think about the moral status of torturing an uploaded human mind that’s in silicon?
Does that mind have a different moral status than one in a brain?
Certainly not by virtue of being implemented in silicon, no. Why do you ask?
As someone with personal experience with a tulpa, I agree with most of this.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how “well-realized” they are.
I have no idea what a tulpa’s moral status is, besides not less than a fictional character and not more than a typical human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
You are probably counting more of the properties things can vary under as "ontological". I’m mostly going by software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.
I’m basing the moral status largely on "well realized", "complex" and "technically sentient" here. You’ll notice all my examples ALSO have the actual utility-function multiplier at "unknown".
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts towards its power over reality.
Ah. I see what you mean. That makes sense.
Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.
That brings up a moral question. To what extent is it immoral to create a tulpa and have it be in pain?
Tulpas are supposed to suffer from not getting enough attention, so if you can’t commit to giving one a lot of attention for the rest of your life, you might be committing an immoral act by creating it.
Just stating facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans do. The by far dominant source of suffering in tulpas seems to be empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and sadness over fading away and losing the ability to think and do things.
This is very useful information if true. Could you link to some of the anecdotes which you draw this from?
Look around on http://www.reddit.com/r/Tulpas/ or ask some yourself on the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.
No, I don’t think so. It’s notably missing the “artificial” part of AI.
I think of tulpa creation as splitting off a shard of your own mind. It’s still your own mind, only split now.
I think the really relevant ethical question is whether a tulpa has a separate consciousness from its host. From my own researches in the area (which have been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.
In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don’t contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the “fear of social isolation” obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don’t spend time with it regularly (ref).
Just to clarify this a little… how many separate consciousnesses do you estimate your brain currently hosts?
By my current (layman’s) understanding of consciousness, my brain currently hosts exactly one.
OK, thanks.
It’s not your normal mind, so it’s artificial for the purposes of ethical consideration.
As far as I’ve read stuff written by people with tulpas, they treat them as entities whose desires matter.
This might be a stupid question, but what ethical considerations are different for an “artificial” mind?
When talking about AGI few people label it as murder to shut down the AI that’s in the box. At least it’s worth a discussion whether it is.
Only if it’s not sapient, which is a non-trivial question.
Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.
Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.
“Sufficiently accurate simulation of consciousness” is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don’t think you have a consensus that all minds have the same moral value, or even all minds with a certain level of intelligence.
At least for me, personally, the relevant property for moral status is whether it has consciousness.
That’s my understanding as well… though I would say, rather, that being artificial is not a particularly important attribute when evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as to other consciousnesses with the same properties. That said, I also think this whole “a tulpa {is,isn’t} an artificial intelligence” discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don’t think it matters much in context.
I don’t find this argument convincing.
Yes, and..?
Let me quote William Gibson here:
Addictions … started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn’t seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were … less intelligent than goldfish.
There’s a good chance that you will also hold that belief when you interact with the tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.
I still don’t see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter, it’s the desires of my own mind.
Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.
Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!
Why would it be considered important to maintain a feeling of unified agency?
Is this a serious question? Everything in our society, from laws to social conventions, is based on unified agency.
The consequentialist view of rationality as expressed here seems to be based on the notion of unified agency of people (the notion of a single utility function is only coherent for unified agents).
It’s fine if you don’t want to maintain unified agency, but it’s obviously an important concept for a lot of people. I have not met a single person who truly has abandoned this concept in their life, interactions with others, etc. The conventional view is someone without unified agency has demons to be cast out (“my name is Legion, for we are many.”)
By “agency”, are you referring to physical control of the body? As far as I can tell, the process of “switching” (allowing the tulpa to control the host’s body temporarily) is a very rare process which is a good deal more difficult than just creating a tulpa, and which many people who have tulpas cannot do at all even if they try.
Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).
A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.
I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don’t think schizophrenia is fun.
This is a concern I share. However...
This is the worst argument in the world.
I don’t think so, it can be rephrased tabooing emotional words. I am not trying to attach some stigma of mental illness, I’m pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.
Taylor et al. claim that although people who exhibit the illusion of independent agency do score higher than the population norm on a screening test of dissociative symptoms, the profile on the most diagnostic items is different from DID patients, and scores on the test do not predict IIA:
Could you describe the relevant mental costs that you would expect as a side effect of creating a tulpa?
Loss of control over your mind.
What does that mean?
An entirely literal reading of that phrase.
So you mean that you are something separate from your mind? If so, what is “you”, and how does it control the mind?
Your mind is a very complicated entity. It has been suggested that looking at it as a network (or an ecology) of multiple agents is a more useful view than thinking about it as something monolithic.
In particular, your reasoning consciousness is very much not the only agent in your mind and is not the only controller. An early example of such analysis is Freud’s distinction between the id, the ego, and the superego.
Usually, though, your conscious self has sufficient control in day-to-day activities. This control breaks down, for example, under severe emotional stress. Or it can be subverted (cf. problems with maintaining diets). The point is that it’s not absolute and you can have more of it or less of it. People with less are often described as having “poor impulse control” but that’s not the only mode. Addiction would be another example.
So what I mean here is that the part of your mind that you think of as “I”, the one that does conscious reasoning, will have less control over yourself.
So you mean having less willpower and impulse control?
Not only, I mean a wider loss of control.
For example someone who is having hallucinations is usually powerless to stop them. She lost control and it’s not exactly an issue of willpower.
If you’re scared, your body has dumped a lot of adrenaline into your blood: you are shaking, your hands are trembling, and you can’t think straight. You’re on the verge of losing control, and again it’s not really a matter of controlling your impulses.
My understanding is that in the case of tulpas, the hallucinations are voluntary and can be stopped and started at will.
While you make your tulpa, you may also want to investigate whether you are a reincarnated Nazi or a good reptilian.
I really want to downvote this for being mean.
Except that I laughed so hard I spit coffee out my nose.
Well, OK, I sort of want to downvote this for making me spit coffee out my nose, too.
But now I no longer trust my impartiality.