If you have a model that requires strenuous effort or lengthy struggle to induce change in a human being, that’s a pretty good indication there’s something seriously wrong with your model.
Credible only in so far as “one can consistently induce change in a human being without strenuous effort or lengthy struggle” is, and I don’t think the latter is anything like obviously right. On the face of it, it seems obviously wrong: people often do require effort and struggle to change, and evolutionarily speaking that seems like what one should expect. (You don’t want random other people to be able to change your behaviour too easily, and easy self-modification is liable to make for too-easy modification by others.)
… my own “miracle” techniques …
You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn’t seem all that rare for you to post something saying “I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I’m better now.” To my cynical eye, there seems to be some tension here.
Again, think of priming! Priming is EASY, not hard.
OK, so there are some ways, commonly harmful but maybe sometimes exploitable for good, in which our mental states can be messed with non-rationally for a shortish period. Remind me, please, how that is supposed to be good evidence that we can consistently change our behaviours, motivations, etc., in ways we actually want to, with lasting effect?
On the face of it, it seems obviously wrong: people often do require effort and struggle to change, and evolutionarily speaking that seems like what one should expect.
Yes… and no. See below:
(You don’t want random other people to be able to change your behaviour too easily, and easy self-modification is liable to make for too-easy modification by others.)
Exactly. The conscious mind is both an offensive weapon (for persuading others) and a defense against persuasion. Separating conscious/social (“far”) beliefs from action-driving (“near”) beliefs allows the individual to get along with the group while remaining unconvinced enough to continue acting impulsively for their own benefit under low-supervision circumstances.
In other words, willpower works better when you’re being watched… which is exactly what we’d expect.
The offense/defense machinery evolved with or after language; initially language probably worked directly on the “near” system, which led to the possibility of exploitation via persuasion… and an ensuing arms race of intelligence driven by the need for improved persuasive ability and improved skepticism, balanced by the benefit of remaining able to be truly convinced of things for which sufficient sensory (“near”) evidence is available.
You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn’t seem all that rare for you to post something saying “I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I’m better now.” To my cynical eye, there seems to be some tension here.
Also, it’s important to bear in mind that knowing how to change something, knowing what to change, and knowing what to change it to, are all different skills. I’ve known methods for the first for quite some time now, and the last year or two I’ve focused more on the second. This year, I’ve finally started making some serious progress on the third one as well, which is actually part of understanding the second. (I.e., if you know where you’re going, it’s easier to know what’s not there yet.)
For example, Dweck’s work on fixed vs. growth mindsets: that stuff isn’t a matter of global beliefs in a literal sense. Each area of your life that you perceive as “fixed” may be a distinct belief on the emotional level, so each one needs to be changed as it’s encountered. In the month or so since I read her book, I’ve identified over half a dozen such mindsets: intelligence, time, task granularity, correctness, etc… each of which was a distinct “belief” at the emotional level regarding its “fixed”-ness.
Changing each one was “magical” in the sense that it opened up a range of choices that wasn’t available to me before… but I couldn’t simply read her book and decide, “woohoo, I will change all my fixed mindsets to growth ones”. The brain does not have a “view source” button; you cannot simply “list” all your beliefs on the basis of an abstract pattern like fixedness vs. growthness, or ones that involve supernatural thinking, or any other non-sensory abstractions. (Abstractions are in the “far” system, not the “near” one.)
OK, so there are some ways, commonly harmful but maybe sometimes exploitable for good, in which our mental states can be messed with non-rationally for a shortish period. Remind me, please, how that is supposed to be good evidence that we can consistently change our behaviours, motivations, etc., in ways we actually want to, with lasting effect?
We are constantly self-priming. Techniques that work, work because they change the data we prime ourselves with.
When you discovered that Santa Claus didn’t exist, did you try to stay up late to see him any more, or did your behavior change immediately, with lasting effect?
Basically, you stopped priming yourself with the thoughts that generated those behaviors, because your brain was no longer predicting certain events to occur. It is our never-ending stream of automatically-generated internal predictions that is the main internal source of priming. Change that prediction stream, and you change the behavior.
External methods of change work by forcing new predictions; internal methods (including CBT, NLP, hypnosis, etc.) work by manipulating the internal representations that are used to generate the predictions.
And yet, it doesn’t seem all that rare for you to post something saying “I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I’m better now.” To my cynical eye, there seems to be some tension here.
As a programmer, I will charitably note that it’s not uncommon for a more serious bug to mask other, more subtle ones; fixing the big one is still good, even if the program may look just as badly broken afterwards. Judging from his blog, he’s doing well enough for himself, and if he was in a pretty bad state to begin with, his claims may be justified. There’s a difference between “I fixed the emotional hang-up that was making this chore hard to do” and “I’ve fixed a crippling, self-reinforcing terror of failure that kept me from doing anything with my life”.
That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture—but then, the same could be said of Eliezer, and he’s clearly won many people over with his ideas.
As a programmer, I will charitably note that it’s not uncommon for a more serious bug to mask other more subtle ones; fixing the big one is still good, even if the program may look just as badly broken afterwards.
And one of the unfortunate things about the human architecture is that the more global a belief/process is, the more invisible it is… which is rather the opposite of what happens in normal computer programming. That makes high-level errors much harder to spot than low-level ones.
The first year or so, I spent way too much time dealing with “hangups making this chore hard to do”, not realizing that the more important hangups are about why you think you need to do those chores in the first place. So it has been taking a while to climb the abstraction tree.
For another thing, certain processes are difficult to spot because they’re cyclical over a longer time period. I recently realized that I was addicted to getting insight into problems, when it wasn’t really necessary to understand them in order to fix them, even at the relatively shallow level of understanding I usually worked with. In effect, insight was just a way of convincing myself to “lower the anti-persuasion shields”.
The really crazy/annoying thing is that I keep finding evidence that other people have figured ALL of this stuff out before, but either couldn’t explain it or couldn’t convince anybody else to take it seriously. (That doesn’t make me question the validity of what I’ve found, but it does make me question whether I’ll be able to explain/convince any more successfully than the rest did.)
That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture
Heh, you think mine are grandiose, you should hear the claims that other people make for what are basically the same techniques! I’m actually quite modest. ;-)
“That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture—but then, the same could be said of Eliezer, and he’s clearly won many people over with his ideas.”
Precisely the point. We’re not interested in how to attract people to doctrines (or at least I’m not), but in determining what is true and finding ever-better ways to determine what is true.
The popularity of some idea is absolutely irrelevant in itself. We need evidence of coherence and accuracy, not prestige, in order to reach intelligent conclusions.
The popularity of some idea is absolutely irrelevant in itself.
Compelling, but false. Ideas’ popularity not only contributes network effects to their usefulness (which might be irrelevant by your criteria), but it also provides evidence that they’re worth considering.
You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn’t seem all that rare for you to post something saying “I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I’m better now.” To my cynical eye, there seems to be some tension here.
Are you saying that he displays bad behavior because he keeps fixing himself? I thought that was a good thing.
With more relevance:
Credible only in so far as “one can consistently induce change in a human being without strenuous effort or lengthy struggle” is, and I don’t think the latter is anything like obviously right. On the face of it, it seems obviously wrong: people often do require effort and struggle to change.
I agree with your statement, but only in the sense that an individual person will require effort and struggle to change under most would-be-magical treatments, yet may respond quickly to one particular treatment. To throw in my own personal experience: people change pretty quickly once you find the trick that works on them.
Are you saying that he displays bad behavior because he keeps fixing himself?
No, of course not. I’m saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.
Now, of course, I don’t know for sure that Philip isn’t Winning much more than all the rest of us who haven’t become Clear by getting rid of our body thetans—oh, excuse me, wrong form of miraculous psychological fixing—so maybe it’s just natural cynicism that makes me doubt it. Philip, what say you? Is your brain much better than the rest of ours now?
No, of course not. I’m saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.
Yes, of course, because we all know that if you have a text substitution tool like ‘sed’, you should be able to fix all the bugs in a legacy codebase written over a period of 30-some years by a large number of people, even though you have no ability to list the contents of that codebase, in just a couple of years working part-time, while you’re learning about the architecture and programming language used. Yeah, that should be a piece of cake.
Oh yeah, and there are lots of manuals available, but we can’t tell you which ones sound sensible but were actually written by idiots who don’t know what they’re talking about, and which ones sound like they were written by lunatic channelers but actually give good practical information.
Plus, since the code you’re working on is your own head, you get to deal with compiler bugs and bugs in the debugger. Glorious fun! I highly recommend it. Not.
It certainly doesn’t help that I started out from a more f’d up place than most of my clients. I’ve had a few clients who got one session with me or attended one workshop and then considered themselves completely fixed, and others who spent only a few months with me before deciding they were good to go.
It also doesn’t help that you can’t see your own belief frames as easily as you can see the frames of others. It’s easy to be a coach or guru to someone else. Ridiculously so, compared to doing it to yourself.
See, the thing is that you don’t just say “I’ve got some ways of tweaking how my brain works. They aren’t very good, and I don’t really have any understanding of what I’m doing, but I find this interesting.” (Which would be the equivalent of “I’ve got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can’t read”.)
Which is not all that surprising, given that you’re trying to make a living from helping people fix their brains, and you wouldn’t get many clients by saying “I don’t really have any more idea what I’m doing than some newbie wannabe hacker trying to wrangle the source code for Windows with no tools more powerful than sed”. But I really don’t think you should both claim that you understand brains and know how to fix them and you have “miracle” techniques and so on and so forth, and protest as soon as that’s questioned “oh, but really it’s like trying to work on an insanely complicated pile of legacy software with only crappy tools”.
See, the thing is that you don’t just say “I’ve got some ways of tweaking how my brain works. They aren’t very good, and I don’t really have any understanding of what I’m doing, but I find this interesting.” (Which would be the equivalent of “I’ve got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can’t read”.)
Actually, I do say that; few of my blog posts do much else besides describe some bug I found, what I did to fix it, and throw in some tips about the pitfalls involved.
But I really don’t think you should both claim that you understand brains and know how to fix them and you have “miracle” techniques and so on and so forth, and protest as soon as that’s questioned “oh, but really it’s like trying to work on an insanely complicated pile of legacy software with only crappy tools”.
If I told you I had a miracle tool called a “wrench”, that made it much easier to turn things, but said you had to find which pipes or bolts to turn with it, and whether they needed to be tightened or loosened, would you say that that was a contradiction? Would you expect that having a wrench would instantly make you into a plumber, or an expert on a thousand different custom-built steam engines? That makes no sense.
Computer programmers have the same problem: what their clients perceive as “simple” vs. “difficult/miracle” is different from what is actually simple or a miracle for the programmer. Sometimes they’re the same, and sometimes not.
In the same way, many things that people on this forum consider “simple” changes can in fact be mind-bogglingly complicated to implement, while other things that they consider to be high-end Culture-level transhumanism are fucking trivial.
Funny story: probably the only reason I’m here is because in Eliezer’s work I recognized a commonality: the effort to escape the mind-projection fallacy. In his case, it was such projections applied to AI, but in my case, it’s such projections applied to self. As long as you think of your mind in non-reductionistic terms, you’re not going to have a useful map for change purposes.
(Oh, and by the way, I never claimed to “fix brains”—that’s your nomenclature. I change the contents of brains to fix bugs in people’s behavior. Brains aren’t broken, or at least aren’t fixable. They just have some rather nasty design limitations on the hardware level that contribute to the creation of bugs on the software level.)
I think this discussion is getting too lengthy and off-topic, so I shall be very brief. (I’ll also remark: I’m not actually quite as cynical about your claims as I am probably appearing here.)
If I told you I had a miracle tool called a “wrench” [...]
If you told me you had a miracle tool called a wrench, and an immensely complicated machine with no supporting documentation, whose workings you didn’t understand, and that you were getting really good results by tweaking random things with the wrench (note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work) … why, then, I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
I never claimed to “fix brains”—that’s your nomenclature.
Yes, that’s my nomenclature (though you did say “the code you’re working on is your own head”...), and I’m sorry if it bothers you. Changes to the “contents of brains”, IIUC, are mostly made by changing the actual brain a bit; the software/hardware distinction is nowhere near as clean as it is with digital computers.
(note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work)
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
Well, when I started down this road, I was desperate enough that the risk of frying something was much less than the risk of not doing something. Happily, I can now say that the brain is a lot more redundant, even at the software level, than we tend to think. It basically uses a “when in doubt, use brute force” approach to computation. It’s inelegant in one sense, but VERY robust: massively robust compared to any human-built hardware OR software.
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand. Roughly half of my job is fixing other people’s “fixes” because they really had no concept of what was happening or how to use the tools in the box correctly.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand.
Brain code doesn’t crash, and the brain isn’t capable of locking in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we’ve got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.
In fact, brains weren’t designed for conscious programming as such. What “mind hacking” essentially consists of is deliberately directing the brain to information that convinces it to make its own programming changes, in the same way that it normally updates its programming—e.g. by noticing that something is no longer true, a mistake in classification has been made, etc. (The key being that these changes have to be accomplished at the “near” thinking level, which operates primarily on simple sensory/emotional patterns, rather than verbal abstractions.)
In a sense, to make a change at all, you have to convince the brain that what you are asking it to change to will produce better results than what it’s already doing. (Again, in “near”, sensory terms.) Otherwise, it won’t “take” in the first place, or else it will revert to the old programming or generate new programming once you get it “in the field”.
I don’t mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you’re modifying. In a sense, it’d be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.
It isn’t exactly like that, of course—all these things are all just metaphors. There isn’t really anything there to “convince”, it’s just that what you add into your memory won’t become the preferred response unless it meets certain criteria, relative to the existing options.
Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.
For example, suppose you have an expert system that has a rule like “give up because you’re no good at it”, and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
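The expert-system analogy above can be made concrete with a toy sketch. Everything here is made up for illustration (the rule names, priorities, and the `run` helper are not from any actual tool or technique); the point is only the structure: rules fire in priority order, so one high-priority blocking rule shadows every rule that would do the actual task, and deleting it changes behavior without adding anything new.

```python
# Toy rule system: fire the highest-priority rule whose condition matches.
def run(rules, context):
    """Return the action of the highest-priority matching rule."""
    for priority, condition, action in sorted(rules, reverse=True):
        if condition(context):
            return action
    return "no rule fired"

rules = [
    (10, lambda ctx: ctx == "hard task", "give up: you're no good at this"),
    (5,  lambda ctx: ctx == "hard task", "break the task into steps"),
    (1,  lambda ctx: True,               "do the next step"),
]

print(run(rules, "hard task"))  # the priority-10 blocking rule shadows the rest

# "Deleting" the blocking rule is the whole fix: the ordinary
# task-performing rules were there all along, just shadowed.
rules = [r for r in rules if not r[2].startswith("give up")]
print(run(rules, "hard task"))  # now the task rules get to fire
```

Nothing is added to the system in the second run; removing the one shadowing rule is what looks, from outside, like a miraculous cure.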
I suppose what I’m trying to say is that there isn’t anything I’m doing that brains can’t or don’t already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.
If you have miraculous brain-fixing techniques and deploy them [...] on yourself for years, then after those years you should surely [...] (2) not still need fixing all the time.
I think my only response to what you said is that some things need fixing forever, since the perfect picture is being viewed from a fuzzy perspective. Personally, I doubt that the self-improvement ladder ever ends.
I agree with your statement, but only in the sense that an individual person will require effort and struggle to change under most would-be-magical treatments, yet may respond quickly to one particular treatment. To throw in my own personal experience: people change pretty quickly once you find the trick that works on them.
I would further extend that to suggest that in fact, there is an essential nature to all tricks that work, and that the key is only that you get a person to actually DO that trick, without running other mental programs at the same time. I am finding that if I (in the coaching context) simply push someone through a process, and ruthlessly deny them the opportunity to digress, disbelieve, dispute, etc., then they will get a result that they were previously failing to obtain, due to being distracted by their own skepticism. (Not in the sense of doubting me or the technique, but doubting their own ability to do the technique, or to actually change, etc.)
This leads me to believe that most of the difference between schools of self-help is largely persuasive in function: you have to first convince a person to lower their anti-persuasion shields in order to get them to actually carry out any effective self-persuasion. Because in effect, self-management equals self-persuasion.
Interesting. Is the link you provided a good start along that topic? Is there a better introductory place? The specific request:
I would like to learn more about disabling anti-persuasion shields as it relates to running (or not running) mental “programs”. That is: which mental programs are anti-persuasion shields, and is disabling anti-persuasion shields similar to disabling other mental programs?
Is the link you provided a good start along that topic?
It’s not bad. I linked to a page that can be read from the top, to give a good intro to the idea of “suggestion” vs. “autosuggestion”, but really most of the book is quite good. Although it was written almost 100 years ago, Coué’s book has some pretty stunning insights into what actually works. There are just a few points that I would add to what he says in the book.
First, he doesn’t address the distinctions between verbal and sensory imagination, commanding and questioning. Second, he doesn’t say much to address the issue of dealing with existing beliefs and responses.
The first omission results in people mindlessly repeating phrases or “affirmations” and thinking they are doing autosuggestion—they are not. Imagination must be used, and by imagination, I mean not intentional visualization (which would be counterproductively invoking what he calls the “will”), but rather the passive contemplation or musing on an idea, like “what would it be like if...?” Or “how good will it be when...?” (These are leading questions, of course, but then, that’s the point: to lead your imagination to respond on its own.)
The second omission is that when you attempt to imagine what something is like, your internal response may be a feeling or idea that it is impossible, impractical, nonsensical, a bad idea, that you can’t do it, or some other form of interference.
At this point, it’s really only necessary to go on wondering what it would be like if you already had the desired thing, anyway. In other words, you acknowledge the response, but do not treat it as if it were true. You then repeat the attempted inquiry. This is how you bypass the shields, as it were.
Is disabling anti-persuasion shields similar to disabling other mental programs?
Yes and no. It used to be that I spent all my time (and encouraged others to spend theirs) on modifying the memories that induced the kind of critical responses and negative predictions that stopped them from doing things. What I have begun wondering only today is whether it might not be simpler just to bypass such blocks and not give them any credence to start with.
In other words, it has occurred to me that maybe it is not the initial negative response that blocks people; maybe it’s just their response to that response. That is, person A gets a negative response to the idea of doing something, and then responds to that by giving up or feeling like it’s useless, whereas person B might get the same initial negative response, and then respond to it by imagining how good the result is going to be. Paradoxically, the more negative responses person B gets, the greater their motivation will become.
So, I’m currently self-experimenting with that idea—of focusing on the 2nd order responses to blocks instead of the 1st order blocks themselves. If it works, it should be a big increase in efficiency, since the 2nd-order responses are more likely to be system-global, meaning fewer program changes needed to effect system-wide change.
But that’s still to be tested. Right now, I’ve just noticed that bypassing blocks by simply ignoring the 1st-order response is quite possible. I’ve done it with various things today and it has worked quite well so far.
I would like to learn more about disabling anti-persuasion shields as related to running (or not running) mental “programs”.
To disable your own shields, you just refrain from internal critique and stay focused on whatever process of autosuggestion you’re undertaking. Suspend disbelief, in other words.
Think of autosuggestion as requiring a sterile internal environment. If you think something like “I don’t know how to do this” while trying to imagine something, you will be priming yourself with “not knowing how to do it”!
Remember, just seeing words to do with “old” made people walk more slowly… if you pipe stronger messages into your own head, you will get stronger results.
What makes it work is not “belief” but experience without disbelief. After all, priming can occur without conscious notice—if it were consciously noticed, the person might choose to disregard it.
But since you don’t choose to disregard your own beliefs about what is possible or what you can do, you (as Coué says) “imagine that you cannot, and of course you cannot”.
However, if you do disbelieve your interrupting beliefs, and allow yourself to contemplate the thing you want to believe or do without disbelief, then you will successfully “autosuggest” something.
(See also Coué on imagination vs. will: if you think of the will as conscious/verbal/directed thought, and the imagination as subconscious/sensory/wondering thought, then what he says will make sense.)
It’s not bad. I linked to a page that can be read from the top, to give a good intro to the idea of “suggestion” vs. “autosuggestion”, but really most of the book is quite good.
Thanks. I will probably respond after processing the information in your post and your book, so heads up for a reply in the deep future. :)
Credible only in so far as “one can consistently induce change in a human being without strenuous effort or lengthy struggle” is, and I don’t think the latter is anything like obviously right. On the face of it, it seems obviously wrong: people often do require effort and struggle to change, and evolutionarily speaking that seems like what one should expect. (You don’t want random other people to be able to change your behaviour too easily, and easy self-modification is liable to make for too-easy modification by others.)
You remind us frequently about what miraculous techniques you have. So it seems like by now you should be a walking miracle, a paragon of well-adjusted Winning. And yet, it doesn’t seem all that rare for you to post something saying “I just discovered another idiotic bug in my mental functioning. So I bypassed the gibson using my self-transcending transcendence-transmogrification method, and I’m better now.” To my cynical eye, there seems to be some tension here.
OK, so there are some ways, commonly harmful but maybe sometimes exploitable for good, in which our mental states can be messed with non-rationally for a shortish period. Remind me, please, how that is supposed to be good evidence that we can consistently change our behaviours, motivations, etc., in ways we actually want to, with lasting effect?
Yes… and no. See below:
Exactly. The conscious mind is both an offensive weapon (for persuading others) and a defense against persuasion. Separating conscious/social (“far”) beliefs from action-driving (“near”) beliefs allows the individual to get along with the group while remaining unconvinced enough to continue acting impulsively for their own benefit under low-supervision circumstances.
In other words, willpower works better when you’re being watched… which is exactly what we’d expect.
The offense/defense machinery evolved with or after language; initially language probably worked directly on the “near” system, which led to the possibility of exploitation via persuasion… and an ensuing arms race of intelligence driven by the need for improved persuasive ability and improved skepticism, balanced by the benefit of remaining able to be truly convinced of things for which sufficient sensory (“near”) evidence is available.
Only if you don’t get that belief systems aren’t always global. Some beliefs are more global than others.
Also, it’s important to bear in mind that knowing how to change something, knowing what to change, and knowing what to change it to, are all different skills. I’ve known methods for the first for quite some time now, and for the last year or two I’ve focused more on the second. This year, I’ve finally started making some serious progress on the third one as well, which is actually part of understanding the second. (I.e., if you know where you’re going, it’s easier to know what’s not there yet.)
For example, Dweck’s work on fixed vs. growth mindsets: that stuff isn’t a matter of global beliefs in a literal sense. Each area of your life that you perceive as “fixed” may be a distinct belief on the emotional level, so each one needs to be changed as it’s encountered. In the month or so since I read her book, I’ve identified over half a dozen such mindsets: intelligence, time, task granularity, correctness, etc… each of which was a distinct “belief” at the emotional level regarding its “fixed”-ness.
Changing each one was “magical” in the sense that it opened up a range of choices that wasn’t available to me before… but I couldn’t simply read her book and decide, “woohoo, I will change all my fixed mindsets to growth ones”. The brain does not have a “view source” button; you cannot simply “list” all your beliefs on the basis of an abstract pattern like fixedness vs. growthness, or ones that involve supernatural thinking, or any other non-sensory abstractions. (Abstractions are in the “far” system, not the “near” one.)
We are constantly self-priming. Techniques that work, work because they change the data we prime ourselves with.
When you discovered that Santa Claus didn’t exist, did you try to stay up late to see him any more, or did your behavior change immediately, with lasting effect?
Basically, you stopped priming yourself with the thoughts that generated those behaviors, because your brain was no longer predicting certain events to occur. It is our never-ending stream of automatically-generated internal predictions that is the main internal source of priming. Change that prediction stream, and you change the behavior.
External methods of change work by forcing new predictions; internal methods (including CBT, NLP, hypnosis, etc.) work by manipulating the internal representations that are used to generate the predictions.
As a programmer, I will charitably note that it’s not uncommon for a more serious bug to mask other more subtle ones; fixing the big one is still good, even if the program may look just as badly broken afterwards. Judging from his blog, he’s doing well enough for himself, and if he was in a pretty bad state to begin with his claims may be justified. There’s a difference between “I fixed the emotional hang-up that was making this chore hard to do” and “I’ve fixed a crippling, self-reinforcing terror of failure that kept me from doing anything with my life”.
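Since the point is being made in programming terms, here is a minimal sketch of how a serious bug can mask a subtler one. The function names and the specific bugs are invented for illustration only:

```python
# Hypothetical two-bug example: a serious bug (a crash) masks a subtler
# one (an off-by-one), so fixing the crash leaves the program looking
# just as broken as before.

def average_v1(xs):
    total = 0
    for i in range(len(xs) - 1):    # subtle bug: skips the last element
        total += xs[i]
    return total / lenn(xs)         # serious bug: NameError on every call

def average_v2(xs):
    # The crash is fixed...
    total = 0
    for i in range(len(xs) - 1):    # ...but the off-by-one remains
        total += xs[i]
    return total / len(xs)

# average_v1 never runs at all; average_v2 runs, but average_v2([2, 4, 6])
# returns 2.0 rather than the correct 4.0 -- still visibly wrong.
```

Fixing the big bug was still progress, even though the observable output is as wrong as ever.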
That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture—but then, the same could be said of Eliezer, and he’s clearly won many people over with his ideas.
And one of the unfortunate things about the human architecture is that the more global a belief/process is, the more invisible it is… which is rather the opposite of what happens in normal computer programming. That makes high-level errors much harder to spot than low-level ones.
The first year or so, I spent way too much time dealing with “hangups making this chore hard to do”, without realizing that the more important hangups are about why you think you need to do those chores in the first place. So it has been taking a while to climb the abstraction tree.
For another thing, certain processes are difficult to spot because they’re cyclical over a longer time period. I recently realized that I was addicted to getting insight into problems, when it wasn’t really necessary to understand them in order to fix them, even at the relatively shallow level of understanding I usually worked with. In effect, insight was just a way of convincing myself to “lower the anti-persuasion shields”.
The really crazy/annoying thing is I keep finding evidence that other people have figured ALL of this stuff out before, but either couldn’t explain it or convince anybody else to take it seriously. (That doesn’t make me question the validity of what I’ve found, but it does make me question whether I’ll be able to explain/convince any more successfully than the rest did.)
Heh, you think mine are grandiose, you should hear the claims that other people make for what are basically the same techniques! I’m actually quite modest. ;-)
“That said, there is a lack of solid evidence, and the grandiosity of the claims suggests brilliant insight or crackpottery in some mixture—but then, the same could be said of Eliezer, and he’s clearly won many people over with his ideas.”
Precisely the point. We’re not interested in how to attract people to doctrines (or at least I’m not), but in determining what is true and finding ever-better ways to determine what is true.
The popularity of some idea is absolutely irrelevant in itself. We need evidence of coherence and accuracy, not prestige, in order to reach intelligent conclusions.
Compelling, but false. Ideas’ popularity not only contributes network effects to their usefulness (which might be irrelevant by your criteria), but it also provides evidence that they’re worth considering.
Are you saying that he displays bad behavior because he keeps fixing himself? I thought that was a good thing.
With more relevance:
I agree with your statement, but only in the sense that an individual person will require effort and struggle to change with regard to most magical treatments, yet may respond quickly to a particular treatment. To throw in my own personal experience, people change pretty quickly once you find the trick that works on them.
No, of course not. I’m saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.
Now, of course, I don’t know for sure that Philip isn’t Winning much more than all the rest of us who haven’t become Clear by getting rid of our body thetans—oh, excuse me, wrong form of miraculous psychological fixing—so maybe it’s just natural cynicism that makes me doubt it. Philip, what say you? Is your brain much better than the rest of ours now?
Yes, of course, because we all know that if you have a text substitution tool like ‘sed’, you should be able to fix all the bugs in a legacy codebase written over a period of 30-some years by a large number of people, even though you have no ability to list the contents of that codebase, in just a couple of years working part-time, while you’re learning about the architecture and programming language used. Yeah, that should be a piece of cake.
Oh yeah, and there are lots of manuals available, but we can’t tell you which ones sound sensible but were actually written by idiots who don’t know what they’re talking about, and which ones sound like they were written by lunatic channelers but actually give good practical information.
Plus, since the code you’re working on is your own head, you get to deal with compiler bugs and bugs in the debugger. Glorious fun! I highly recommend it. Not.
It certainly doesn’t help that I started out from a more f’d up place than most of my clients. I’ve had a few clients who’ve gotten one session with me or attended one workshop who then considered themselves completely fixed, and others that spent only a few months with me before deciding they were good to go.
It also doesn’t help that you can’t see your own belief frames as easily as you can see the frames of others. It’s easy to be a coach or guru to someone else. Ridiculously so, compared to doing it to yourself.
See, the thing is that you don’t just say “I’ve got some ways of tweaking how my brain works. They aren’t very good, and I don’t really have any understanding of what I’m doing, but I find this interesting.” (Which would be the equivalent of “I’ve got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can’t read”.)
Which is not all that surprising, given that you’re trying to make a living from helping people fix their brains, and you wouldn’t get many clients by saying “I don’t really have any more idea what I’m doing than some newbie wannabe hacker trying to wrangle the source code for Windows with no tools more powerful than sed”. But I really don’t think you should both claim that you understand brains, know how to fix them, and have “miracle” techniques, and so on and so forth, and then protest as soon as that’s questioned: “oh, but really it’s like trying to work on an insanely complicated pile of legacy software with only crappy tools”.
Actually, I do say that; few of my blog posts do much else besides describe some bug I found, what I did to fix it, and throw in some tips about the pitfalls involved.
If I told you I had a miracle tool called a “wrench”, that made it much easier to turn things, but said you had to find which pipes or bolts to turn with it, and whether they needed to be tightened or loosened, would you say that that was a contradiction? Would you expect that having a wrench would instantly make you into a plumber, or an expert on a thousand different custom-built steam engines? That makes no sense.
Computer programmers have the same problem: what their clients perceive as “simple” vs. “difficult/miracle” is different from what is actually simple or a miracle for the programmer. Sometimes they’re the same, and sometimes not.
In the same way, many things that people on this forum consider “simple” changes can in fact be mind-bogglingly complicated to implement, while other things that they consider to be high-end Culture-level transhumanism are fucking trivial.
Funny story: probably the only reason I’m here is because in Eliezer’s work I recognized a commonality: the effort to escape the mind-projection fallacy. In his case, it was such projections applied to AI, but in my case, it’s such projections applied to self. As long as you think of your mind in non-reductionistic terms, you’re not going to have a useful map for change purposes.
(Oh, and by the way, I never claimed to “fix brains”—that’s your nomenclature. I change the contents of brains to fix bugs in people’s behavior. Brains aren’t broken, or at least aren’t fixable. They just have some rather nasty design limitations on the hardware level that contribute to the creation of bugs on the software level.)
I think this discussion is getting too lengthy and off-topic, so I shall be very brief. (I’ll also remark: I’m not actually quite as cynical about your claims as I am probably appearing here.)
If you told me you had a miracle tool called a wrench, and an immensely complicated machine with no supporting documentation, whose workings you didn’t understand, and that you were getting really good results by tweaking random things with the wrench (note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work) … why, then, I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
Yes, that’s my nomenclature (though you did say “the code you’re working on is your own head”...), and I’m sorry if it bothers you. Changes to the “contents of brains”, IIUC, are mostly made by changing the actual brain a bit; the software/hardware distinction is nowhere near as clean as it is with digital computers.
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
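The “shared function” point can be sketched concretely. This is a toy model only, with invented names (`threat_check`, `DANGEROUS`, `respond`): many context-specific behaviors route through one shared check, so one change to the shared piece generalizes, where a change inside a single behavior would stay local.

```python
# Hypothetical sketch: every "behavior" calls one shared routine, so a
# single patch to that routine changes all of them at once.

DANGEROUS = {"public speaking", "cold calls", "asking for help"}

def threat_check(context):
    # Shared "function" used by every behavior below.
    return context in DANGEROUS

def respond(context):
    return "avoid" if threat_check(context) else "engage"

before = [respond(c) for c in sorted(DANGEROUS)]   # every context avoided

def threat_check(context):                         # patch the one shared routine...
    return False

after = [respond(c) for c in sorted(DANGEROUS)]    # ...and every behavior changes
```

Whether a real change lands in something “shared” or something context-specific is, as the comment says, hard to predict in advance.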
Well, when I started down this road, I was desperate enough that the risk of frying something was much less than the risk of not doing something. Happily, I can now say that the brain is a lot more redundant—even at the software level—than we tend to think. It basically uses a “when in doubt, use brute force” approach to computation. It’s inelegant in one sense, but VERY robust: massively robust compared to any human-built hardware OR software.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand. Roughly half of my job is fixing other people’s “fixes” because they really had no concept of what was happening or how to use the tools in the box correctly.
Brain code doesn’t crash, and the brain isn’t capable of locking in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we’ve got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.
In fact, brains weren’t designed for conscious programming as such. What “mind hacking” essentially consists of is deliberately directing the brain to information that convinces it to make its own programming changes, in the same way that it normally updates its programming—e.g. by noticing that something is no longer true, a mistake in classification has been made, etc. (The key being that these changes have to be accomplished at the “near” thinking level, which operates primarily on simple sensory/emotional patterns, rather than verbal abstractions.)
In a sense, to make a change at all, you have to convince the brain that what you are asking it to change to will produce better results than what it’s already doing. (Again, in “near”, sensory terms.) Otherwise, it won’t “take” in the first place, or else it will revert to the old programming or generate new programming once you get it “in the field”.
I don’t mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you’re modifying. In a sense, it’d be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.
It isn’t exactly like that, of course—these things are all just metaphors. There isn’t really anything there to “convince”; it’s just that what you add into your memory won’t become the preferred response unless it meets certain criteria, relative to the existing options.
Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.
For example, suppose you have an expert system that has a rule like “give up because you’re no good at it”, and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
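The expert-system example above can be made concrete in a few lines. Everything here is invented for illustration (the rule names, priorities, and the `fire` dispatcher): rules fire in priority order, and one high-priority self-defeating rule preempts the rules that would actually do the task.

```python
# Toy expert system: each rule is (priority, name, action), and the
# highest-priority rule wins.

rules = [
    (10, "give up: no good at this", lambda task: "gave up"),
    (5,  "plan the task",            lambda task: f"planned {task}"),
    (1,  "do the next step",         lambda task: f"started {task}"),
]

def fire(task, rules):
    # Dispatch to the highest-priority rule only.
    _, _, action = max(rules, key=lambda r: r[0])
    return action(task)

blocked = fire("the report", rules)           # the "give up" rule preempts everything

# Delete the one self-defeating rule and the system "miraculously" works:
working = [r for r in rules if not r[1].startswith("give up")]
unblocked = fire("the report", working)       # normal task rules now fire
```

Nothing about the task rules was touched; removing the single blocking rule is what produces the apparently miraculous change.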
I suppose what I’m trying to say is that there isn’t anything I’m doing that brains can’t or don’t already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.
Ah, that makes sense.
I think my only response to what you said is that some things need fixing forever since the perfect picture is being viewed from a fuzzy perspective. Personally, I doubt that the self-improvement ladder ever ends.
But I understand and concede the point.
I would further extend that to suggest that in fact, there is an essential nature to all tricks that work, and that the key is only that you get a person to actually DO that trick, without running other mental programs at the same time. I am finding that if I (in the coaching context) simply push someone through a process, and ruthlessly deny them the opportunity to digress, disbelieve, dispute, etc., then they will get a result that they were previously failing to obtain, due to being distracted by their own skepticism. (Not in the sense of doubting me or the technique, but doubting their own ability to do the technique, or to actually change, etc.)
This leads me to believe that most of the difference between schools of self-help is largely persuasive in function: you have to first convince a person to lower their anti-persuasion shields in order to get them to actually carry out any effective self-persuasion. Because in effect, self-management equals self-persuasion.
Interesting. Is the link you provided a good start along that topic? Is there a better introductory place? The specific request:
I would like to learn more about disabling anti-persuasion shields as related to running (or not running) mental “programs”. As in, which mental programs are anti-persuasion shields and is disabling anti-persuasion shields similar to disabling other mental programs.
It’s not bad. I linked to a page that can be read from the top, to give a good intro to the idea of “suggestion” vs. “autosuggestion”, but really most of the book is quite good. Although it was written almost 100 years ago, Coue’s book has some pretty stunning insights into what actually works. There are just a few points that I would add to what he says in the book.
First, he doesn’t address the distinctions between verbal and sensory imagination, commanding and questioning. Second, he doesn’t say much to address the issue of dealing with existing beliefs and responses.
The first omission results in people mindlessly repeating phrases or “affirmations” and thinking they are doing autosuggestion—they are not. Imagination must be used, and by imagination, I mean not intentional visualization (which would be counterproductively invoking what he calls the “will”), but rather the passive contemplation or musing on an idea, like “what would it be like if...?” Or “how good will it be when...?” (These are leading questions, of course, but then, that’s the point: to lead your imagination to respond on its own.)
The second omission is that when you attempt to imagine what something is like, your internal response may be a feeling or idea that it is impossible, impractical, nonsensical, a bad idea, that you can’t do it, or some other form of interference.
At this point, it’s really only necessary to go on wondering what it would be like if you already had the desired thing, anyway. In other words, one acknowledges the response, but does not treat it as if it were true. You then repeat the attempt at inquiry. This is how you bypass the shields, as it were.
Yes and no. It used to be that I spent all my time (and encouraged others to spend theirs) on modifying the memories that induced the kind of critical responses and negative predictions that stopped them from doing things. What I have begun wondering only today, is whether it might not be simpler just to bypass such blocks and not give them any credence to start with.
In other words, it has occurred to me that maybe it is not the initial negative response that’s an issue for people being blocked; maybe it’s just their response to that response. That is, person A gets a negative response to the idea of doing something, and then responds to that by giving up or feeling like it’s useless. Whereas person B might get the same initial negative response, and then respond to it by imagining how good the result is going to be. Paradoxically, the more negative responses person B gets, the greater their motivation will become.
So, I’m currently self-experimenting with that idea—of focusing on the 2nd-order responses to blocks instead of the 1st-order blocks themselves. If it works, it should be a big increase in efficiency, since the 2nd-order responses are more likely to be system-global, meaning fewer program changes are needed to effect system-wide change.
But that’s still to be tested. Right now, I’ve just noticed that bypassing blocks by simply ignoring the 1st-order response is quite possible. I’ve done it with various things today and it has worked quite well so far.
To disable your own shields, you just refrain from internal critique and stay focused on whatever process of autosuggestion you’re undertaking. Suspend disbelief, in other words.
Think of autosuggestion as requiring a sterile internal environment. If you think something like “I don’t know how to do this” while trying to imagine something, you will be priming yourself with “not knowing how to do it”!
Remember, just seeing words to do with “old” made people walk more slowly… if you pipe stronger messages into your own head, you will get stronger results.
What makes it work is not “belief” but experience without disbelief. After all, priming can occur without conscious notice—if it were consciously noticed, the person might choose to disregard it.
But since you don’t choose to disregard your own beliefs about what is possible or what you can do, you (as Coue says) “imagine that you cannot, and of course you cannot”.
However, if you do disbelieve your interrupting beliefs, and allow yourself to contemplate the thing you want to believe or do without disbelief, then you will successfully “autosuggest” something.
(See also Coue on imagination vs. will—if you think of the will as conscious/verbal/directed thought, and the imagination as subconscious/sensory/wondering thought, then what he says will make sense.)
Thanks. I will probably respond after processing the information in your post and your book, so heads-up for a reply in the deep future. :)