I agree with most of what you say here, but I think you’re over-emphasizing the idea that search deals with unknowns whereas control deals with knowns.
There’s uncertainty in both approaches, but it is dealt with differently. In controls, you’ll often use Kalman filters to estimate the relevant states. You might not know your exact state because there is noise on all your sensors, and you may have uncertainty in your estimate of the amount of noise, but given your best estimates of the variance, you can calculate the one best estimate of your actual state.
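To make the “compute the one best estimate” point concrete, here’s a minimal sketch of a scalar Kalman measurement update (the numbers in the example are made up, and a real filter would also include a prediction step). The point is that the gain falls out of the variances in closed form — there’s nothing to search over:

```python
def kalman_update(x_prior, p_prior, z, r):
    """One scalar Kalman measurement update.

    x_prior: prior state estimate, p_prior: prior variance,
    z: noisy measurement, r: measurement noise variance.
    """
    k = p_prior / (p_prior + r)           # optimal gain, in closed form
    x_post = x_prior + k * (z - x_prior)  # blend prediction and measurement
    p_post = (1 - k) * p_prior            # posterior variance always shrinks
    return x_post, p_post

# Example: prior estimate 10.0 (variance 4.0), measurement 12.0 (variance 1.0).
# Gain = 4/(4+1) = 0.8, so the estimate moves most of the way to the measurement.
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
```

Given your best estimates of the two variances, this *is* the optimal blend; trying other gains in simulation can’t beat it under those assumptions.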
There’s still nothing to search for in the sense of “using our model of our system, try different Kalman filter gains and see what works best”, because the math already answered that for you definitively. If you’re searching in the real world (i.e. actually trying different gains and seeing what works best), that can help, but only because you’re getting more information about what your noise distributions are actually like. You can also just measure that directly and then do the math.
With search over purely simulated outcomes, you’re saying essentially “I have uncertainty over how to do the math”, while in control theory you’re essentially saying “I don’t”.
Perhaps a useful analogy would be that of numerical integration vs. symbolic integration. You can brute force a decent enough approximation of any integral just by drawing a bunch of little trapezoids and summing them up, and a smart high schooler can write the program to do it. Symbolic integration is much “harder”, but can often give exact solutions and isn’t so hard to compute once you know how to do it.
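For the numerical side of the analogy, the “bunch of little trapezoids” program really is about this short (the integrand and interval below are just arbitrary examples):

```python
def trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f from a to b by summing n trapezoids."""
    h = (b - a) / n
    # Endpoints count half, interior points count once; scale by step width.
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Example: the integral of x^2 from 0 to 1 is exactly 1/3;
# the brute-force sum lands within ~1e-9 of it.
approx = trapezoid(lambda x: x * x, 0.0, 1.0)
```

Symbolic integration gives you 1/3 exactly and instantly, but only after you’ve learned how to do the math — which mirrors the search-vs-control tradeoff above.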
Why not both?
You can do both. I’m not trying to argue that doing the math and calculating the optimal answer is always the right thing to do (or even feasible/possible).
In the real world, I often do sorta “search through gains” instead of trying to get my analysis perfect or model my meta-uncertainty. Just yesterday, for example, we had some overshoot on the linear actuator we’re working on. Trying to do the math would have been extremely tedious and I likely would have messed it up anyway, but it took about two minutes to just change the values and try it until it worked well. It’s worth noting that “searching” by actually doing experiments is different than “searching” by running simulations, but the latter can make sense too—if engineer time doing control theory is expensive, laptop time running simulations is cheap, and the latter can substitute for the former to some degree.
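The “laptop time is cheap” version of searching through gains can be as simple as a sweep over a simulated model. This is only a toy sketch — the second-order plant, the fixed damping, and the overshoot tolerance below are all invented for illustration, not a model of any real actuator:

```python
def overshoot(kp, steps=2000, dt=0.01):
    """Simulate a toy second-order actuator under proportional control.

    Returns the peak amount by which position exceeds the target.
    """
    pos, vel, target, peak = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        force = kp * (target - pos) - 2.0 * vel  # P control, fixed damping
        vel += force * dt                        # symplectic Euler step
        pos += vel * dt
        peak = max(peak, pos - target)
    return peak

# "Search" over gains in simulation: keep the gains whose overshoot stays small,
# then take the most aggressive one that still behaves.
candidates = [kp for kp in range(1, 101) if overshoot(float(kp)) < 0.02]
best_kp = max(candidates)
```

Control theory would tell you the same answer analytically (for this plant, the damping ratio is a simple function of the gain), but the sweep takes two minutes of laptop time instead of an afternoon of algebra — which is exactly the tradeoff described above.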
The point I was making was that the optimal solution is still going to be what control theory says, so if it’s important to you to have the rightest answer with the fewest mistakes, you move away from searching and towards the control theory textbook—not the other way around.
Most of your post is describing situations where you can’t easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation.
I don’t follow this part.
I would agree that a search process in which the cost of evaluation goes to infinity becomes purely a control process: you can’t perform any filtering of possibilities based on evaluation, so, you have to output one possibility and try to make it a good one (with no guarantees).
This is backwards, actually. “Control” isn’t the crummy option you have to resort to when you can’t afford to search. Searching is what you have to resort to when you can’t do control theory.
When your Jacuzzi is at 60f and you want it at 102f, there are a lot of possible heating profiles you could try out. However, you know that no combination of “on off off on on off off on” is going to surprise you by giving a better result than simply leaving the heater on when it’s too cold and off when it’s too hot. Control theory actually can guarantee the optimal results, and with some simple assumptions it’s exactly what it seems like it’d be. Guided missiles do get more complicated than this with all the inertias and significant measurement noise and moving target and all that, but the principle remains the same: compute the best estimate of where you stand relative to the trajectory you want to be on (where “trajectory” includes things like the angular rates of your control surfaces), and then steer your trajectory towards that. There’s just nothing left to search for when you already know the best thing to do.
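The Jacuzzi claim can be demonstrated directly with a deliberately toy thermal model (all the numbers here — heating rate, heat loss, step count — are invented): exhaustively search every on/off profile, and the simple feedback rule matches the best profile the search finds, without searching anything.

```python
from itertools import product

def simulate(profile, temp=60.0, gain=8.0, loss=0.5):
    """Toy thermal model: each step adds `gain` F if the heater is on,
    and the water always loses `loss` F to the environment."""
    for on in profile:
        temp += (gain if on else 0.0) - loss
    return temp

def bang_bang(steps=6, setpoint=102.0):
    """Feedback rule: heater on when below the setpoint, off otherwise."""
    temp, profile = 60.0, []
    for _ in range(steps):
        on = temp < setpoint
        profile.append(on)
        temp += (8.0 if on else 0.0) - 0.5
    return tuple(profile)

# Exhaustive "search" over all 2^6 on/off profiles, scored by final error.
best = min(product([True, False], repeat=6),
           key=lambda p: abs(simulate(p) - 102.0))

# The feedback rule does at least as well as the best searched profile.
assert abs(simulate(bang_bang()) - 102.0) <= abs(simulate(best) - 102.0) + 1e-9
```

No combination of “on off off on on off” beats the rule here, which is the point: once you know the best thing to do at each step, the search is redundant.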
The reason we ever need to search is because it’s not always obvious when our actions are bringing us towards or away from our desired trajectory. “Searching” is performing trial and error by simulating forward in time until you realize “nope, this leads to a bad outcome” and backing up to before you “made” the mistake and trying something else. For example, if you’re trying to cook a meal, you might have to get all the way to the finished product before you realize that you started out with too much of one of your ingredients. However, this is a result of not knowing the composition you’re looking for and how your inputs affect it. Once you understand the objective, the process and actuators, and how things project into the future, you know your best guess of where to go at each step. If the water is too cold, you simply turn the heater on.
Searching, then, isn’t just something we do when projecting forward and evaluating outcomes is cheap. It’s what we do when analyzing the problem and building an understanding of how our inputs affect our trajectories (i.e. control theory) is expensive. Or difficult, or impossible.
Or perhaps better put, searching is for when we haven’t yet found what we want and how to get there. Control systems are what we implement once we know.
Not every important concept has implications which are immediately obvious, and it’s generally worth making space for things which are true even when you can’t yet find the implications. It’s also worth making the post.
That said, one of the biggest implications I draw from this concept is that of “seeking ’no’s”. If you want a “yes”, then often what you can do is go out of your way to make “no” super easy to say, so that the only reason they won’t say “yes” is because “yes” isn’t actually true/in their best interests. A trivial example might be that if you want someone to help you unload your moving truck, giving them the out “I know you’ve got other things you need to do, so if you’re busy I can just hire some people to help” will make it easier to commit to a “yes” and not feel resentful for being asked favors.
More subtly, if you’re interested in “showing someone that they’re wrong”, often it’s more effective to drop the goal entirely and instead focus on where you might be wrong. If you can ask things with genuine curiosity and intent to learn, people become much more open to sharing their true objections and then noticing when their views may not add up.
“Seeking ’no’s” is a concept that applies everywhere though, and most people don’t do it nearly enough.
You’re right that “feelings are information, not numbers to maximize” and that hiding a user’s posts is often not a good solution because of this.
I don’t think Christian is making this mistake though.
When someone is suffering from an injury they cannot heal, there are two problems, not one. The first is the injury itself — the broken leg, the loss of a relationship, whatever it may be. The second is that incessant alarm saying “THIS IS BAD THIS IS BAD THIS IS BAD” even when there’s nothing you can do.
If you want to help someone in this situation, it’s important to distinguish (and help them distinguish) between the two problems and come to agreement about which one it is that you should be trying to solve: are we trying to fix the injury here, or are we just trying to become more comfortable with the fact that we’re injured? Even asking this question can literally transform the sensation of pain, if the resulting reflection concludes “yeah, there’s nothing else to do about this injury” and “yeah, actually the sensation of pain itself isn’t a problem”.
Earlier in this discussion, Vanessa said “I feel X”, and the response she got was taking the problem to be about the “X” part, and arguing that X is not true. This is a great and satisfying response so long as the perceived problem is definitely “X” and not at all “I feel”. The response wasn’t satisfying though, and she responded by saying that she thought “I feel” was enough to be worth saying.
Since it has already been said that “if the problem is X, we can discuss whether X is actually true, and solve it if it is”, Christian’s contribution was to add “and if it’s not that you think X is actually true and just want help with your feelings, here’s a way that can help”. It’s helpful in the case where Vanessa decides “yes, the problem is primarily the feeling itself, which is maladaptive here”, and it’s also helpful in clarifying (to her and to others) that if she isn’t interested in taking the nerve block, her objection must be a factual claim about X itself, which can then be dealt with as we deal with factual claims (without special regards to feelings, which have been decided to be “not the problem”).
It’s not the most warm and welcoming way to deal with feelings (which may or may not reflect information that is accurate, or perceived as accurate upon reflection), but not every space has to be warm and welcoming. There is a risk of conflating “it helps build community to help people manage their feelings” with “catering to feelings takes precedence over recognizing fact”, and that’s a nasty failure mode to fall into. If we want to manage that rule with a hard and fast “no emotional labor will be supplied here, you must manage your feelings in your own time”, that is a valid approach. And if there is a real threat of that conflation taking over, it’s probably the right one. However, there are better (more pleasant, welcoming/community building, and yes, truth-finding) methods that we can play with once we’re comfortable that we’re safe from feelings becoming a negative utility monster problem. It’s just that in order to play with them safely, we must be very clear about the distinction between “I feel X, and this is valid evidence which you need to deal with” and “I feel X, and this is my problem, which I would appreciate assistance with even though you’re obviously not obligated to fix it for me”.
[...]but when the whole point of my comment was that jimmy ignored Mary’s substantive point I think it’s obnoxious to then ignore my substantive point about Mary’s substantive point being ignored.
FWIW, “jimmy ignored Mary’s substantive point” is both uncharitable and untrue, and both “making uncharitable and untrue statements as if they were uncontested fact” and “stating that you find things obnoxious in cases where people might disagree about what is appropriate instead of offering an argument as to why it shouldn’t be done” stand out as far more obnoxious to me.
I normally would just ignore it (because again, I think saying “I think that’s obnoxious” is generally obnoxious and unhelpful) but given your comment you’ll probably either find the feedback helpful or else it’ll help you change your mind about whether it’s helpful to call out things one finds to be obnoxious :P
The exact phrasing isn’t important, but conveying the right message is. As Zvi and Ruby note, that “being”/”doing”/etc part is important. “You’re dumb” is not an acceptable alternative because it does not mean the same thing. “Your argument is bad” is also unacceptable because it also means something completely different.
“Your argument is bad” only means “your argument is bad”, and it is possible to go about things in a perfectly reasonable way and still have bad arguments sometimes. It is completely different than a situation where someone is failing to notice problems in their arguments which would be obvious to them if they weren’t engaging in motivated cognition and muddying their own thinking. An inability to think well is quite literally what “dumb” is, and “being dumb” is a literal description of what they’re doing, not a sloppy or motivated attempt to say or pretend to be saying something else.
As far as “then why does it always come out that way”, besides the fact that “you’re being dumb” is far quicker to say than the more neutral “you’re engaging in motivated cognition”, in my experience it doesn’t always or even usually come out that way — and in fact often doesn’t come out at all, which was kinda the point of my original comment.
When it does take that form, there are often good reasons which go beyond “¼ the syllables” and are completely above board, explicit, and agreed upon by both parties. Counter-signalling respect and affection is perhaps the clearest example.
There are examples of people doing it poorly or with hostile and dishonest intent, of course, but the answer to “why do those people do it that way” is a very different question than what was asked.
This is not the test for whether a statement has meaning. If I say “this vaccine you’re getting does not cause autism”, that would be meaningful even if there were no vaccine anywhere for which the statement is false. It has meaning whenever “this vaccine causes autism” describes a different world than “this vaccine does not cause autism”.
It may not convey any information to you if you already know “there are no vaccines about which that statement is false”, but not everyone shares that certainty, and the people who don’t might benefit from reassurance.
This definitely depends on you being right about the “this vaccine doesn’t cause autism” thing, of course. You have to be able to honestly and justifiably state “this vaccine does not cause autism”, as encouraging people to take vaccines under false or unjustified premises is bad. You have to maintain openness to checking the data with them and changing your own mind if you do not find what you expect to find, because if you’ve closed your mind to the data not only does that make your job of persuasion harder, it makes your job of actually being reliably right harder. I’d even go so far as to say that not only should you be willing to put your money where your mouth is, you should even be able to do it *without flinching*. This means being able to put yourself in their shoes and actually experience “okayness” yourself.
Yes, if you can’t do all of these things then you should do something about it before assuring them that it’s okay. However, if you have good reason to believe that the statement is always true, that just means “figure out how to do all these things” is the thing you do about it before assuring them that it’s okay.
The precise phrasing isn’t important, and often “growls” do work. The important part is in knowing that you can safely express your criticisms unfiltered and they’ll be taken for what they’re worth.
To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends.
I don’t think they’re mutually exclusive. My response in close relationships tends to be both extra combative and extra nurturing, depending on the context.
The extra combativeness comes from common knowledge of respect, as has already been discussed. The extra nurturing is more interesting, and there are multiple things going on.
Telling people when they’re being dumb and having them listen can be important. If those paths haven’t been carved yet, it can be important to say “this is dumb” and prove that you can be reliably right when you say things like that. Doing that productively isn’t trivial, and the fight to get your words respected at full value can get in the way of nurturing. In my close relationships where I can simply say “you’re being dumb” and have them stop and say “oops, what am I missing?” I sometimes do, but I’m also far more likely to be uninterested in saying that because they’ll figure it out soon enough and I actually am curious why they’re doing something that seems so deeply mistaken to me. Just like how security in nurturing can allow combativeness, security in combativeness can allow nurturing.
Another thing is that when people gain trust in you to not shit on them when they’re vulnerable, they start opening up more in places in which nurture is the more appropriate response. In these cases it’s not that I’m being nurturing instead of being combative, it’s that I’m being nurturing instead of not having the interaction at all. Relative to the extreme care that’d need to be taken with someone less close in those areas, that high level of nurturing is still more combative.
The times I was able to get people to do things that they felt were too unlikely to commit to were largely about lowering the emotional costs of failure. The context is a bit different, but it seems likely that some of the same factors apply.
Using “writing HPMoR” as an example, there’s more than one thing failure could be taken to mean. One is “I tested a high risk high reward idea, and it didn’t pan out. I learned something useful about what kinds of things I can’t do (right away, at least), and it still strikes me as having been worth attempting, given what I knew at the time. If I keep trying high risk high reward ideas one of them is likely to pay out, because the idea that I’m limited by what social expectations would see as “modest” isn’t even worth taking seriously”. A completely different thing it could mean is “I was arrogant to think I had a chance at this. I learned nothing on the object level because I already knew I couldn’t do it, but on the meta level I learned that I was wrong to set this aside and hope. In hindsight, it was a mistake that never was worth trying in the first place, and if I keep trying high risk high rewards things I’m just going to keep failing because social expectations of what I’m capable of are *right*”. The people with the latter anticipation are going to be less thrilled about flipping that coin with a 50% chance of success because the other 50% hurts a lot more.
The former mindset *sounds* a lot better, and people are going to want to say “yeah, that one sounds right! I believe *that* one!” even when their private thoughts tend towards the latter mindset. If you try to get someone in the latter category to act like they’re in the former category, you’re going to run into motivation problems. You’re going to hear “You’re right, and I want to… I just can’t find the motivation”.
In order to get people to shift from “failure means I should be less confident and try less” to “failure means this particular one didn’t pan out, and it’s still worth trying more”, you have to be able to engage with (and pass the ‘ideological turing test’ of) their impulses to take failure as indicative of a larger problem. There is definitely a skill to this, and it can be tough when you can plainly see that the right answer is to “just try it”. At the same time, it’s a skill that can be learned and it does work for opening things up for change.
I missed this response because I hadn’t found the “someone has replied to your comment” indicator.
The question “is it true” is exactly what informs me when I say “I know this fear to be irrational”. I’ve seen situations in which one person is little more than a burden on another, and is still accepted and even taken care of much like one would do with any given loved one regardless of their practical worth. The failure I’m pointing to is that I can completely understand that line of reasoning, but my intuitive belief seems to be unaffected by it. The update in information created by this test didn’t cascade down into my intuition, which I think is because my intuition is holding a piece (or set) of stronger beliefs that conflict with this anticipation. There is something arguing a “Yes, but...” where the ‘but’ is still more convincing than the ‘yes’.
Is it that the information “didn’t cascade down” to your intuition, or is it just that your intuition doesn’t find that piece of information as convincing as you think it ought to be?
In general, when you get a “yes, but” (and *especially* when the “but” is explicitly more convincing than the “yes”), focus on the “but”. But what? Yes, you understand that you’ve seen situations where one person sure seems to be little more than a burden and is still accepted, but that part of you still isn’t convinced. Why not? What’s in the “but”?
If I had to take a guess, you probably don’t *want* to be little more than a burden on someone else, even if they still accept you (maybe they *shouldn’t*, even). I know that’s the case with other people, and if you feel the same way it would make sense that “but they’ll accept me anyway” doesn’t feel like it changes anything, no?
I’m not sure I follow you on the idea of lines of retreat. It seems like a ‘line of retreat’ is moving around an obstacle deemed too difficult rather than through it. It would be useful to accept the obstacle as insurmountable without rigorous testing if you need to move forward before you can complete the testing. But my issue is that if this obstacle is too long, then I’m constantly skirting a more optimal path. It’s like walking around a forest instead of through it because you don’t trust yourself to survive in the forest. What I’m after right now is how to survive in the forest because I think it will be faster and better in the long term to learn this skill than to become really good at skirting the forest.
I’m not sure I follow you either. Are you saying that you’d rather go forward with convincing yourself of something that you think is true rather than “going around” by making a line of retreat? If so, that’s not really what I’m getting at. I’m not saying “go around instead”, I’m saying “*even if* you want to go forward, the best way to do that when stuck is to open up the option of going around”.
I’ll give you an example. I recently had a client that wanted me to hypnotize him to forget something. I pointed out to him that what he wants is to *believe differently*: he actually doesn’t know for sure that the thing he’s asking to forget actually happened—after all, it’s possible that I hypnotized him to think it was real to prove a point. He was “yeah, but”ing me by saying stuff like “yeah, I mean, I guess that’s possible, but I don’t think it’s very likely”—and then not taking the idea seriously at all. I picked apart his reasoning and let him know that doing that kind of thing to prove a point is *exactly* the kind of thing I’d do, and that I have indeed done it in the past. Eventually it got down to “yeah, I mean, everything you’re saying makes sense, but I just don’t believe it”.
Seems irrational, no? Like, if you aren’t going to open your mind to evidence, then how do you expect to learn when you’re wrong? If I had doubled down on the wrongness of this decision, it would have pushed him to agreeing with what I was saying, yet being unable to actually experience the uncertainty that I was pointing him towards. Instead, what I said was “while that may *seem* silly, that’s actually a really good strategy to keep yourself from being manipulated by tricky hypnotists”. I was giving him a line of retreat by saying “we don’t have to do this”, and putting into words his reluctance to let me inspire doubt about such seemingly fundamental things. I didn’t do it because I thought he should *take* it, but because I knew giving him the option would keep him from getting hung up and stuck on it *regardless* of which he felt was the better option. Reminding him that he didn’t *have to* keep going forward turned out to be a *really quick* way of getting him to accept his passage through the forest. It reminded him that he *wanted* to get his perspective manipulated by me, that this is why he was there, and so he admitted that it really was a serious possibility and took it appropriately seriously and we were back on track.
I hadn’t heard Confidence All The Way Up as a name but I’m familiar with the concept; in some places I have this, and more often than not other people have called it a weakness: that I would too readily dismiss other people’s ideas as “not aligned with the evidence” because I was spending more time developing my own theory than thinking about the implications of the statements of others. Part of me would think “So now I’m selfish because I don’t care about things that are easily disproven?” and part of me would think “Maybe I didn’t understand what they actually meant.” The second part recently started winning (probably due to a deterioration of a key relationship and not necessarily based on evidence in the strictest sense) and so I’ve been purposefully suppressing Confidence All The Way Up and trying to be a better listener. But I think he has a point that this is a useful way to function, and I would do well to apply it here. I don’t think I’ve sunk into hopelessness, so much as I’ve gotten stuck.
The weakness isn’t “being confident”, it’s in “dismissing the ideas of people who he wants to continue relating to before they agree that he would be right to”.
The question is “does the fact that their ideas are not aligned with the evidence as I see it mean that I should dismiss their views?”, and I think the answer is a pretty strong “no”, in general. You don’t have to object that people’s views aren’t aligned with the evidence just because (in your view) they are not. You don’t have to squash your feeling of confidence to listen once you realize that you can listen for reasons other than “I’m likely wrong”. You can still listen out of a desire to understand where they’re coming from *regardless* of whether they turn out to be righter than you had known. You can refrain from objecting simply by realizing that they don’t (yet) want to hear what you think.
Maybe you *didn’t* understand what they actually meant. Maybe you did, and they just didn’t recognize how much freaking thought you put into making sure you’re right, and taking into account what other people think. I’ve had both happen. Listen because they don’t see eye to eye with you, and you want to figure out how to get there.
So how could I possibly give someone advice on how much apologizing will hurt? If they’re the type of person who takes embarrassment super seriously it will be totally different than if it’s no big deal.
It’s not so much “how much will this hurt” as it is “how much should this hurt”. In other words, “how much does it have to hurt before I reconsider”. In the running case, for example, you can’t know before their run if they’ll experience mild muscle soreness or if they’ll step on a nail. You want them to know that if it feels like they’ve stepped on a nail, that isn’t what you’re talking about, and they shouldn’t try to run through it.
There is a distinction between “this is how intense the sensations might be” and “this is the thing they signify, and how bad it is”. A lot of the subjective experience of “pain” has to do with the meaning attached to it, and the reaction to that meaning.
In jiu jitsu for example, beginners are often not taught heel hooks in part because the sensation of a knee ligament about to rupture doesn’t always stand out as a big deal, and so people will sometimes hurt themselves because they don’t notice the warning signs. At the same time, you can get people screaming in pain once their foot is turned the wrong way because all of a sudden the meaning has changed and they no longer feel “okay”. Other people can have the same thing happen to them and just kinda look at it like “oops, I screwed that up” because they simply aren’t overwhelmed by the idea that their ligaments just tore and their limb isn’t pointing the right way anymore.
When you’re talking to someone who is in pain (or needs to do something which will be painful), there are two things you want to communicate. One is that it’s okay, whatever the bad thing that happened may be, and the other is what the bad thing actually is. When you can do those two things, their entire experience can change dramatically.
The same principles apply to emotional concerns. For example, if someone is going to feel embarrassed by something to a degree which seems appropriate and okay, then all you are going to need to communicate is “Yes, this is going to be embarrassing. It’s okay”. If they’re going to be way more embarrassed than is called for (in your opinion), then you *also* want to be communicating that the damage isn’t as severe (or call for such an extreme aversion) as it seems. It’s the “this sensation means your knee is about to explode” training in reverse. In this case, you aren’t just saying “embarrassment is okay”, you’re also saying “it’s not even that embarrassing”. Be prepared for people to not just take your word on this, of course, but that is the point of contention.
One way to deal with it is to actually paint them a picture of what it’s like from your perspective so that they can see that it’s not that big a deal (if they find your story convincing). Another is to just show awareness that it seems super horrible and worth being embarrassed over, and that you don’t expect them to be convinced, but that you actually don’t think it’s that big a deal, for what it’s worth. If your opinion means something to them, this can still have a significant effect.
As I read it, the advice Alicorn is giving here relates to the part where you don’t want to miss the fact that someone might perceive an embarrassing event as way worse than you do (accurately or not) and then tell them “you should put up with it so that you can do X” without noticing that you might be asking them to endure a much bigger perceived (and maybe real) cost than you actually think it’s worth. For example, you might want to say “I’d probably apologize to them if I were in your shoes. And yeah, it’s kinda gonna suck. I wouldn’t be smiling about it for sure, and I might have a hard time being in a good mood for the rest of the evening, but it’s not like it’s worth traumatizing yourself over. If it feels like something you can’t handle, then that’s fine too. It’s not the end of the world if this person doesn’t get their apology”.
I agree with the idea that it’s important for people to understand their pain when they aren’t going to just flinch from it.
The framing you chose seems odd to me though. Instead of saying “if you’re going to suggest people do something painful, you should present them with a model/make sure they understand” or saying “if someone is suggesting you do something painful, make sure you have a model”, you say “*they* should present *you* with a model”. Are you intending to suggest to your audience that they should feel *entitled* to having a model accompany the initial request, above and beyond the fact that it’s important to understand?
So my question is, now that I am aware of the node, how do I unravel it? My understanding of counter-conditioning relies on specific, actionable behaviors. “Every time I want to eat ice cream I will think about my fitness goal, and instead work out. With enough time and careful planning, my desire for ice cream will be overpowered by my working-out habit and I will (virtually) no longer struggle with my desire to eat ice cream.” I’ve had success updating and adjusting other habits with this form, but I’m struggling to apply it to this problem. I fear it’s the nature of the problem itself. “Even my strongest counter-conditioning strategy is too weak to deal with how pathetic I am.”
It looks like the issue is that you want to use your “apply the solution” techniques before you know what the solution is.
If you could know that the fear is silly, then you simply apply the ice cream fix. “Every time I feel myself fearing this, I will remember all the reasons it’s not true and I will feel better”. The reason you haven’t been able to apply that technique seems to be that you’re not actually convinced that the fear is wrong. You say “I fear this” and you state the fear in quotes rather than outright saying it, but you also haven’t said “and I know this is wrong because X”. It sounds like that fear is still just sitting there unaddressed either way.
Generally, the first thing I ask myself in situations like these is “is it true?”. Are my strongest counter-conditioning strategies too weak to deal with how pathetic I am? Must I become useful in as many aspects as possible? Will people really not want anything to do with me if I don’t? Can my presence alone actually be enjoyed, or do I have to constantly raise my ability to help others before they can accept me?
These questions can be kinda hard to answer sometimes, because—especially when phrased this way—it can seem like things are “not allowed” to be one way, and when you’re not allowed to think “no”, then it’s really hard to verify when it’s “yes”. For example, maybe I really don’t want to accept (as more than just “a fear”) that I’m fundamentally not good enough for others’ acceptance unless I keep leveling up. In that case I’m likely to flinch away from looking at the answer to that question, and that makes it hard to really see and accept when others do accept me.
Rather than trying to force yourself to look at the answers anyway or force yourself to believe what you think is right, I’d focus on leaving yourself a line of retreat to help make it more okay if not everyone wants to spend time with you unless you get better at whatever it is. “Okay, so I don’t know whether it’s true or not, but if it were true that I need to level up before people could accept me, then what would I want to do?” “Get better”, probably. Okay, so get better. What else?
Maybe it gets a bit more complicated. “But I don’t know how to get better if even my strongest counter-conditioning strategies aren’t good enough for my situation?” Okay, so what’s the line of retreat there? If they aren’t, then what do you do? I dunno, maybe post on LW to see if anyone has any useful input.
Nate Soares has a really good post related to this kind of thing, and it is well worth reading. As he says, at some point you do have to bottom out and say “yeah, if I’m that far gone, then I fail and die”. Until that point though, there are a hell of a lot of things you can do to prepare for the various possibilities, and once you map them out the mapping can take the place of the anxieties.
And once the anxieties are gone, you’ll be back to knowing what you want to do, and just having to remember to do it.
Only if forced.
It feels like the same kind of reason that you need to be gentle with your body after running a marathon. I could try to be more specific about what might be going on that makes it difficult to keep it up, but the point is that it seems to be fundamentally difficult to remain unfatigued, and if you don’t slow down when fatigued you’re not going to move very well and are likely to break something.
Are you asking more “why can’t you mentally run unlimited marathons in a row without slowing down” or more “what damage do you risk doing when continuing through ‘mental fatigue’ that makes it something you have to heed?”?
Question for discussion: How would you suggest we use the idea of defense mechanisms in theory or practice?
It’s definitely important to keep from messing up big, but I think it’s often underestimated how much value there is to be had in noticing and changing defensive responses when you aren’t stressed and burned out. When you’re burned out, it’s often tough to figure out what you want to do instead because it means adding another problem to solve.
When you’re more or less “okay” though, defensive responses are so much easier to change because they’re likely not there out of necessity but rather just “hadn’t noticed yet”. If you look closely, they’re still all over the place and the value of non-defensive responses adds up.
The strategy I suggest is to notice whether you’re being defensive no matter what, ask yourself whether you’re “okay” and can afford to not be defensive, drop the defensiveness when you can afford it, and, when you feel like you can’t afford to do without defensiveness, be defensive without shame and with an active awareness of what you’re losing and what conditions would cause you to change tactics, highlighting it for what it is so that it can be contained. This way the easy changes become easier (because you know you always have the option of backing off), and failure becomes easier to recover from (because you’re not digging yourself deeper trying to avoid the inevitable, or failing to prepare for it properly).
Sometimes people are just dumb, and repeatedly do things that don’t seem to accomplish anything because they don’t know how to do anything better (because they don’t understand why they’re doing it in the first place). In other words, yes, there has to be a reason for them doing it, no, it shouldn’t be expected to be a good reason or to stand up to reflection.
In my personal experience, “I’m feeling cranky at innocent parties because of a rough day” feels like a response to having less cognitive resources to spend on whatever is being asked of me. It’s the kind of thing where if it’s not too bad, “hey, I’m really not up to this. I had a rough day and need some space or gentle handling” would feel like an attractive alternative. However, sometimes even coming up with that is difficult, so the temptation is to take the easy option of lashing out, which communicates the same thing (“either give me my space or walk on eggshells, because I don’t want to deal with more shit when my plate is already full”) in a much more hostile manner. “Is it worth the costs of being hostile?” is the relevant question, but people often run into limits of just being overwhelmed and not being able to actually compute all the answers before picking a choice and running with it.
Does that help answer your question, or am I trying to explain the wrong part?
Back when I was first getting into hypnosis, we talked about my experiments with hypnosis and all the terrifying possibilities that they implied. Even though I’d expect you’d have taken basically the same stance even without those conversations, I imagine it is still a significant contributing factor towards your take on hypnosis, and so I feel compelled to note that I no longer feel this way about it.
To be clear, I don’t think anything we talked about is “wrong”, and the fact that the uncertainty mostly resolved on the “less scary” side isn’t very reassuring. I still can’t think of any circumstance with any hypnotist that I would allow them to “hypnotize” me, in the central meaning of the word, and I do still think people are insufficiently afraid of being hypnotized. That stuff is all more or less the same.
The big difference is that now I recognize more of how “responding hypnotically” is a really important part of both learning and relating to people, and that it’s possible to do it without risking falling into any of the obvious traps that enable the scary bad possibilities. “Engage critical faculties, keep in mind evidence, develop models, etc”—yes. Do that, and also “Listen to the voice. Respond and receive. Be open to the update, etc”—to the extent that you can do that without losing track of the former (and work to increase this extent as much as you can).
I don’t even think it’s always crazy to trade off some control for quicker learning, so long as this decision itself is made very carefully with full input of critical faculties, you understand the potential traps, and the person guiding you really can be verified to be worthy of the required trust, etc.
However it’s not necessary either. I’ve gotten better at it myself without sacrificing my need for control, and I have a very “control freaky” friend who is also figuring out how to respond hypnotically without giving up any control, and has gotten some really cool results from it. It’s taken her four years to be able to accept half the suggestions a good hypnotic subject can do in five minutes, but on the upside since she is deciding for herself which things to accept hypnotically, not only does she not expose herself to unnecessary risk, she’s able to more efficiently spot what would be useful to her in a normal conversation without anyone having to lean on it as if it were an actual hypnotic suggestion.
I guess it’s kinda like exploring caves that have a lot of goodies. Just make sure you know your way out.
You’re doing a little sleight of hand by throwing all “avoiding pain” into one large bucket (and then deciding that you want to keep some of it), while analyzing “avoiding fear” as applied to one specific threat (and then deciding that it’s “irrational”). You could just as easily say “no, I don’t approve of feeling pain when I need to kick an important game-winning goal. It’s high order vs low order”, or “I absolutely approve of keeping my fears, because they also protect me from real threats”.
I don’t find the distinction to be useful, except in modeling how other people relate to their own impulses. Even when people tell me that their fear is “irrational” and that they want it thrown out, I treat it more like the way you refer to the pain aversion case, and it works.
For example, my friend was telling me about her “irrational fear of heights” that she wanted gone, so I had her climb up a rock wall over concrete and try to hold the frame that there was zero risk and that the fear was entirely irrational, while I kept pointing at all the failure modes and asking her to explain how she knew those wouldn’t happen. This forced her to take the fear seriously, and once she did she was able to integrate it into her decision-making process more efficiently, and was therefore less paralyzed by fear when rock climbing, without throwing out any of the valuable information the fear has. Similarly, there are times when you can look at the pain and decide that it’s not necessary, and watch it melt away into ticklish sensations or nothingness (and then kick the ball without wincing or anticipating badness).
In both cases, I’d look at it as a signal that there’s value unaccounted for in the decision you’re wanting to make, and once you properly account for it, all conflict and discomfort vanishes.