Find a nonrational aspect of your nature that is hindering you right now.
Determine privately to fix it.
Set a short deadline. Do the necessary work.
Write it up on LW at the deadline. Whether or not it worked.
I would add a step 1.5 or 2.5 -- define in advance what criteria you will use to determine “whether or not it worked”. Ideally, select criteria that are based on your automatic responses in the relevant context, rather than what you can do about the problem when you’re focused and paying attention.
Otherwise, you run an extremely high risk of false-positive “success” reports. (And I speak from experience.)
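As a concrete illustration, the "define your criteria in advance" step could be sketched as a tiny pre-registration record that gets written down before the experiment starts. Everything here (field names, the snooze-alarm example, the 14-day default) is invented for illustration:

```python
import json
from datetime import date, timedelta

def preregister(problem, intervention, criterion, days=14):
    """Freeze the success criterion before any results exist,
    so it can't quietly drift to fit whatever happens."""
    return {
        "problem": problem,
        "intervention": intervention,
        # Written down BEFORE the experiment starts (step 1.5 / 2.5):
        "criterion": criterion,
        "deadline": str(date.today() + timedelta(days=days)),
    }

plan = preregister(
    problem="hit snooze every morning",
    intervention="alarm moved across the room",
    criterion="up on the first alarm on at least 5 of the final 7 days, "
              "without having thought about this experiment the night before",
)
print(json.dumps(plan, indent=2))
```

Note that the example criterion is phrased in terms of the automatic response ("without having thought about this experiment") rather than what you can do when you're paying attention.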
Yes, totally agreed. Be precise: define a goal that’s both reachable and testable.
“Fix the automatic response” is an interesting criterion. Am I right you’re saying “it doesn’t count if you can only do it with a special effort?” That’s an interestingly subtle point. The improvement has to be pervasive in your life. It agrees with my preference for a private intent—you can’t always rely on a gun to your head to make you work at peak ability.
But contrariwise, it’s true that the way you learn stuff in general is to do it many many times deliberately, and it gets cached and you can do it automatically. So fixing an improvement in place at the level of automatic response could take a long while, longer than a one-shot quick commitment.
I wonder what would be the best criterion to capture the ideal of ongoing use, even if it’s not yet automatic?
Am I right you’re saying “it doesn’t count if you can only do it with a special effort?”
Doing it with effort is fine; needing to make an effort to do (or remember to do) it in the first place is not, or you’re going to forget as soon as it drifts out of your sphere of attention/interest.
But contrariwise, it’s true that the way you learn stuff in general is to do it many many times deliberately, and it gets cached and you can do it automatically.
How many times do you have to practice non-belief in Santa Claus, before you stop trying to stay up and watch the chimney?
Some learning works better fast than slow. Case in point, btw, the book “The Four-Day Win” presents a very strong case that it only takes us four days of something to get used to it and treat it as habitual, not the 21-30 days propounded by most self-help material.
The catch of course is that it has to be something that you’re not demotivated by, and doesn’t conflict with other goals of yours. But then, if it has one of those problems, 21-30 days won’t make it a “real” habit, either. In effect, 21-30 days is just a survivor-bias test: if you manage to make it that long without giving up, you probably didn’t have any conflicts or demotivations, so you “succeeded”. Woohoo. And if you didn’t make it that far, then obviously you didn’t do it long enough to turn it into a habit, so it’s your fault.
That’s why I think automation improvement via extended-duration repetition is a joke. If your automatic response to something is negative, doing it over and over will only change the response if the response had something to do with reality in the first place.
Whereas if, for example, the real reason you don’t exercise is that you believe only shallow people do it, then actually exercising won’t change that belief, no matter how long you do it!
At best, it will convince you that you’re shallow. Or, more likely, you will make a big deal to yourself about how much of a struggle it is and how much you hate it, because you need to justify to yourself that doing it doesn’t make you shallow.
Bleah. Anyway, this entire rat’s nest of self-confusion is why I emphasize testing automatic response as a success criterion: automatic responses are repeatable, and they are the result you’re really after in the first place! (Who wants to have MORE things to personally, consciously, monitor and control in their life?)
Good call! Though I’d say that for those who know what they’re doing, that’s part of step 1 and/or 2. But actually writing it down might be essential.
A tangential note: I am currently so biased against anecdotal evidence that I read “And I speak from experience” as making this comment less convincing. Surely that’s overcorrecting?
I am currently so biased against anecdotal evidence
You mean, as opposed to personal experience? ISTM that all evidence is either personal experience or anecdotal, i.e., something that someone else told you about.
I read “And I speak from experience” as making this comment less convincing. Surely that’s overcorrecting?
I don’t know. What’s your experience with that? ;-)
More seriously: I wonder, does that mean you now believe that you don’t run a risk of false-positives if you don’t define your criteria in advance? How does that match your own experience?
That is, does someone else having an experience actually make your own experience less valid? (It wouldn’t surprise me, btw, I’ve believed much weirder things with worse side-effects before.)
No, as opposed to empirical data with some of the usual bias-correction measures like proper sampling.
That is, does someone else having an experience actually make your own experience less valid?
No, just internally flagged the comment as “not worthwhile” because it relied upon anecdotes where clearly data would be more appropriate. But a comment with no such mention should not be more valuable, so this seems to be an overcorrection.
No, as opposed to empirical data with some of the usual bias-correction measures like proper sampling.
My point is that so-called “empirical data” is also either anecdotal or experiential. If you didn’t collect it yourself, it’s still an anecdote. (Not that all anecdotes come from equally reliable sources, of course—just pointing out that it’s a non-useful/false dichotomy to divide the world into “anecdotes” and “data”.)
No, just internally flagged the comment as “not worthwhile” because it relied upon anecdotes where clearly data would be more appropriate.
WTF? Seriously, how in the seven hells is an experience not data?
It’s a common-enough and useful distinction (I might grant that it’s something of a false dichotomy, but I think that’s beside the point). Just to put a fine point on it:
“My doctor told me I had two weeks to live, so my church prayed for me to get better, and my cancer went away!”
Tells you effectively nothing about the efficacy of prayer. Multiplying this into a thousand anecdotes also tells you effectively nothing about the efficacy of prayer. By contrast, putting together a good study of prayer’s efficacy with the appropriate controls tells you quickly that prayer isn’t better than chance.
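A toy simulation makes the point sharp: anecdotes are selected on the outcome, so they report a 100% recovery rate no matter what the true rate is, while a controlled comparison sees only noise. The remission rate and sample sizes below are invented:

```python
import random

random.seed(1)
BASE_RATE = 0.05  # invented spontaneous-remission rate, identical in both arms

# By construction, prayer does nothing: both arms share the same base rate.
prayer_arm = [random.random() < BASE_RATE for _ in range(5000)]
control_arm = [random.random() < BASE_RATE for _ in range(5000)]

# The anecdote view selects on the outcome: every story is a recovery story,
# so the "observed" recovery rate is 100% regardless of the true rate.
anecdotes = [outcome for outcome in prayer_arm if outcome]
print(f"recovery rate among anecdotes: {sum(anecdotes) / len(anecdotes):.0%}")

# The controlled view compares rates across arms and finds no real difference.
p_prayer = sum(prayer_arm) / len(prayer_arm)
p_control = sum(control_arm) / len(control_arm)
print(f"prayer arm: {p_prayer:.1%}  control arm: {p_control:.1%}")
```

Multiplying the anecdotes by a thousand just repeats the same selection effect; only the between-arm comparison carries information about efficacy.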
Heh, this reminds me of some study I read about here (or on OB), about how when you give people a fact, they tend to first accept it before evaluating whether it’s true, and that if you distract them before they have time to evaluate it, they’ll “remember it as true”.
Seems like something similar is going on here, where by default you don’t check how reliable a piece of information is, unless you’re prompted by something that makes you think about it.
Asking for good bias-correction is an absurd standard of evidence. You don’t ask that of most information you use. Moreover, I bet you’re very biased on when you think to apply this standard.
It’s not entirely clear what pjeby means. If it’s just self-experimentation, it’s basically a single anecdote and not terribly useful. But I assume that he’s talking about his clients, still a biased sample, but as good as it’s going to get.
It’s not entirely clear what pjeby means. If it’s just self-experimentation, it’s basically a single anecdote and not terribly useful.
The supreme irony of this train of thought is that my original suggestion was for people to apply good evidentiary standards to their self-experiments. So we are now debating whether I have a good standard of evidence for recommending the use of good standards of evidence. ;-)
But I assume that he’s talking about his clients, still a biased sample, but as good as it’s going to get.
Sort of. I noticed that if I didn’t define what I was testing before I tested it, it was easy to end up thinking I’d changed when I hadn’t. And I tend to notice that when my clients aren’t moving forward in their personal change efforts, it’s usually because they’re straying off-process, most commonly in not defining what they are changing and sticking to that definition until they produce a result. (As opposed to deciding midstream that “something else” is the problem.)
“not terribly useful” was wrong. It should have been something more like “not generalizable to other people.” We certainly agreed with your standard of evidence, but there’s a big gap between a failure mode likely enough to be worth adding steps to fix and an “extremely high risk.”
This post makes it sound like there’s a lot of room for confirmation bias, but that doesn’t bother me so much; in particular, it is a lot better than if it were just you.
Interesting bug! I probably have it too …