a thing i’ve noticed rat/autistic people do (including myself): one very easy way to trick our own calibration sensors is to add a bunch of caveats or considerations that make it feel like we’ve modeled all the uncertainty (or at least, more than other people who haven’t). so one thing i see a lot is that people are self-aware that they have limitations, but then over-update on how much this awareness makes them calibrated. one telltale hint that i’m doing this myself is if i catch myself saying something because i want to demo my rigor and prove that i’ve considered some caveat that one might think i forgot to consider
i’ve heard others make a similar critique about this as a communication style which can mislead non-rats who are not familiar with the style, but i’m making a different claim here that one can trick oneself.
it seems that one often believes being self-aware of a certain limitation is enough to correct for it sufficiently to at least be calibrated about how limited one is. a concrete example: part of being socially incompetent is not just being bad at taking social actions, but being bad at detecting social feedback on those actions. of course, many people are not even aware of the latter. but many are aware of and acknowledge the latter, and then act as if, because they’ve acknowledged a potential failure mode and will try to be careful to avoid it, they are much less susceptible to the failure mode than other people in an otherwise similar reference class.
one variant of this deals with hypotheticals—because hypotheticals often can/will never be evaluated, this allows one to get the feeling that one is being epistemically virtuous and making falsifiable predictions, without ever actually getting falsified. for example, a statement “if X had happened, then i bet we would see Y now” has prediction vibes but is not actually a prediction. this is especially pernicious when one fails but says “i failed but i was close, so i should still update positively on what i did.” while not always a bad idea, there’s a bias-variance tradeoff here, where doing this more often reduces variance but increases bias. i find that cases where i thought i was close but later realized i was actually far off the mark are sufficiently common that this isn’t an imaginary concern.
another variant is brainworms/ideology: i think we genuinely are less susceptible to some forms of brainworms, and are also much better at understanding the mechanisms behind brainworms and identifying them in others, so we over-update on our own insusceptibility to brainworms in general (despite evidence from the reference class of rationalists that suggests at least as much obvious cult-forming as genpop, if not more). in reality, we are just susceptible to different types of brainworms than normies are.
another variant is introspective ability. i think we are probably better in some sense at self-introspection, in the sense that we are better at noticing certain kinds of patterns in our own behavior and developing models for those patterns. but i’ve also come to believe that this kind of modeling has huge blind spots, and leads many to believe they have a much greater degree of mastery over their own minds than they actually do. however, feeling aware of the possibility of having blind spots, and of what they often look like in other people, can lead to overconfidence about whether one would notice those blind spots in oneself.
i feel like the main way i notice these things is by noticing them in other people over long periods of knowing them, and then noticing that my actions are actually deeply analogous to theirs in some way. it also helps to notice non-rats not falling into the same pitfalls sometimes.
i’m not sure how to fix this. merely being aware of it probably is not sufficient. probably the solution is not to stop thinking about one’s own limitations, but rather to add some additional cogtech on top. my guess is there is probably valuable memetic technology out there that especially wise people use but which most people, rat or not, don’t use. also, difficult-to-fake feedback from reality seems important.
a related thing that I will mention here so that I don’t have to write a separate post about it:
although updating on evidence is a good thing, it is bad to think “I have updated on evidence, therefore I am now more right than others”. maybe you just had to update more than others because you started from an especially stupid prior, so the fact that you updated more than others doesn’t mean that you are now closer to the truth.
as a silly example, imagine a group of people believing that 2+2=4, and an unlucky guy who believes that 2+2=7. after being exposed to lots of evidence, the latter updates to believing that 2+2=5, because 7 is obviously too much.
now it is tempting for the unlucky guy to conclude “I did a lot of thinking about math, and I have changed my mind as a result. those other guys, they haven’t changed their minds at all, they are just stuck with their priors. they should update too, and then we can all arrive at the correct conclusion that 2+2=5”.
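to make the “updating more ≠ closer to the truth” point concrete, here’s a minimal numerical sketch (my own illustration; everything beyond the truth being 4 is an assumed setup, not part of the original example): two agents with gaussian priors see the same noisy observation and do a standard conjugate-normal update. the agent who started far away moves much more, but still ends up farther from the truth.

```python
# a minimal sketch (assumed setup, not from the post): two agents with gaussian
# priors over an unknown quantity whose true value is 4. agent A starts near the
# truth, agent B starts far from it. both see the same noisy evidence and do a
# standard conjugate-normal update. B's belief moves much more than A's, yet B
# still ends up farther from the truth.

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance for a gaussian prior with a gaussian likelihood."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

truth = 4.0
obs, obs_var = 4.2, 1.0  # one noisy observation of the truth

a_mean, _ = normal_update(prior_mean=4.0, prior_var=0.5, obs=obs, obs_var=obs_var)
b_mean, _ = normal_update(prior_mean=7.0, prior_var=0.5, obs=obs, obs_var=obs_var)

print(f"A moved {abs(a_mean - 4.0):.2f}, now {abs(a_mean - truth):.2f} from the truth")
print(f"B moved {abs(b_mean - 7.0):.2f}, now {abs(b_mean - truth):.2f} from the truth")
# B updates roughly 14x as much as A, but remains much farther from 4.
```

the size of the update mostly tells you how wrong the prior was, not how right the posterior is.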
This might be more about miscalibration in the perceived relevance of technical exercises inspired by some question. An exercise that juggles details but is mostly irrelevant to the question in a direct sense can still be useful, worth doing and even sharing, but mostly for improving model-building intuition and developing good framings in the long term, rather than for answering the question that inspired it, especially at a technical level.
So an obvious mistake would be to treat such an exercise as evidence that the person doing/sharing it considers it directly relevant for answering the question at a technical level. The person themselves can make this mistake, but there is also an echo version: expecting others to make the mistake about them can leave them behaving as if they had made it too. Someone does the exercises for the right reasons, implicitly expects others to think that they think the exercises are directly relevant, and then implicitly concludes that the exercises actually are relevant, via this invalid echo argument.
one very easy way to trick our own calibration sensors is to add a bunch of caveats or considerations that make it feel like we’ve modeled all the uncertainty (or at least, more than other people who haven’t). so one thing i see a lot is that people are self-aware that they have limitations, but then over-update on how much this awareness makes them calibrated
Agree, and well put. I think the language of “my best guess” “it’s plausible that” etc. can be a bit thought-numbing for this and other reasons. It can function as plastic bubble wrap around the true shape of your beliefs, preventing their sharp corners from coming into contact with reality. Thoughts coming into contact with reality is good, so sometimes I try to deliberately strip away my precious caveats when I talk.
I most often do this when writing or speaking to think, not to communicate, since by doing it you pay the cost of not conveying your true confidence level, which can of course be bad.
it seems that one often believes being self aware of a certain limitation is enough to correct for it sufficiently to at least be calibrated about how limited one is...and then act as if because they’ve acknowledged a potential failure mode and will try to be careful towards avoiding it, that they are much less susceptible to the failure mode than other people in an otherwise similar reference class.
I don’t follow. If I know I don’t “handle” spicy food well, I avoid eating it; I’m not then acting as if I’m less susceptible to spicy food just because I’ve acknowledged it. Or are you talking about the proverbial example of someone who drives after getting tipsy, but believes that because they’re more “careful” they’re safe enough?
As for brainworms: I’m not familiar with that term, but I can guess it’s some kind of faddish toxic behaviour (I’m struggling to think of a concrete example; perhaps the use of bromides and platitudes in conversation like “keep your chin up” in lieu of tailored comfort and discourse?). What might be an example of a rat brainworm and an analogous normie brainworm?
I think thinking as a self-reflective process can be quite limited. It operates at a level of coarse-graining that is higher (at least for me) than something like feeling or pre-cognitive intuitions and tendencies.
So I’ll say the boring thing, which is that meditation could basically be that cogtech: it increases the precision of your self-reflective microscope and lets you see things that the coarser graining of self-reflective thought doesn’t. Now, I’m sure one still falls for a bunch of failure modes there as well, since it can be very hard to see what is wrong with a system from within the system itself. It’s just that the mistakes become less coarse-grained and come from another perspective.
In my own experience there are different states of being: one is the thinking perspective, another is a perspective of non-thinking awareness. The thinking perspective thinks it’s quite smart and takes things very seriously; the aware perspective sees this and finds it quite endearing, and the thinking part then takes that in and reflects on how it is ironically ignorant. The thinking part tracks externalities, and through the aware part it is able to drop them, because it finds itself ignorant. I used to have only the thinking part, and that created lots of loops, cognitive strain, and suffering, because I got stuck in certain beliefs.
I think this deep belief of knowing that I’m very cognitively limited in terms of my perspective and frame allows me to hold beliefs about the world and myself a lot more loosely than I was able to before. Life is a lot more vibrant and relaxing as a consequence, since it is a lot easier to be wrong, and it is actually a delight to be proven wrong. I would have said this in the past too, but I wouldn’t have emotionally felt it; as I heard someone say, “Meditation is the practice of taking what you think into what you feel.”