I think this assumes an arbitrarily high level of epistemic hygiene which would be possible for a fully rational agent but less so for an actual human. Choosing to pass by bullies daily who yell insults seems like a bad idea because they’ll keep trying different strategies until they hit a nerve—I would choose to not walk next to them daily as a precaution, even though on a good day I can reason with myself that they are compromised agents with little information about me, and that data coming from them is not true, etc. In some way, choosing between not updating on any information they give or not hearing any information that they give is just blocking the channel at different points, but I’d argue that not hearing is a much more solid defence, and saying “just don’t update on any information from them bro” is both unrealistic and slightly victim blame-y. So I agree with you in principle and with “spherical cows in vacuum” rules, but I think it’s just not reasonable to think like this.
> I think this assumes an arbitrarily high level of epistemic hygiene which would be possible for a fully rational agent but less so for an actual human.
I’m human. I exist. And these are the standards that I hold myself to, despite not being a “fully rational agent”.
Taken at face value, you’re using explicitly dehumanizing language which invalidates my experience, but I don’t feel like that has to be a problem, do you? I don’t see any reason it would be unrealistic for me to instead notice whether I exist, and whether I actually hold myself to the standards I think I do.
> In some way, choosing between not updating on any information they give or not hearing any information that they give is just blocking the channel at different points,
“Not updating” is exactly opposite of the ideal I’m highlighting. You don’t actually have to block the channel at all. That’s where it gets fun :)
> but I’d argue that not hearing is a much more solid defence,
The thing is, if you’re walking a different way in order to avoid hearing, you already know.
You already know they’re going to say it, and you already know that you can’t laugh it off as untrue. So it’s not only them you have to avoid hearing—you have to avoid hearing yourself. And that’s quite hard to do. It’s a very brittle strategy.
That’s on top of the fact that avoidance makes it hard to orient to things when they turn out to be real problems.
> and saying “just don’t update on any information from them bro” is both unrealistic and slightly victim blame-y. So I agree with you in principle and with “spherical cows in vacuum” rules, but I think it’s just not reasonable to think like this.
So there’s a big difference between “His problems come from his refusal to update” and “I should tell him that he should update”—let alone throwing the “just” word at them, and blaming them if they don’t. The latter does not follow from the former.
Because exhorting and blaming presuppose that they should “just update”, as in “ignore your reasons for not updating and update”. That’s foolish, because those reasons are often legitimate; if you can only justify updating on the object level by not updating on the meta level, that’s a bad bet.
It is very often the case that people choose not to update because the update, as currently interpreted, is a Trojan horse sneaking in harmfully false conclusions, such that “just updating” would leave them more wrong in the ways that matter. The post where I shared a transcript of helping someone through his chronic pain issue shows this in detail, if you’re interested. The short version is that his problem did stem from a refusal to listen to his pain and update, and taking this seriously despite the predictable ineffectiveness of “just update bro” was critical for helping him find a resolution. But if he had “just” updated without first disentangling the truth in “Yo, I’m fucked up” from the not-necessarily-true in “I’m forever a cripple now, and my life is basically over”, he would have been doing himself a disservice.
That said, don’t sleep on “Just update, bro” as an intervention. Stuff like that can be surprisingly effective in the cases where you actually know it to be safe and worthwhile and people are willing to set down their “I’ve tried, and I can’t!” and find out. There was a funny one a couple months ago where a friend was struggling with Raynaud’s syndrome and so was gonna ask me for help… only to realize that I was just gonna say “So stop?”, and that she would, so she just stopped. She literally went from “I’m trying and I can’t, empirically proven!” to “okay” and succeeding before she could even tell me she had a problem.
I agree that this sort of epistemic hygiene isn’t the norm yet. And that saying “just update” undersells just how hard it is to figure out what to do with the information we have. At the same time, this isn’t some unattainable ideal that can’t be realized, or that can’t be realized except by a few. The labeling of updating as “unrealistic for actual humans” does more to stop people from reaching it than the difficulty of the object-level work ever did.
I think if someone gave advice on weightlifting that included lifting 100kg deadlifts, it is fair to note that this is bad advice for the average person, regardless of whether there exists a group of people who can do it. I am happy for you that you hold yourself to that standard, and so do I, but when you give advice that mostly only you can adhere to, without a caveat of “btw, this requires a high level of epistemic hygiene”, I feel the need to add the caveat.
I agree that there is a class of problems solved by actually updating. I also think epistemic learned helplessness is a valid strategy when in an adversarial environment, which is likely to try to sneak Trojan horse beliefs (or just bad beliefs!) all the time. I am curious to hear if you think this sentence is true:
“Under repeated pressure from false information, you are better off hearing the information and choosing to reject it than you are if you close the channel.”
I can understand the arguments for this being true, including practicing your skills at rejecting lies, etc.; but my felt sense is that most people who aren’t close-to-perfect reasoners are actually worse off following this principle as a rule of thumb, because the exposure to untruth will harm them both in the short run and epistemically, making it harder for them to course-correct.
I also feel that if there were a recording of untruths that I could play as background noise, I would be better off just not playing it. Unless I am trying to understand the speaker or practice truth-detection or something, I am just better off not exposing my brain to a barrage of falsehoods lest one pass the scanner into my meat-brain unrejected. I can feel that when I spend a lot of time in an environment (e.g. among nationalists), for some time afterward my brain autocompletes things more in their direction and I have to consciously fight it. Fighting it is good, but I change who I am based on what I think, and keeping myself submerged in a hostile environment may end with me being someone I do not want to be on reflection.
> I think if someone gave advice on weightlifting that included lifting 100kg deadlifts, it is fair to note that this is bad advice for the average person, regardless of whether there exists a group of people who can do it.
A 100kg deadlift sounds like a fair analogy. I agree that there are a lot of people who shouldn’t be attempting 100kg deadlifts willy-nilly.
At the same time, if I were to visit my cousin’s strongman gym, where he congregates with like-minded individuals over a shared goal of becoming unusually strong… and they were talking about how deadlifting 100kg isn’t realistic for actual humans… then something has gone very wrong. That’s not actually a strongman gym. That’s a Planet Fitness LARPing as a strongman gym.
> I am curious to hear if you think this sentence is true:
> “Under repeated pressure from false information, you are better off hearing the information and choosing to reject it than you are if you close the channel.”
I’d say that’s generally false. Taken at face value, the former is just a waste of time, but as it’s likely to be interpreted, it’s a bit like asking “Should I shoot this apple off my wife’s head, or just not put it on her head in the first place?”. Most people are not capable of “choosing to reject” things that they describe as “false”—basically at all, and certainly not with any reliability whatsoever. The people who might actually be able to pull it off with some safety would be those who are very aware of the possibility of failure, and who would frame it as “Should I attempt this?” rather than “Should I do this?”. And then they wouldn’t ask, because the answer is just “no”.
> I am just better off not exposing my brain to a barrage of falsehoods lest one pass the scanner into my meat-brain unrejected. I can feel that when I spend a lot of time in an environment (e.g. among nationalists), for some time afterward my brain autocompletes things more in their direction and I have to consciously fight it. Fighting it is good, but I change who I am based on what I think, and keeping myself submerged in a hostile environment may end with me being someone I do not want to be on reflection.
I disagree with the idea that fighting it is good.
I’ll give a silly example to illustrate. Say you tell me I’m a bad person because I don’t have a belly button.
I could respond by exclaiming “Oh no! I’m a bad person!? My wife is going to leave me!”, but that’d be silly because I do have a belly button (I’m pretty sure).
I could respond with “You don’t know me! Opinion rejected!”, which would be less bad in this case.
But I could also respond “You think I don’t have a belly button??? Lolwut!?”
I don’t have to reject things that aren’t convincing, because they’re just not convincing. I can let them sit there, toothless, and nothing bad will happen. And one can do this with insults far less silly than that, too, so long as they’re actually knowably false.
The “tooth” that actually motivates a rejection is that I might find it convincing, in which case… maybe that’s because there’s some truth there? If I think I know that to not be the case, then why am I not convinced?
Responding “Opinion rejected, you don’t know me” is cognitively cheap, but that’s because it shirks the justification step. It’s the same move as flinching from uncomfortable truths. It might be right. We can come up with all sorts of arguments for why it’s right. But we can do that for false conclusions too: “I am merely practicing epistemic hygiene; you are rationalizing to shield your ego from uncomfortable truths.”
It’s a lot more work to develop answers so solid that we no longer feel insecure about these things, because there’s a lot of information to process before we can know where we’re going to end up if/when we engage openly with everything available. We have to actually process the information, and “fighting” what we deem “wrong beliefs” is delaying payment on the loan we’ve taken out.
> but my felt sense is that most people who aren’t close-to-perfect reasoners are actually worse off following this principle as a rule of thumb, because the exposure to untruth will harm them both in the short run and epistemically, making it harder for them to course-correct.
This reminds me of gun safety, where the stance you’re taking is analogous to “Guns are dangerous! If you act like there’s no danger, people are gonna shoot themselves!”.
Which is totally correct.
And at the same time, recognition of this completely changes the game. Because when you recognize “One wrong move, and I could die”, holding a gun in one’s hand is sobering. When you have that fear you make sure you don’t make one wrong move, and don’t act until you know for sure that what you’re about to do isn’t going to get someone hurt. And I think people generally recognize this, at least as it applies to updating—when people get insulted, they tend to be defensive. When they hear “wrong” politics, they usually make up reasons to not engage in good faith.
So yes, one wrong move is too many. Yes, you have to be careful. And, between ignoring the fear and avoiding the scary thing, there is a middle path where you slowly and carefully learn how to handle firearms without posing a risk to yourself or others. Do we know for 100% sure that the next move you make won’t harm someone? No? Okay, then let’s not make it until we do.
That’s the move that I think is almost always available, and seldom seen and taken. Let’s not act until we know how to act safely, but also, let’s act. Because refusing to pick up a gun isn’t safe when you need a gun to protect yourself. And you need to update in order to protect yourself from false beliefs, for damn sure. I generally trust people when they believe they’re not prepared to “just update”, and generally see a lack of recognition that they could actively prepare and then update.
Of course, there are times when we can’t manage that either. I’m coming off of one of those times now, actually, hence the delay in response. But this is actually pretty rare, I think, relative to “You could handle that risk if you made a point to, and doing it cautiously would be way safer than what you’re doing”. Should I run into something challenging before I can manage it, if I’m honest I think I have to admit “I’m just not that emotionally competent right now” instead of trying to implicitly claim “I’m in a position to know whether your opinion is worth rejecting”.
Once I get a little slack, it would need to be something I get back to, if I want to be unusually capable of updating towards true beliefs. Same if I found that talking to nationalists/flat-earthers/etc. had more teeth than I thought it ought to. Not as a practice in “rejecting lies”, but in building a foundation that is robust to the information they offer, because it’s already been integrated and accounted for. And in practicing the skill of getting there, by taking in new information without over- or under-correcting.