“Bet Your Beliefs” as an epistemic mode-switch
I was just watching this infamous interview w/ Patrick Moore where he seems to be doing some sort of epistemic mode-switch (the “weed killer” interview).[0]
Moore appears to go from “it’s safe to drink a cup of glyphosate” to, when offered the chance to actually do so, “of course not / I’m not stupid”.
This switching between what seems to be a tribal-flavored belief (glyphosate is safe) and a self-protecting belief (glyphosate is dangerous) is what I’d like to call an epistemic mode-switch. In particular, it’s a contradiction in beliefs that’s really only obvious if you can get both modes near each other (in time/space/whatever).
In the rationality community, it seems good to:
1. Admit this is just a normal part of human reasoning. We probably all do this to some degree sometimes, and
2. Recognize that there are ways we can confront it by getting the modes near each other.
I think one of the things that’s going on here is that a short-term self-preservation incentive is a powerful tool for forcing yourself to be clear about your beliefs. In particular, it seems good at filtering/attenuating beliefs that are just tribal signaling.
This suggests that if you can get people to use this kind of short-term self-preservation incentive, you can probably get them to report more calibrated and consistent beliefs.
I think this is one of the better functions of “bet your beliefs”. Summarizing what I understand “bet your beliefs” to be: there is a norm in the rationality community of challenging people’s beliefs by asking them to propose bets on them (or by constructing and offering them such bets) -- and whether they take or refuse the bet is treated as evidence about what they actually believe.
Previously I’ve mostly just thought of this as a way of strengthening the evidence that someone believes what they say they do. If their saying “I believe X” is a small amount of evidence, then their accepting a bet on X is more evidence.
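One way to make “more evidence” precise (this framing is mine, not something from the interview) is as a likelihood ratio: the bet is stronger evidence exactly when accepting it discriminates better between genuine believers and mere signalers. Roughly:

$$
\frac{P(\text{asserts } X \mid \text{believes } X)}{P(\text{asserts } X \mid \lnot\text{believes } X)}
\;<\;
\frac{P(\text{accepts a costly bet on } X \mid \text{believes } X)}{P(\text{accepts a costly bet on } X \mid \lnot\text{believes } X)}
$$

Cheap talk is easy to emit whether or not you believe X, so the left-hand ratio stays close to 1; staking something real on X is much less attractive when you privately doubt it, which is what pushes the right-hand ratio up.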
However, now I see that there’s potentially another factor at play. By forcing them to consider short-term losses, you can induce an epistemic mode-switch away from signaling beliefs towards self-preservation beliefs.
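As a toy illustration of the short-term-loss mechanism (the numbers and the function here are hypothetical, just for illustration), here’s a minimal sketch of how refusing a bet that should look attractive at your stated credence reveals the mode-switch:

```python
def expected_value(credence, stake, payout):
    """Expected value to the bettor of paying `stake` up front
    for a bet that pays `payout` if the claim turns out true."""
    return credence * payout - stake

# Someone signals "I'm 90% confident that X."
stated_credence = 0.90

# At that credence, paying $70 for a bet that pays $100 if X is clearly worth it:
print(expected_value(stated_credence, stake=70, payout=100))  # +20.0

# But the self-preserving credence they actually act on may be much lower,
# at which point the same bet reads as a short-term loss:
print(expected_value(0.40, stake=70, payout=100))             # -30.0

# Refusing the bet is the behavioral tell that the two modes disagree.
```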
It’s possible this is already what people thought “bet your beliefs” was doing, and I’m just late to the party.
Caveat: the rest of this is just pontification.
It seems like a lot of the world has serious epistemic problems. Not only are there a lot of obviously bad and wrong beliefs, they also seem to be durable and robust to evidence that they’re bad and wrong.
Maybe this suggests a particular kind of remedy for epistemic problems, or at the very least an answer to “how can I get people to consider changing their mind”: set up situations that trigger short-term self-preservation thinking.
[0] Context from 33:50 here:
A failure mode for “betting your beliefs” is developing an urge to reframe your hypotheses as beliefs, which harms the distinction. It’s not always easy/possible/useful to check hypotheses for relevance to reality, at least until much later in their development, so it’s important to protect them from being burdened with this inconvenience. It’s only when a hypothesis is ready for testing (which is often immediately), or wants to be promoted to a belief (probably as an element of an ensemble), that making predictions becomes appropriate.
Oh yeah like +100% this.
Creating an environment where we can all cultivate our weird hunches and proto-beliefs while sharing information and experience would be amazing.
I think things like “Scout Mindset” and high baselines of psychological safety (and maybe some of the other phenomenological stuff) help as well.
If we have the option to create these environments instead, I think we should take that option.
If we don’t have that option (and the environment has a really bad epistemic baseline) -- I think the “bet your beliefs” norm does good.
There seem to be two different concepts being conflated here. One is “it will be extremely unlikely to cause permanent injury”, while the other is “it will be extremely unlikely to have any unpleasant effects whatsoever”. I have quite a few personal experiences with things that are the first but absolutely not the second, and would fairly strenuously avoid going through them again without extremely good reasons.
I’m sure you can think of quite a few yourself.