This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on EAF stem from people having an implicit assumption that the conversation here is about “should [any particular person in this article] be socially punished?”. In my preferred world, before you get to that phase there should be at least some period focused on “information aggregation and Original Seeing.”
It’s pretty tricky, since in the default world, “social punishment?” is indeed the conversation people jump to. And in practice, it’s hard to keep words focused purely on epistemic evaluation without getting into judgment, or without speech acts becoming “moves” in a social conflict.
But, I think it’s useful to at least (individually) inhabit the frame of “what is true, here?” without asking questions like “what do those truths imply?”.
With that in mind, some generally useful epistemic advice that I think is relevant here:
Try to have Multiple Hypotheses
It’s useful to have at least two, and preferably three, hypotheses for what’s going on in cases like this. (Or, generally whenever you’re faced with a confusing situation where you’re not sure what’s true). If you only have one hypothesis, you may be tempted to shoehorn evidence into being evidence for/against that hypothesis, and you may be anchored on it.
If you have at least two hypotheses (and, like, “real ones”, that both seem plausible to you), I find it easier to take in new bits of data, and then ask “okay, how would this fit into two different plausible scenarios?”, which activates my “actually check” process.
I think three hypotheses are better than two, because with two you can still end up in a situation where all the evidence weighs in along a one-dimensional spectrum. Three hypotheses a) help you do ‘triangulation’, and b) help remind you to actually ask “what frame should I be using here? what additional hypotheses might I not have thought of yet?”
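To make the “actually check” move concrete: holding multiple live hypotheses and asking how a new datum fits each of them is basically a Bayesian update over a hypothesis set. Here’s a minimal sketch; all the hypothesis labels, priors, and likelihoods are made-up numbers purely for illustration, not drawn from any real case.

```python
# Toy Bayesian update over three live hypotheses.
# Every number here is an illustrative placeholder.
priors = {
    "A: one party at fault": 0.4,
    "B: both at fault": 0.3,
    "C: miscommunication": 0.3,
}

# P(evidence | hypothesis) for one new piece of evidence.
# The key move: ask how well the datum fits EACH scenario,
# rather than checking it only against a single favored hypothesis.
likelihoods = {
    "A: one party at fault": 0.2,
    "B: both at fault": 0.5,
    "C: miscommunication": 0.6,
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.2f}")
```

Note that the same evidence shifts probability mass between hypotheses even though it “fits” all three to some degree — which is exactly the texture that’s lost if you only ever score evidence for or against one hypothesis.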
Multiple things can be going on at once
If two people have a conflict, it could be the case that one person is at-fault, or both people are at-fault, or neither (i.e. it was a miscommunication or something).
If one person does an action, it could be true, simultaneously, that:
They are somewhat motivated by [Virtuous Motive A]
They are somewhat motivated by [Suspicious Motive B]
They are motivated by [Random Innocuous Motive C]
I once was arguing with someone, and they said “your body posture tells me you aren’t even trying to listen to me or reason correctly, you’re just trying to do a status monkey smackdown and put me in my place.” And, I was like “what? No, I have good introspective access and I just checked whether I’m trying to make a reasoned argument. I can tell the difference between doing The Social Monkey thing and the “actually figure out the truth” thing.”
What I later realized is that I was, like, 65% motivated by “actually wanna figure out the truth”, and like 25% motivated by “socially punish this person” (which was a slightly different flavor of “socially punish” than, say, when I’m having a really tribally motivated facebook fight, so I didn’t recognize it as easily).
Original Seeing vs Hypothesis Evaluation vs Judgment
OODA Loops include four steps: Observe, Orient, Decide, Act
Often people skip over steps. They think they’ve already observed enough and don’t bother looking for new observations, or it doesn’t even occur to them to do that explicitly. (I’ve noticed that I often skip to the Orient step, where I figure out “how do I organize my information? what sort of decision am I about to make?”, and don’t actually do the Observe step, where I’m purely focused on gaining raw data.)
When you’ve already decided on a schema-for-thinking-about-a-problem, you’re more likely to take new info that comes in and put it in a bucket you think you already understand.
Original Seeing is different from “organizing information”.
They are both different from “evaluating which hypothesis is true”
They are both different from “deciding what to do, given Hypothesis A is true”
Which is in turn different from “actually taking actions, given that you’ve decided what to do.”
I have a sort of idealistic dream that someday, a healthy rationalist/EA community could collectively be capable of raising hypotheses without people anchoring on them, and of sharing information in a way you can robustly trust won’t get automatically leveraged into a conflict/political move. I don’t think we’re close enough to that world to advocate for it in-the-moment, but I do think it’s still good practice for people individually to spend at least some of their time in each node of the OODA loop, and to track which node they’re currently focusing on.
This section is begging for a reference to Duncan’s post on Split and Commit.
IIRC Duncan has also written lots of other stuff on topics like how to assess accusations, community health, etc. Though I’m somewhat skeptical about the extent to which his recommendations can be implemented by fallible humans with limited time and energy.
I agree, there is the possibility that both sides are somewhat unscrupulous and not entirely forthright.
At best it could be because the environment/stress/etc. is causing them to behave like this, at worst it’s because they have delusions of grandeur without the substance to back that up.
I’m going to have to work the phrase “delusions of grandeur without the substance to back that up” into my repertoire. Sort of like Churchill’s comment about Clement Attlee: “A modest man with much to be modest about.”