the whole “trauma” thing is too much like Bulverism for my taste.
I had to look up “Bulverism”. I think you’re saying I’m maybe bringing in a frame that automatically makes people who disagree with me wrong because they’re subject to the thing I’m talking about. Yes?
I’m pretty sure I don’t mean that. My hope is that we can name some reasonably clear signs of what world we’d expect to see if I’m loosely right vs. simply wrong, and then check back later to see which world we turn out to be in.
It might matter to point out when people’s reactions purport to be inconsistent with the trauma model but are in fact consistent with it. But that’s not to make the model unfalsifiable. Quite the opposite: it’s to make its falsification effective instead of illusory. At a skim, I think that’s quite different from Bulverism.
Let me know if I’ve missed your point.
I also had to look it up and got interested in testing whether or how it could apply.
Here’s an explanation of Bulverism that suggests a concrete logical form of the fallacy:
1. Person 1 makes argument X.
2. Person 2 assumes Person 1 must be wrong because of their Y (e.g. suspected motives, social identity, or other characteristic associated with their identity).
3. Therefore, argument X is flawed or not true.
Here’s a possible assignment for X and Y that tries to remain rather general:
X = Doom is plausible because …
Y = Trauma / Fear / Fixation
Why would that be a fallacy? Whether an argument is true or false depends on the structure and content of the argument, not on the source of the argument (genetic fallacy), and not on a property of the source that gets equated with being wrong (circular reasoning). Whether an argument for doom is true does not depend on who is arguing for it, and being traumatized does not automatically imply being wrong.
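As a minimal sketch of that schema in my own shorthand (the predicates Asserts and Y are my notation, not anything from the post or the explanation above):

\[
\mathrm{Asserts}(p_1, X),\;\; Y(p_1) \;\;\vdash\;\; \neg X \qquad \text{(invalid)}
\]

Neither premise says anything about the content of X, so the conclusion does not follow; that gap is exactly what the genetic-fallacy label points at.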
Here’s another possible assignment for X and Y that tries to be more concrete. To make that possible, “Person 1” is also replaced by more than one person, now called “Group 1”:
X (from AI 2027) = A takeover by an unaligned superintelligence by 2030 is plausible because …
Y (from the post) = “lots of very smart people have preverbal trauma” and “embed that pain such that it colors what reality even looks like at a fundamental level”, so “there’s something like a traumatized infant inside such people” and “its only way of ‘crying’ is to paint the subjective experience of world in the horror it experiences, and to use the built-up mental edifice it has access to in order to try to convey to others what its horror is like”.
From looking at this, I think the post suggests a slightly stronger logical form that extends step 3:
1. Group 1 makes argument X.
2. Person 2 assumes Group 1 must be wrong because of their Y (e.g. suspected motives, social identity, or other characteristic associated with their identity).
3. Therefore, argument X is flawed or not true, AND Group 1 can’t evaluate its truth value because of their Y.
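In the same ad-hoc shorthand as above (again my rendering, not the post’s), the extension adds a second conclusion about Group 1’s ability to judge X:

\[
\mathrm{Asserts}(G_1, X),\;\; Y(G_1) \;\;\vdash\;\; \neg X \,\wedge\, \neg\mathrm{CanEvaluate}(G_1, X) \qquad \text{(invalid)}
\]

The added conjunct is what makes this form stronger than plain Bulverism, and it is what the two aspects below hinge on.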
From this, I think one can see that it’s not only Bulverism that makes the model a bit suspicious; two additional aspects come into play:
If Group 1 is the LessWrong community, then there are also people outside of it who predict that there’s an existential risk from AI and that timelines might be short. How could argument X from these people become wrong just because Group 1 enters the stage, and would it still be true if Group 1 were doing something else?
I think it’s fair to say that step 3 introduces an aspect that’s adjacent to gaslighting, i.e. manipulating someone into questioning their perception of reality. Even if it’s done in a well-meaning way, since some people’s perception of reality is indeed flawed and they might benefit from becoming aware of it, the way it is woven into the argument no longer seems that benign. I suppose that might be the source of some people getting annoyed by the post.