I also had to look it up and got interested in testing whether or how it could apply.
Here’s an explanation of Bulverism that suggests a concrete logical form of the fallacy:
1. Person 1 makes argument X.
2. Person 2 assumes person 1 must be wrong because of their Y (e.g. suspected motives, social identity, or other characteristic associated with their identity).
3. Therefore, argument X is flawed or not true.
Here’s a possible assignment for X and Y that tries to remain rather general:
X = Doom is plausible because …
Y = Trauma / Fear / Fixation
Why would that be a fallacy? Whether an argument is true or false depends on the structure and content of the argument, not on the source of the argument (genetic fallacy), and not on a property of the source that gets equated with being wrong (circular reasoning). Whether an argument for doom is true does not depend on who is arguing for it, and being traumatized does not automatically imply being wrong.
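To make the gap explicit, here is a minimal formal sketch of the inference pattern (my own notation, not taken from either post): write Asserts(S, X) for "source S asserts argument X" and Y(S) for "S has property Y". The schema then reads

\[
\frac{\mathrm{Asserts}(S, X) \qquad Y(S)}{\lnot X}
\]

Nothing in the premises connects Y(S) to the truth value of X, so the conclusion does not follow. The only way to make it follow is to add the premise that Y(S) implies ¬X, i.e. that having property Y implies being wrong about X, which is exactly the equation of the property with being wrong that is in dispute.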
Here’s another possible assignment for X and Y that tries to be more concrete. To be able to do so, “Person 1” is also replaced by more than one person, now called “Group 1”:
X (from AI 2027) = A takeover by an unaligned superintelligence by 2030 is plausible because …
Y (from the post) = “lots of very smart people have preverbal trauma” and “embed that pain such that it colors what reality even looks like at a fundamental level”, so “there’s something like a traumatized infant inside such people” and “its only way of ‘crying’ is to paint the subjective experience of world in the horror it experiences, and to use the built-up mental edifice it has access to in order to try to convey to others what its horror is like”.
From looking at this, I think the post suggests a slightly stronger logical form that extends step 3:
1. Group 1 makes argument X.
2. Person 2 assumes group 1 must be wrong because of their Y (e.g. suspected motives, social identity, or other characteristic associated with their identity).
3. Therefore, argument X is flawed or not true AND group 1 can’t evaluate its truth value because of their Y.
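In the same improvised notation as above (again my own, not from the post), the extended form adds a conjunct to the conclusion:

\[
\frac{\mathrm{Asserts}(G, X) \qquad Y(G)}{\lnot X \;\wedge\; \lnot \mathrm{CanEvaluate}(G, X)}
\]

The extra conjunct is what makes the form stronger: besides declaring X false, it also declares Group 1 incapable of evaluating X.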
From this, I think one can see that it is not only Bulverism that makes the model a bit suspect; two additional aspects come into play:
If Group 1 is the LessWrong community, then there are also people outside of it who predict that there’s an existential risk from AI and that timelines might be short. How could argument X from these people become wrong merely because Group 1 enters the stage, and would it still be true if Group 1 were doing something else?
I think it’s fair to say that the extended step 3 introduces an aspect that’s adjacent to gaslighting, i.e. manipulating someone into questioning their perception of reality. Even if it’s done in a well-meaning way, since some people’s perception of reality is indeed flawed and they might benefit from becoming aware of it, the way it is woven into the argument no longer seems that benign. I suppose that might be the source of some people getting annoyed by the post.