Another common tactic we see here is to say that [X] is clearly not true and you are being silly, and then also say ‘[X] is a distraction’ or ‘whether or not [X], the debate over [X] is a distraction’ and so on. If the contradiction is pointed out, it is ignored, the same way as the earlier jump from ‘some sources argue [~X]’ to ‘obviously [~X].’
In a framing-aware setting, this tactic can be healthy. A framing specifies which concerns are relevant/salient, and conversely a collection of relevant/salient concerns can define a framing. A framing is a way of thinking about situations, and the same situation can be understood under multiple different framings: some are a better fit for the situation than others, and some are a better way of thinking about it for the purposes of some overarching goal.
So what the quote could be saying is that there is some framing [f] that doesn’t see [X] as a relevant/salient consideration. In particular, this suggests that any framing [g] that does see [X] as relevant/salient is significantly different from [f], and is probably suboptimal for whatever purposes [f] is good for. So [X] is a distraction in the sense that even thinking about it at all directs you into the framing [g] and away from the framing [f], and that holds whether [X] or [~X] is true, because this is an observation about the relevance of [X], not about its truth.
In this context, a proponent of [f] who also says that [~X], or that obviously [~X], might be making a claim about [f]’s stance on [X]. This is a stance that [f] takes half-heartedly, without considering [X] relevant/salient, and produces in a necessarily somewhat uninformed manner, perhaps with mild annoyance at being asked about something irrelevant to [f]; answering at all also feeds competing framings that are not [f], and that might do worse at [f]’s purpose.
So the error is reporting [f]’s stance on [X], which is out of scope for [f], with an implied high confidence, as a claim about reality. It could well be that [f] does claim [~X] with high confidence (though that’s already suspicious, since [X] is not relevant to the way [f] thinks about the world), but this should additionally carry huge model-uncertainty error bars, given that [f] might be producing bad models for considering [X]; by the time you convert credence from [f]’s internal poorly informed impression into the language of someone who considers [X] relevant, there shouldn’t be much certainty left on [~X]. So it’s more a failure of communication than an error of reasoning, though it becomes an error of reasoning when a proponent of [f] imports [f]’s conclusions into their own other understandings that are not within the purview of [f], or into any real-world actions.
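To see how far the certainty deflates, here is a toy calculation with invented numbers (nothing above fixes these values): suppose [f] internally assigns credence $0.95$ to [~X], but, since [X] is out of scope for [f], you give only $0.2$ probability to [f] being a reliable model on questions like [X], falling back to an agnostic $0.5$ otherwise. Then

$$
P(\neg X) = P(M_f)\,P(\neg X \mid M_f) + P(\neg M_f)\,P(\neg X \mid \neg M_f) = 0.2 \cdot 0.95 + 0.8 \cdot 0.5 = 0.59,
$$

where $M_f$ is the hypothetical event that [f] models questions like [X] well. The confident internal $0.95$ collapses to barely better than a coin flip once model uncertainty is priced in.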
Of course, being locked into a framing that lacks lightness is no good; it should be possible for any important [X] to push either for changes in [f] or for the development of some other framing [g] that does take [X] seriously. That doesn’t necessarily need to damage the power of [f] or change [f], as specialized framings are very useful, but people shouldn’t be locked into specialized framings, losing the ability to look at anything those framings wouldn’t consider. For anything at all, including claims that are obviously false, there is a framing waiting to be developed that does consider it relevant/salient.