Let me note, as a counterpoint to the above comment, that I agree wholeheartedly with the post’s thesis (as expressed in the last two paragraphs). I just think that the film does not make for a very good illustration of the point. The Feynman anecdote (even if we treat it as semi-fictional itself) is a much better example, because it exhibits the key qualities of a situation where the argument applies most forcefully:
1. There is a clear objective;
2. The objective deals with physical reality, not social reality, so maneuvering in social reality can only hinder it, not help;
3. Everyone involved shares the formal goal of achieving the objective.
In such a case, deploying the objections alluded to in the OP’s second-to-last paragraph is simply a mistake (or else deliberate sabotage, perhaps to further one’s own social aims, to the detriment of the common goal). We might perhaps find plausible justifications (or even good reasons), in everyday life, for considering people’s feelings about true claims, or for behaving in a way that signals recognition of social status, or what have you; but in a case where we’re supposed to be building a working nuclear weapon, or (say) solving AI alignment, it’s radically inappropriate—indeed, quite possibly collectively-suicidal—to carry on such obfuscations.