Infohazards vs Fork Hazards

I think actual infohazardous information is fairly rare. Far more common is a fork: you have some idea or statement, you don’t know whether it’s true or false (typically leaning false), and you know that either it’s false or it’s infohazardous. Examples include unvalidated insights about how to build dangerous technologies, and most acausal trade/​acausal blackmail scenarios. Phrased slightly differently: “infohazardous if true”.

If something is wrong/​false, it’s at least mildly bad to spread/​talk about it. (With some exceptions; wrong ideas can sometimes inspire better ones, maybe you want fake nuclear weapon designs to trip up would-be designers, etc). And if something is infohazardous, it’s bad to spread/​talk about it, for an entirely different reason. Taken together, these form a disjunctive argument for not spreading the information.

I think this trips people up when they see how others relate to things that are infohazardous-if-true. When something is infohazardous-if-true (but probably false), people bias towards treating it as actually-infohazardous; after all, if it’s false, there’s not much upside in spreading bullshit. Other people, seeing this, get confused: they either think it’s actually infohazardous, or think it isn’t but that the first person believes it is (and therefore think the first person is foolish).

I think this is pretty easily fixed with a slight terminology tweak: simply call things “infohazardous if true” rather than “infohazardous” (adjective form), and call them “fork hazards” rather than “infohazards” (noun form). This clarifies that you only believe the conditional, not the underlying statement.