I think there are subtypes of infohazard, and this has been known for quite a long time. Bostrom’s paper (https://nickbostrom.com/information-hazards.pdf) is only 12 years old, I guess, but that seems like forever.
There are a LOT of infohazards that are hazardous even if untrue. There’s a ton of harm in deliberate misinformation, and some pain caused by possibilities that are unpleasant to consider, even when it’s acknowledged they may not occur. Roko’s Basilisk (https://www.lesswrong.com/tag/rokos-basilisk) is an example from our own group.
edit: I further think that un-anchored requests on LW for unstated targets to change their word choices are unlikely to have much impact. It may be that you’re putting this here so you can reference it when you call out uses that seem confusing, in which case I look forward to seeing the reaction.
I read this as an experimental proposal for improvement, not an actively confirmed request for change, FWIW.