Do I understand correctly from your third paragraph that this is based on concrete observations of people getting confused — specifically, inferring from something's description as an infohazard a truth value the describer never intended? If so, would it be reasonable to ask in what contexts you've seen this, how common it seems to be, and what follow-on consequences you observed?
I’ve seen it happen with Roko’s Basilisk (in both directions: falsely inferring that the basilisk works as-described, and falsely inferring that the person is dumb for thinking that it works as-described). I’ve seen it happen with AGI architecture ideas (falsely inferring that someone is too credulous about AGI architecture ideas, which nearly always turn out to not work).