This is indeed a non-standard (and potentially original) viewpoint. The standard view is that such things should not even be mentioned, but I think that never mentioning them leaves people vulnerable: these ideas are natural enough to get re-invented periodically even if nobody mentions them, and others sometimes mention them anyway, so we need to inoculate ourselves. I also think there is value in the task of overcoming such worries.
It also dovetails with the concern that we don’t have any good coming-of-age rituals.
Insofar as the purity of the jargon matters, I’d say “infohazard” works for the general case, and “basilisk” makes more sense to reserve for ‘really bad to even see’. (But also, “basilisk” seems more like a colloquial thing than something we should formalize anyway.)
It depends on the meme in question.
Some are relatively harmless, like The Game: it is easy to overcome, and causes minimal suffering to those who don’t overcome it.
Some respect the use-mention distinction, like those described in BLIT and the comp.basilisk FAQ, making it possible to learn and think about them without suffering their effects.
These two don’t really fit the use of “basilisk” I’ve heard (even though the second coined the term, IIRC), because they are not “ideas, knowing about which causes great harm (in expectation)”. You are saying that there are two distinct approaches:
Inoculation: the idea is close enough to omnipresent that someone is very likely to run into it (or invent it); for basilisks of this sort, focusing on prevention and treatment is probably best.
Containment: the idea is esoteric, and/or it cannot be treated; for basilisks of this sort, the only solution is to signal-boost the possibility of their existence and to insist on the virtue of silence on any instances actually found.
If we accept the term “basilisk” to include those that should be treated by inoculation (I’m leaning against this, as it de-fangs, so to speak, the term when used to refer to the other sort), then the drowning child argument is a perfect example: it can cause great emotional stress, and you’re likely to run into it if you take any philosophy class or read any EA material, but there are many ways to defuse the argument, some of which come very naturally to most people.
Obviously, even if I had an example of the latter type, I wouldn’t reference it here, but I think that such things might exist, and there’s value to keeping wary of them.
Following on the BLIT link, we can do something similar now to deep learning networks. We can even make adversarial patches in realspace—a kind of machine basilisk, if you will.
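To make the "machine basilisk" point concrete, here is a minimal sketch of the fast gradient sign method (FGSM), the simplest way to construct an adversarial input against a differentiable model. Everything here is illustrative and not from the discussion above: the "model" is a toy logistic regression with random weights rather than a real deep network, and the epsilon value is arbitrary. Real adversarial patches use the same core idea (perturb the input in the direction that increases the model's loss), just applied to image classifiers under printability constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logistic regression on a 64-dimensional input.
# (Hypothetical stand-in for a trained network; weights are random.)
w = rng.normal(size=64)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    """Numerically stable binary cross-entropy: log(1 + e^z) - y*z."""
    z = w @ x + b
    return np.logaddexp(0.0, z) - y * z

def fgsm(x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input.
    For logistic regression the input-gradient has the closed form
    (sigmoid(z) - y) * w, so no autodiff is needed here."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=64)   # a benign input
y = 1.0                   # its true label
x_adv = fgsm(x, y, eps=0.2)

# The perturbed input provably incurs a higher loss (the loss is convex
# in x and the step moves along the gradient's sign).
print(loss(x, y), loss(x_adv, y))
```

The point of the sketch is that the attack is cheap: one gradient evaluation per input. For a deep network you would get `grad_x` from autodiff instead of a closed form, and a physical patch additionally optimizes the perturbation to survive printing, lighting, and viewpoint changes.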