I’ve been thinking about what I’d call memetic black holes: regions of idea-space that have gathered enough mass that they will suck in anything adjacent to them, distorting judgement for believers and skeptics alike.
The UFO topic is, I think, one such memetic black hole. The idea of aliens is so deeply ingrained in our collective psyche that it is very hard to resist the temptation to attach to it, say, any bizarre aerial observation. Crucially, I think this works both for those who definitely do and those who definitely don’t believe that UFO sightings have actually been alien-related.
For those who do believe, it is irresistible to treat anything in the vicinity of the memetic black hole as evidence for the concept, almost via Bayes’ rule coupled with an inability to keep track of the myriad low-likelihood alternative explanations. This then adds more mass to the black hole and makes the next observation more likely to be attributed to this hypothesis.
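As a toy sketch of that dynamic (made-up numbers, just to show the shape of it): suppose an odd sighting is exactly as likely under “aliens” as under each of ten mundane explanations, so it is actually zero evidence. An updater who keeps all ten alternatives in the denominator never moves off the prior; an updater who forgets most of them sees the same sighting as strong evidence, and each inflated posterior becomes the prior for the next sighting.

```python
# Toy model with invented numbers: the sighting is equally likely under every
# hypothesis, alien or mundane, so a full Bayes update should not move at all.
def bayes_update(prior, lik_h, alternatives):
    """P(H | obs) from P(H), P(obs | H), and the (P(obs | alt), P(alt)) pairs you remember."""
    numerator = lik_h * prior
    return numerator / (numerator + sum(lik * p for lik, p in alternatives))

LIK = 0.3      # likelihood of the sighting under each hypothesis, alien or mundane
N_ALTS = 10    # mundane explanations that actually exist

careful = sloppy = 0.01  # shared starting prior on the black-hole hypothesis
for sighting in range(1, 6):
    # The careful updater keeps all ten mundane explanations in the denominator.
    careful = bayes_update(careful, LIK, [(LIK, (1 - careful) / N_ALTS)] * N_ALTS)
    # The sloppy updater only remembers one of them; the rest silently drop out.
    sloppy = bayes_update(sloppy, LIK, [(LIK, (1 - sloppy) / N_ALTS)] * 1)
    print(f"sighting {sighting}: careful={careful:.3f}  sloppy={sloppy:.3f}")
# careful stays at 0.010 throughout; sloppy climbs past 0.99 within a few sightings.
```

The whole effect lives in the denominator: drop the alternatives you can’t keep track of, and the same observation starts to look like evidence.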
Conversely, for those who do not believe, it’s irresistible to discard anything that flies too close to the black hole, as it will get pattern-matched against other false positives that have been previously debunked, coupled again with limitations of memory and processing.
This phenomenon obviously leads to errors in judgement, specifically path-dependency in how we synthesize information, not to mention a vulnerability to the adversarial planting of memetic traps, i.e. psyops.
This is an example where framings are useful. An observation can be understood under multiple framings, some of which should intentionally exclude the compelling narratives (framings are not just hypotheses, but contexts where different considerations and inferences are taken as salient). This way, even the observations at risk of being rounded up to a popular narrative can contribute to developing alternative models, which occasionally grow up.
So even if there is a distortionary effect, it doesn’t necessarily need to be resisted, if you additionally entertain other worldviews unaffected by this effect that would also process the same arguments/observations in a different way.
Thank you! Do you have a concrete example to help me better understand what you mean? Presumably the salience and methods one instinctively chooses are those one believes are more informative, based on one’s cumulative experience and reasoning. Isn’t moving away from these also distortionary?
The point is to develop models within multiple framings at the same time, for any given observation or argument (which in practice means easily spinning up new framings and models that are very poorly developed initially). Through the ITT analogy, you might ask how various people would understand the topics surrounding some observation/argument, which updates they would make, and try to make all of those updates yourself, filing them under those different framings, within the models they govern.
the salience and methods one instinctively chooses are those one believes are more informative
So not just the ways you would instinctively choose for thinking about this yourself (which should not be abandoned), but also the ways you normally wouldn’t think about it, including ways you believe you shouldn’t use. If you are not captured within such frames or models, and easily reassess their sanity as they develop or come into contact with particular situations, that shouldn’t be dangerous, and it should keep presenting better-developed options that break you out of the more familiar framings that end up being misguided.
The reason to develop unreasonable frames and models is that it takes time for them to grow into something that can be fairly assessed (or to come into contact with a situation where they help); assessing them prematurely can fail to reveal their potential utility. It’s a bit like reading a textbook, where you don’t necessarily have a specific reason to expect something to end up useful (or even correct), but you won’t be able to see for yourself whether it’s useful/correct unless you sufficiently study it first.
Conversely, for those who do not believe, it’s irresistible to discard anything that flies too close to the black hole, as it will get pattern-matched against other false positives that have been previously debunked, coupled again with limitations of memory and processing.
Like the boy who cried wolf.
My only worry about this framing is that it assumes the core premise of the black hole has a better-than-chance likelihood of being the explanation. Sometimes that is the case, sure; any sports fan is probably tired of clickbait headlines about ‘rumours’ of player trades, or “huge announcements” that turn out to be brand extensions like a tequila. As such, they may just discard anything that suggests it. Every once in a while, though, “wow, Lewis Hamilton actually did sign with Ferrari”. (And even then, this is conflating a class with specific instances: the baseline chance of a successful player changing teams might be very low, but the hypothesis that this player will move to that team because of XYZ might in isolation be a convincing premise. “UFOs” is a class too, so I see your concern.)
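To put toy numbers on the class-vs-instance point (again invented, only to show the shape): a low base rate for the class can coexist with a high posterior for one specific, well-sourced instance, because the specific evidence has a much better likelihood ratio than generic clickbait.

```python
# Invented numbers: 5% base rate that a given top driver switches teams this year.
def bayes_update(prior, lik_if_true, lik_if_false):
    """P(move | report) from the base rate and the report's likelihood either way."""
    num = lik_if_true * prior
    return num / (num + lik_if_false * (1 - prior))

base_rate = 0.05
# Vague clickbait shows up whether or not a move is happening: barely any update.
print(bayes_update(base_rate, lik_if_true=0.6, lik_if_false=0.5))   # ~0.06
# A detailed, well-sourced report is rare unless the move is real: a large update.
print(bayes_update(base_rate, lik_if_true=0.6, lik_if_false=0.02))  # ~0.61
```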
In my mind, what gives the black hole its mass is just how pervasive a meme it is. That likely has some correlation with truth, but far from 1.