Trope Dodging

Epistemic status: 2.5 hours of hammering out an idea that’s been in my head for a while. Not too novel, but there are a few useful points at the end. Rehashes some ideas from A Human’s Guide to Words.

I think that it is useful to model people as having filters that exist somewhere in their decision-making process, filters which ask the question, “Does what I’m about to do match a behaviour trope that I’ve blacklisted?”

If the answer is yes, then the potential action gets vetoed and the brain goes back to the drawing board. If the answer is no, then the potential action gets bumped further up the decision-making chain.

The effects of some filters are more visible than others. What many people would call one’s “social filter” is something that normally only vetoes doing or saying something after the potential action has already been brought to your conscious attention. You think about the accurate remark you could make about your colleague’s outfit, but then don’t say it because your filter deems it rude.

Ugh fields look a lot like filters that get stronger and stronger with time and operate in the no-man’s-land between your opaque subconscious processes and your visible conscious ones.

If you want to be more specific about what it means to “match a trope”, you can replace trope with similarity cluster, and say that a potential action matches the trope if it falls within a certain distance of the epicenter of the similarity cluster that the trope references.
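
If it helps to make that concrete, here’s a toy sketch in Python. The feature space, the class names, and the thresholds are all inventions for illustration, not claims about how a brain actually represents anything: a trope is an epicenter plus a radius, and the filter vetoes any candidate action that lands inside that radius.

```python
import math


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class Trope:
    """A blacklisted trope: a prototypical example (the epicenter) plus a radius (the filter's scope)."""

    def __init__(self, name, epicenter, radius):
        self.name = name
        self.epicenter = epicenter
        self.radius = radius

    def matches(self, action_features):
        """An action 'matches' the trope if it lands inside the similarity cluster."""
        return distance(action_features, self.epicenter) <= self.radius


def filter_action(action_features, blacklist):
    """Return True if the action survives every blacklisted trope, False if it gets vetoed."""
    for trope in blacklist:
        if trope.matches(action_features):
            return False  # vetoed; back to the drawing board
    return True  # bumped further up the decision-making chain
```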

For my personal use, I’ve called this sort of filtering Trope Dodging, but if another name seems better, I’m happy to concede.

Here’s what this could look like in day-to-day life:

Sasha thinks that whining and complaining are just terrible ideas. She grew up with siblings who always complained and never tried to make their lives better. This left a strong impression on Sasha, and now she has a filter that keeps her from doing things that are too similar to the trope of “being whiny”. She doesn’t even like to discuss with her classmates the difficulty of the various university courses they are enrolled in, because her filter registers that as too close to “being whiny”.

You, being a lovable, well-read rationalist, can probably guess how these sorts of filters are less than optimal. Even if you have determined that it really is in your best interest to not do anything that would be a prototypical example of a blacklisted trope, reality is messy enough that you are always going to be filtering out actions that would actually be a great idea, but just happen to fall within the scope of your filter.

Hmmmm, actually, if filters allow you to cut out big swathes of “definitely bad ideas”, might the time saved in reaching conclusions faster outweigh the loss of superior alternatives?

That’s a totally reasonable hypothesis, and it works out fine if you have a filter that only starts to get the wrong answer at the very edges of its blacklisted trope. You get problems when you have filters that are grossly miscalibrated and filter out a non-trivial number of useful options.

When you have filters with that broad a scope, it’s easy to catch something which vaguely matches the aesthetic of a trope while still being devoid of the “core essence” that caused you to define the trope in the first place. Discussing the difficulty of your classes sort of fits the general aura of “whining and complaining”, but lacks the essential traits of whininess, i.e. a sense of entitlement and a defeatist attitude. There’s nothing inherently wrong with making someone aware that they have wronged you, yet it sort of vaguely smells like “getting angry at people”. I know a lot of people who don’t take time for things like sleep, exercise, and nutrition, because they have such a broad filter against “wasting time”.

In all of these scenarios someone loses out on doing something that could genuinely help them, all because their filters are too broad.
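
In the toy sketch from earlier, a too-broad filter is just a radius set too large. The numbers below are made up, but they show how widening the scope vetoes a harmless action that merely shares a surface resemblance with the trope.

```python
# Continuing the toy sketch: Sasha's "being whiny" trope, with made-up feature axes
# (entitlement, defeatism, sounds-like-a-complaint).
whiny = Trope("being whiny", epicenter=(1.0, 1.0, 1.0), radius=1.6)

# Discussing course difficulty sounds complaint-ish, but has no entitlement or defeatism.
discuss_difficulty = (0.1, 0.1, 0.9)

print(filter_action(discuss_difficulty, [whiny]))  # False: vetoed despite being harmless

whiny.radius = 0.8  # narrow the scope closer to the trope's core essence
print(filter_action(discuss_difficulty, [whiny]))  # True: now it passes the filter
```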

***

So far, you might not be very impressed. Most of what I’ve said could be summed up with, “When people use generalizations to guide their actions, they will often make the wrong decision.” However, I think there are some specific claims I’m now able to make which would have been much harder to make clear without the build-up.

Claim 1: By default, one’s filters are likely to be poorly calibrated, and the scope of a filter is proportional to how strongly you feel about the prototypical example of the filter’s trope.

Obviously, people are capable of nuanced views and well calibrated filters, but I think it would be an error to assume that any filters you haven’t given particular attention to are going to be well calibrated. The initial growth of a filter looks a lot like someone over-steering away from something they decided they didn’t like. If you really don’t like the trope you are trying to avoid, I think your filter defaults to having an equally large scope.

Claim 2: A poorly calibrated filter does more damage the deeper into your decision making process it lives.

Basically, if a poorly calibrated filter is acting within your range of conscious awareness, then you have a decent chance of realizing that something is going on. Being privy to the process of filtering makes it easy to question the quality of the filter.

When you have a filter that acts outside of your conscious awareness, it feels like you just never even think of the ideas that are being filtered. For me, it is very rare for the thought, “Hey, maybe I should ask someone for help” to come to mind, even when it objectively makes sense given the situation. I’ve learned to mostly follow through when someone brings up the idea of getting help, but it’s still something I’m subconsciously filtering against, and it’s hard for the idea to even come to mind.

***

So… given all that, how should one proceed? I don’t have any particular advice on how to calibrate a filter you’ve already identified as a problem. However, combining claims one and two gives some insight into how to seek out these filters. To find poorly calibrated filters that don’t operate inside your conscious awareness, focus on the things you feel most strongly about. What are the things that are very similar to things you hate, but that aren’t actually harmful? And what are the things that are similar to things you really love, but that are actually harmful?

Prompt for discussion: Once you find deeply rooted filters with poor calibration, how should you go about fixing them?