I don’t mind self-help-book-level advice if it pointedly helps me improve my mental hygiene. This did.
CuriousMeta
Which is perhaps most efficiently achieved by killing the wisher and returning an arbitrary inanimate object.
People are just really bad at seeing the merits of things they aren’t already in favour of.
I’d consider that an important factor in whether something ends up being an antimeme in a given culture.
In my understanding of the term, the most straightforward definition of antimemeticity is “a very low cultural infection rate”.
(And implicit in the discussion so far seems to have been a certain expected usefulness of mentioned examples. Maybe we should focus the conversation on things with high expected value and low cultural infection rate / overall prevalence in western culture.)
Example: dividing the cake according to NEEDS versus CONTRIBUTION (progressive taxation, capitalism vs. socialism, etc.)
Seconded (after working with this concept-handle for a day). This seems to be the exact key for (dis)solving the way my brain executes self-deception (clinging, attachment, addiction, etc.).
(I’m noticing that in writing this, my brain is fabricating an option that has all the self-work results I envision, without any work required)
Powerful improv metaphor. Powerful post.
Ah, but if we’re immersed in a culture where status and belonging are tied to changing our minds, and we can signal that we’re open to updating our beliefs, then we’re good… as long as we know Goodhart’s Demon isn’t lurking in the shadows of our minds here. But surely it’s okay, right? After all, we’re smart and we know Bayesian math, and we care about truth! What could possibly go wrong?
The trickiness of roles that involve disidentification with specific roles, or with the concept of roles in general, must not be underestimated. That’s especially true for roles that seem opposed to the prevalent social structure.
I’m also reminded of Transactional Analysis. In particular, Games and Life Scripts.
Unlike good confabulations, antimemetic confabulations will make you increasingly uncomfortable. You might even get angry. The distractions feel like being in the brain of a beginner meditator or distractible writer. They make you want to look away.
You can recognize this pattern as an antimemetic signature. People love explaining things. If you feel uncomfortable showing off your knowledge, it’s probably because you have something to hide.
That seems useful. Cognitive Dissonance as a cognitive Code Smell.
I’d love to read more on the topic.
A longer list of what LW folk consider to be antimemes would be pretty interesting, too. I like to think I gained some insight from the mention of Lisp and entrepreneurship.
Exciting stuff. This feels like a big puzzle piece I’d been missing. Have you written more about this, somewhere?
~vague gesturing at things I find interesting:
- How do different people (different neurotypes? different childhoods? personality types?) differ in the realities they want to share?
- How do shared realities relate to phenomena like extraversion, charisma, autism?
- What’s the significance of creating shared realities by experiencing things together?
Also, do you use other neglected people-models that are similarly high-yield? Vague gesturing appreciated.
My impression is that in-group status is always, inherently zero-sum.
While the influence/worth distinction may be a relevant one, I think it’d be relative worth that satisfies status-as-social-need.
Praise certainly meets other emotional needs, though, and it may well be rational to have more of it.
To prove it wrong, you’d need a counterexample: a meme that is complex and difficult to understand, yet still spreads.
I’d propose “most stuff taught at university” as examples. Even outside of teaching institutions, complex ideas commonly spread memetically if the incentives for acquiring them are sufficiently visible from the outset. Think evolutionary theory, object-oriented programming, or quantum physics.
I find that [letting go of the (im)possible worlds where I’m not trapped] helps reframe/dissolve the feeling of trappedness.
However, that kind of letting go often feels like paying a large price. E.g. in case of sensory overload it can feel like giving up on having any sense of control over reality/sensory-input whatsoever.
Does that maybe get at what you were asking?
It all does! Again, thanks for sharing.
Problem: Abyss-staring is aversive, for some (much) more than for others.
In my case, awareness hasn’t removed that roadblock. Psychedelics have, to some degree, but I find it hard to aim them well. MDMA, maybe?
Both, I’d think.
Also this entire post by Duncan Sabien
(@ Tech Executives, Policymakers & Researchers)
Back in February 2020, the vast majority of people didn’t see the global event of Covid coming, even though all the signs were there. All it took was a fresh look at the evidence and some honest extrapolation.
Looking at recent AI progress, it seems very possible that we’re in the “February of 2020” of AI.
(original argument by Duncan Sabien, rephrased)
(@ Tech Executives, Policymakers & Researchers)
If you genuinely believe that the world is ending in 20 years, but are not visibly affected by this or considering extreme actions, people may be less likely to believe that you believe what you say you do.
IMO, that’s not the bottleneck. The bottleneck is people thinking you’re insane, which composure mitigates.
“Every paper published is a shot fired in a war”
Epistemic virtue isn’t a good strategy in that war, I suspect. Voicing your true best guesses is disincentivized unless you can prove them.
Yes, that effect on most people is kinda in the nature of antimemes.
In a LW context I wouldn’t paint the picture too black though. The average poster’s epistemic standards are high. High enough to warrant a mindful reader’s second look at the antimemes they’re proposing.
The corresponding discussions would certainly not be frictionless. That doesn’t mean they couldn’t provide some high-value insight to a few people, though.
To me this looks like the stuff LW is all about. I mean, aren’t we looking at low-hanging fruit hidden from vantage points of naive epistemology?