Another common werewolf move is to take advantage of strong norms like epistemic honesty, and use them to drive wedges in a community or push their agenda, while knowing they can’t be called out because doing so would be akin to attacking the community’s norms.
I’ve seen the meme elsewhere in the rationality community that strong and rigid epistemic norms are a good sociopath repellent, and it’s ALMOST right. The truth is that competent sociopaths (in the Venkat Rao sense) are actually great at using rigid norms for their own ends, and are great at using the truth for their own ends as well. The reason it might work well in the rationality community (besides the obvious fact that sociopaths are even better at using lies to their own ends than the truth) is that strong epistemics are very close to what we’re actually fighting for—and remembering and always orienting towards the mission is ACTUALLY an effective first line of defense against sociopaths (necessary but not sufficient IMO).
99 times out of 100, the correct way to remember what we’re fighting for is to push for stronger epistemics above other considerations. I knew that when I made the original post, and I made it knowing I would get pushback for attacking a core value of the community.
However, 1 time out of 100 the correct way to remember what you’re fighting for is to realize that you have to sacrifice a sacred value for the greater good. And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value, it’s important to make clear (mostly to other people in the community) that sacrificing that value is an option.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
I agree it’s important to realize that these things are fundamentally different.
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
A better norm for mathematics might be to NOT publish proofs that have obvious negative consequences, like enabling weapons of mass destruction, and to actively disincentivize people who publish that sort of research.
In other words, a norm might be to be epistemically pure by default, UNLESS the local instrumental considerations outweigh the cost to the epistemic climate. This can be rounded down to “have norms about epistemics and break them sometimes,” but only if, when someone points at edge cases where the norms are actively harmful, they are met with the reply that sometimes breaking those norms is perfectly OK.
I.e., if someone is using the norms of the community as a weapon, it’s important to point out that the norms are a means to an end, and that the community won’t blindly allow itself to be taken advantage of.
It might just make more sense to give this one up to word inflation and come up with new words. I’ll happily use the denotative vs. enactive language to point to this thing in the future, but I’ll probably have to put a footnote that says something like “(what most people in the community refer to as decoupling vs. contextualizing).”
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It might be that we just have different definitions of absurd and you’re not missing anything, or it could be that you’re taking an extreme version of what I’m saying.
To wit, my stance is that to ignore the consequences of what you say is just obviously wrong. Even if you hold truth as a very high value, you would have to value it insanely more than any other value to never encounter a situation where ignoring the difference you could make (by not saying something, lying, or being careful about how to phrase things) compromises other things you value.
Now obviously, you also have to consider the effect this type of thinking/communication has on discourse and the public ability to seek the truth—and once you’ve done that you’re ALREADY thinking about the consequences of what you say and what you allow others to say, and the task at that point is to simply weigh them against each other.
Your points 1, 2, 3 have nothing to do with the epistemic problem of decoupling vs. contextualizing.
This is probably because I don’t know what the epistemic problem is. I only know about the linked post, which defines things like this:
Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation—free of any context or potential implications. Attempts to raise these issues are often seen as sloppy thinking or attempts to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or even an intentional evasion.
… To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser’s insistence that this isn’t possible looks like naked bias and an inability to think straight.
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
There might be another use of decoupling and contextualizing that applies to an epistemic problem, but if so it’s not defined in the canonical article on the site.
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
I’m not sure what your hobby horse is, but I do take exception to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things. The problem with this is that if you actually care about the world, you can’t take arguments in isolation, but have to consider the context in which they are made.
1. It can be perfectly OK for the environment if someone brings up a topic once, but it can make people less likely to want to visit the forum if they bring it up all the time and try to twist other people’s posts towards a discussion of their thing. It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
2. It can be perfectly OK to kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior.
3. It can be perfectly OK to downvote posts that are well-reasoned, on topic, and not misleading, because you’re worried about the incentives of those posts being highly upvoted.
All of these things are tradeoffs with decoupled conversation obviously, which has its own benefits. The website has to decide what values it stands for and will fight for, vs. what it will be flexible on depending on context. What I don’t think is OK is just to ignore context and assume that decoupling is always unambiguously the right call.
That makes sense. I think I was tripped up by your use of the words “is” and “bad”, both of which are ambiguous. Things that might have helped me get your meaning are swapping “is” for “feels”, swapping “bad” for “aversive” or “unpleasant”, and adding the qualifier “for me” or “for many people”.
Of course, if you were under the impression that this is a near universal aversion, it makes less sense to make any of those changes. I suspect that that assumption also underlies the miscommunication of why people didn’t address the “change is aversive” objection in the original post as well—they typical-mind fallacied that change was neutral or good, and you did the reverse.
The old fashioned way I suppose, going by the reputation of the creator and the description they provide. That said, I think readily available reviews often make it worth going with a medium that has them.
I don’t know how a personal value judgement fits in with your talk about a “burden of justification.” Why should someone feel the need to justify against your personal value judgement that change is bad? They simply have a different value judgement than you.
Indeed not, as it is not an assumption at all!
What is it then? You beg the question again by assuming it while trying to show how it’s not an assumption.
not only anything that got worse, but also the inherent badness of changing something! You “start with a negative score”,
This isn’t an argument, it’s just restating the premise. To see this, just change all instances of “change is bad” to “change is good” in your argument, and notice how the entire thing is still coherent. You start with a positive score for the change, because of the inherent goodness of change, and so on...
I don’t have a specific answer, but I do have two keywords that will get you most of the research:
Involuntary Musical Imagery
In addition, I think that the Baby Shark video underwent viral/marketing dynamics apart from its inherent catchiness, so any explanation will have to take that into account as well.
Retweets not only answer the question “would you recommend this to a friend”, but are also guaranteed to yield a truthful answer, because retweeting is an act of recommendation to friends, for which the user is then held accountable.
Note that even in this relatively straightforward case, the meaning of retweets can become conflated, from “I would recommend this to a friend” to “I agree with this”. I sometimes have to be careful about what I retweet because of this confusion, and I hesitate to retweet things that I would otherwise recommend people read, because I don’t want people thinking I agree.
Change is bad.
I can certainly see a few reasons why one could have this assumption, but assuming it without arguing it in this case seems to be begging the question.
Connection Theory, from Leverage Research.
If Eliezer’s goals or beliefs are learned, then it applies. Anything that is learned can be unlearned with memory reconsolidation, although it seems to be particularly effective with emotional learning. An interesting open question is “are humans born with internal conflicts, or do they only result from subsequent learnings?” After playing around with CT Charting and Core Transformation with many clients, I tend to think the latter, but if the former is true then memory reconsolidation won’t help for those innate conflicts.
In Unlocking the Emotional Brain, Bruce Ecker argues that the same psychological process is involved in all of the processes you mentioned above—memory reconsolidation (which is also the same process that electroconvulsive therapy is accidentally triggering).
According to Ecker, there are 3 steps needed to trigger memory reconsolidation:
1. Reactivate. Re-trigger/re-evoke the target knowledge by presenting salient cues or contexts from the original learning.
2. Mismatch/unlock. Concurrent with reactivation, create an experience that is significantly at variance with the target learning’s model and expectations of how the world functions. This step unlocks synapses and renders memory circuits labile, i.e., susceptible to being updated by new learning.
3. Erase or revise via new learning. During a window of about five hours before synapses have relocked, create a new learning experience that contradicts (for erasing) or supplements (for revising) the labile target knowledge.
There are some problems with the theory that memory reconsolidation is what’s going on in experiential therapies like Focusing, IFS, and exposure therapy, chief among them (IMO) that in animal studies, reconsolidation needs to happen within hours of the original learning, whereas in these therapies it can happen decades later.
However, I’ve found the framework incredibly useful for figuring out the essential and non-essential parts of the therapies mentioned above, for troubleshooting when a shift isn’t happening with coaching clients, and for creating novel techniques and therapies that apply the above 3 steps in the most straightforward way possible.
There’s no such thing as “convincing yourself” if you’re an agent, due to conservation of expected evidence.
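For reference, a minimal sketch of conservation of expected evidence for an ideal Bayesian agent (the notation is mine, added for illustration): for a hypothesis $H$ and an observation $E$ with possible outcomes $e$,

$$\mathbb{E}\big[P(H \mid E)\big] = \sum_{e} P(e)\, P(H \mid e) = \sum_{e} P(H \cap e) = P(H).$$

Before you look, your expected posterior equals your prior, so an ideal agent cannot plan in advance to end up more confident; there is nothing to “convince yourself” of.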
Is your claim that the actual way the brain works is close enough to Bayesian updating that this is true?
There is no royal road to knowledge. One has to engage with a book in order to retain not just the conclusions of the book, but also the reasoning that led to the conclusions.
But what if there were? Certainly with hard work and deep reading, you can learn a good amount from books. However, the central point of the piece is that this is not the optimal way to learn. What if, with other mediums, you could have a lot of this work done for you, learning more material in less time?
Yes, one of the frustrating things is getting criticism that just feels like “this is just not the conversation I want to be having.” I’m trying to discuss how this particular shade of green affects the aesthetics of this particular object, but you’re trying to talk to me about how green doesn’t actually exist, and blue is the only real color. It’s understandable, but it’s just frustrating.