“You can’t possibly succeed without [My Pet Issue]”

There’s a particular conversational move that I’ve noticed people making over the past couple years. I’ve also noticed myself making it. The move goes:

“You can’t possibly succeed without X”, where X is whatever principle the person is arguing for.

(Where “succeed” means “have a functioning rationality community / have a functioning organization / solve friendly AI / etc.”)

This is not always false. But, I am pretty suspicious of this move.

(I’ve seen it from people with a variety of worldviews. This is not a dig at any one particular faction in local politics. And again, I do this myself).

When I do the move, my current introspective TAP (trigger-action plan) goes something like: “Hmm. Okay, is this actually true? Is it impossible to succeed without my pet-issue-of-the-day? Upon reflection, obviously not. I legit think it’s harder. There’s a reason I started caring about my pet issue in the first place. But ‘impossible’ is a word that was clearly generated by my political rationalization mind. How much harder is it, exactly? Why do I believe that?”

In general, there are incentives (and cognitive biases) to exaggerate the importance of your plans. I think this is partly for political reasons, and partly for motivational reasons – it’s hard to get excited enough about your own plans if you don’t believe they’ll have outsized effects. (A smaller version of this, common on my web development team, is someone saying “if we just implemented Feature X we’d get a 20% improvement on Metric Y”, when the actual result was, like, a 2% improvement. It was worth it, but the 20% figure was clearly ridiculous.)

“It’s impossible” is an easier yellow flag to notice than “my numbers are bigger than what other people think are reasonable”. But in both cases, I think it’s a useful thing to train yourself to notice, and I think “try to build an explicit quantitative model” is a good immune response. Sometimes the thing is actually impossible, and your model checks out. But I’m willing to bet that if you’re bringing this up in a social context where you think an abstract principle is at stake, the claim is probably wrong.
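To make “build an explicit quantitative model” concrete, here is a minimal sketch of what that might look like. The numbers and variable names are invented for illustration, not drawn from any real project; the point is just that writing the claim down as numbers makes “impossible” hard to sustain:

```python
# A minimal sketch of an "explicit quantitative model" for the claim
# "we can't succeed without X". All numbers are made up for illustration;
# the exercise is forcing the claim into comparable, explicit terms.

# Gut-estimate a baseline chance of success without X,
# and how much X multiplies that chance (per your own model of why X matters).
p_success_without_x = 0.20        # hypothetical baseline; note it isn't zero
success_multiplier_with_x = 1.5   # hypothetical boost from adopting X

p_success_with_x = min(1.0, p_success_without_x * success_multiplier_with_x)

print(f"P(success | no X) = {p_success_without_x:.0%}")
print(f"P(success | X)    = {p_success_with_x:.0%}")

# If the "without X" estimate comes out anywhere above zero, the honest claim
# is "X makes success meaningfully more likely", not "success is impossible
# without X".
```

Even a toy model like this moves the argument from “impossible without X” to “how much does X actually buy us, and why do I believe that number?” – which is the question worth arguing about.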