Why do all out attacks actually work?
A surprising number of rationalists agree that people can often do what otherwise seems impossible if they try really, really hard. At the outset it's not something I would have expected a community of rationalists to believe, since on the surface it looks like pure magical thinking, but I think it's true nevertheless, because I see a lot of evidence for it. Much startup advice is a repetition of this point: if you want to succeed you can't give up, you have to look for solutions where you initially think there are none, optimists do better than pessimists, and so on. And anecdotally, I've found that I have a lot of trouble accurately assessing the limits of my own abilities. I'm usually wrong, and not just wrong in the sense of being generally inaccurate, but wrong in a specific direction: I consistently underestimate what I will accomplish when I invoke my inner anime character.
That this is true of most people should be pretty jarring. It's not obvious why humans should have a bias toward underestimating themselves in this way. On the other hand, you could say the entire phenomenon is just confirmation and sampling bias. After all, I only ever find confirmation in one direction: when I think I can't do something and then discover I can. Most startups do fail, and the ones that succeed are subject to survivorship bias. Intuitively, however, this feels like a more common experience than it should be, or at least my errors in this vein are conspicuous examples of faulty reasoning. It's not that I can't think of things I can't do, per se. It would be difficult for me to take a sudden tumble into the sun and survive with no prep time. It's that when I am wrong, the errors seem particularly egregious. I don't know that everyone has this experience, but if mine is typical, I think I have an explanation for it.
One insight I have is that when doing the impossible there aren't really just two modes, all-out attack and normal attempts. What I've found is that there's actually a continuum of impossibility goggles. When I was in middle school and my parents wanted me to get to baseball practice, I would, in full honesty, propose the most trivial obstacles as evidence that it wasn't going to happen. It was raining, for instance. Our coach had cancelled and the team wasn't going to be there. I don't think there was a single circumstance, really, in which I could not have gone out to the baseball field and just hit balls off the tee by myself, or balls pitched by my dad. I just didn't want to do it. The thing is, sometimes in the moment I really did think these things made it "impossible". If you had prompted me then and asked whether they made it physically impossible, I might have said no, but that didn't stop me from thinking in the moment that they made it entirely impractical.
The setting of these exceptionally bad assessments suggests a motivation: I had a third party, my parents, who needed to be convinced that they shouldn't send me out to the park to hit baseballs. A common theme of Robin Hanson's The Elephant in the Brain is that we sometimes lie to ourselves to make it easier to lie to others, and I think that goes a long way toward explaining why this occurs. When someone asks you to do something you don't want to do, saying "I might if my daughter's life depended on it, but I won't do it as things stand" is a much more frustrating objection than "I can't do it because of x, y, and z obstacles." Socially, it's more agreeable to say "I can't get this done for you because of these bureaucratic checks" than "I could break some laws and call a dozen different offices until it happened, but that's a lot of effort for a random favor." Before we choose to solve a problem, we scan the solution space to see whether solving it is plausible. A lack of motivation makes us scan a narrower space and give up quicker than we otherwise might, so that we can honestly claim to others that we can't see how it could be done. Obviously we do sometimes consciously lie to other people about these things, but sometimes we do not, and just give up.
Even when there's no third party that has to be convinced, or that has a personal interest in your completing the task you think is impossible, not completing it may reflect badly on you. Which sounds better: my startup could have succeeded but I didn't have Elon Musk-tier drive, or my startup would have succeeded except for those meddling bureaucrats? If you attempt a public undertaking and lose, explaining that it was impossible saves some face. It keeps your image as a respectable, formidable person who just happened to attempt something no one else could have done in your shoes, or under your resource constraints. Explaining that maximum effort might have done the job is like explaining, mid-argument, that you're not sure the other person is wrong, you're just almost positive. The content of your words is true, but what you signal is different. Non-rationalists will take your "almost positive" as a sign of confusion or internal doubt rather than pedantry, just as they will take your "almost maximum effort" as a sign that you couldn't really muster the effort. And this goes for things you haven't attempted as well as things you have; people judge you on your ability to accomplish things you haven't actually tried. If you convincingly mark off a large part of the possibility space as "impossible", then you can explain your failure to move forward as a product of the problem rather than a shortage of willpower.
In the worst cases, this can be an unconscious effort to convince other people not to try. The only thing worse than publicly failing to solve a technical problem at your engineering company is failing and then having someone else, or another team, come in and clean house. I wonder if the oft-cited quote about grey-haired scientists declaring things possible or impossible has something to do with this. If you are an eminent scientist who has attacked a problem, or worked in a closely related field, for your entire career, perhaps there are emotional reasons to declare at the end of it that the target is unassailable. If you, the prestigious Nobelist, couldn't do it, who is any future researcher or engineer to say they can?