As I ease out into a short sabbatical, I find myself turning back to dig up the seeds of my repeated cycle of exhaustion and burnout over the last few years.
Many factors were at play, some more personal than I’m comfortable discussing here. But I have unearthed at least one failure mode that I see reflected and diffracted in others’ lives, especially in people who, like me, love to think, to make sense, to understand. So that seems worth a blog post, if only to plant a pointer to the problem, and to my own way of solving it.
I’ve christened this issue the “true goal fallacy”: the unchecked yet embodied assumption that there is a correct goal out there in the world, a true essence in need of discovery and revelation.
Case Study: Team Lead Crash
A concrete example: the inciting incident of my first burnout was my promotion to team lead.
In retrospect, my job was to own the goal-setting for the team, and then to lead it in accomplishing that goal.
But at the time, I felt instead that I was supposed to divine the correct path, the one true way for my team to realize the abstract aims of my company. I pestered my bosses with an excess of questions, interpreted their every word as scripture enciphering the true goal, got confused by the slightest discrepancy. I would choose a goal one week, then sense doubt creeping in, find many reasons why it was obviously the wrong choice, interpret my bosses’ feedback as clearly saying I’d fucked it up, and end up switching, starting the loop again. I felt depressed that no one would tell me straight what the goal was supposed to be, felt terribly, sickeningly guilty for not finding it, for not getting it right, for fucking everything up for my team and my company.[1]
Ironically, I remember one of my bosses actually answering the implicit question one day, something along the lines of “You ask questions like you believe I know the correct way forward, and I’m just not telling it to you. I don’t.” I don’t remember what I answered, but I know I didn’t get it. Because this point only started to sink in a few weeks ago, a year and a half after the fact. In the meantime, I burned out badly, asked to not be team lead anymore, and after some time came back to a reasonable, though hobbled, work life.
What I failed to get, even after being told explicitly, was that there was no correct answer. My bosses had delegated the figuring-out to me, yes, but not the figuring-out of some secret hidden truth. Really, they had left to me the work of choosing a decent goal, and enabling my team to tackle it.
It didn’t have to be the right goal, because there was no right goal. Not only were the odds of each potential objective shrouded in uncertainty, but even with more information, there would still have been many valid paths. There was no single approach, only options that traded off along various axes, like how expensive they were, how easily we could get feedback on them, how much they relied on my team’s strengths and skills…
My mistake was to interpret the team lead situation as an epistemic problem: a problem where I needed more knowledge, more understanding, better models. Whereas it was really a decision-theoretic problem: I had to take some action, not randomly, but with a dose of arbitrariness.
There was no fully correct answer; the only bad answer was not choosing, which was exactly what my guilt and my fretting amounted to, just in a way that felt more legitimate.
Of course, I might have realized that my chosen goal was not a good one after working on it for a while. But then I would have gained something: feedback from reality. With that in hand, my next arbitrary goal-setting would be slightly better calibrated, and so on.[2]
Etiology of the True Goal Fallacy
In my experience, the true goal fallacy often emerges from the conjunction of two things: a character trait, and a situation.
I’ve already mentioned the character trait in passing: wanting to understand. Or maybe more accurately, needing to understand. I want to really get things. And I enjoy learning new ideas, models, and tricks that help me understand more and better. I expect that many people reading this post will resonate with that.[3]
Unfortunately, this broadly positive trait turns into a liability in a particular class of situations: ones with an abstract, general goal.
This is the standard state of executives, team leads, and independents: nobody prepackages a nice clean goal for you. There is no clear plan, no defined metric, only a vague direction. You might want to solve problem X, or to build a successful product, or to make discoveries about topic Y. You probably have a handful of ideas of what you could do, of goals you could aim for. But it’s unlikely that any one of them so clearly outstrips and outperforms all the others that there is no doubt left, no confusion, no ambiguity.
For the thinking kind of person, it feels natural, sensible even, to double down on thinking hard about the goal. If only you build the right model and take everything into account, it will all fall into place. There will be no more confusion, no more ambiguity left. You will be sure.
Of course, that point never comes. Instead, you delay, you constantly second-guess yourself. And then the lack of progress, the lack of a final decision, the lack of a solution starts to get to you, and you become depressed, stressed, guilty. Which makes you double down on obsessing about figuring it out. Just one more day, one more thought, one more strategy session.
At this point, you’ve fallen hard for the true goal fallacy.
One reason it was so easy for me to get captured is that the solution is not satisfying to someone who likes and prizes understanding. There is no aha moment of revelation where everything clicks into place. You don’t feel smart, like you’ve caught on to one of the secrets of the universe. You just pick an option, and you stick with it long enough to get some actual feedback from reality. It doesn’t feel special or mind-blowing; it just adds up to normality.
Absorbing Ambiguity And Allowing Feedback
On the day I first realized that the true goal fallacy existed, and that I’d been tricked by it repeatedly over the years, I opened Richard Rumelt’s Good Strategy/Bad Strategy to a random page, looking for something to chew on. I got a spot-on illustration of how to tackle it:
Two years after Kennedy committed the United States to landing a person on the moon, I was working as an engineer at NASA’s Jet Propulsion Laboratory (JPL). There I learned that a good proximate objective’s feasibility does wonders for organizational energy and focus.
One of the main projects at JPL was Surveyor, an unmanned machine that would soft-land on the moon, take measurements and photographs, and, in later missions, deploy a small roving vehicle. The most vexing problem for the Surveyor design team had been that no one knew what the moon’s surface was like. Scientists had worked up three or four theories about how the moon was formed. The lunar surface might be soft, the powdery residue of eons of meteoric bombardment. It might be a nest of needle-sharp crystals. It might be a jumble of large boulders, like a glacial moraine. Would a vehicle sink into powder? Would it be speared on needlelike crystals? Would it wedge between giant boulders? Given this ambiguity about the lunar surface, engineers had a difficult time creating designs for Surveyor. It wasn’t that you couldn’t design a vehicle; it was that you couldn’t defend any one design against someone else’s story about the possible horrors of the lunar surface.
At that time, I worked for Phyllis Buwalda, who directed Future Mission Studies at JPL. Homeschooled on a ranch in Colorado, Phyllis had a tough, practical intellect that could see to the root of a problem. She was best known for her work on a model of the lunar surface. With this specification in place, JPL engineers and subcontractors were able to stop guessing and get to work.
The lunar surface Phyllis described was hard and grainy, with slopes of no more than about fifteen degrees, scattered small stones, and boulders no larger than about two feet across spaced here and there. Looking at this specification for the first time, I was amazed. “Phyllis,” I said, “this looks a lot like the Southwestern desert.”
“Yes, doesn’t it?” she said with a smile.
“But,” I complained, “you really don’t know what the moon is like. Why write a spec saying it is like the local desert?”
“This is what the smoother parts of the earth are like, so it is probably a pretty good guess as to what we’ll find on the moon if we stay away from the mountains.”
“But, you really have no idea what the surface of the moon is like! It could be powder, or jagged needles.…”
“Look,” she said, “the engineers can’t work without a specification. If it turns out to be a lot more difficult than this, we aren’t going to be spending much time on the moon anyway.”
Her lunar specification wasn’t the truth—the truth was that we didn’t know. It was a strategically chosen proximate objective—one the engineers knew how to tackle, so it helped speed the project along. And it was sensible and clever at the same time. You could write a PhD thesis on the options analysis implicit in her insight that if the lunar surface wouldn’t support a straightforward lander, we had more than a design problem—the whole U.S. program of landing a man there would be in deep trouble. Writing the history of Surveyor, Oran W. Nicks said, “The engineering model of the lunar surface actually used for Surveyor design was developed after study of all the theories and information available. Fortunately, this model was prepared by engineers who were not emotionally involved in the generation of scientific theories, and the resulting landing system requirements were remarkably accurate.”
This lunar surface specification absorbed much of the ambiguity in the situation, passing on to the designers a simpler problem. Not a problem easily solved, or to which a solution already existed, but a problem that was solvable. It would take time and effort, but we knew that we could build a machine to land on Phyllis’s moon.
(Richard Rumelt, Good Strategy/Bad Strategy, 2017)
This example perfectly illustrates why the choice, although arbitrary, is so essential in such a situation: someone needs to absorb enough of the ambiguity that people can tackle an actual concrete common goal.
You might absorb the ambiguity for your team, for your company, or just for your future self. But if no one does, it will seep into every plan, every meeting, every decision (or lack thereof).
In addition, the decision itself opens up the possibility of feedback, not only from the world, but from other people too.
In my short stint as a team lead, I couldn’t really use feedback from my bosses, because I couldn’t stick to a goal long enough to start integrating their advice and pushback. Worse, because my own goals were so fuzzy, anyone giving feedback had to partly infer what the goal even was.
That is, every piece of feedback contained two parts:
- Feedback on the goal
- Feedback on my strategy to reach that goal
So for every piece of advice I got, I had to reverse engineer what was about the goal and what was about the plan, always losing something in translation. Whereas if you set a clear goal, even just for the time being, the feedback can focus on whether your plans are the best ones available for that goal.
On Never Being Fully Cured
I know now of the true goal fallacy — it’s in my models, my maps of the world, and I can recognize it when I see it, eventually.
But that doesn’t mean I don’t fall for it. It is still perfectly natural for me to start looking for the correct answer to such decision-theoretic questions, only to notice, hours or days afterward, that I’ve gotten confused again.
My noticeable progress is that I eventually make the decision, with significantly less suffering than I used to, because I get that this is the actual way out. After some lost time, I do get out of the loop and act. On good days, I might not even start looping.
As Oliver Burkeman has eloquently written about living with all the difficulties of being a finite human:
Instead, you choose to put down that impossible burden – and to keep on putting it down when you realize, as you frequently will, that you’ve inadvertently picked it up again.
(Oliver Burkeman, Meditations For Mortals, 2024)
- ^
And that’s not even mentioning the extinction scale of the problem at hand…
- ^
Hence the useful heuristic of choosing goals that can be falsified quickly.
- ^
Notably, people who tend to act and do things, and rarely try to understand, model, or predict, don’t tend to fall for the true goal fallacy. They have other failure modes.