There’s a certain type of workplace where the only thing that matters is that you, as an individual, produce results that are valuable to the company. Netflix is famously like this (or claims to be); here’s a quote from their culture page:
Succeeding on a dream team is about being effective, not about working hard. Sustained “B” performance, despite an “A” for effort, gets a respectful severance package. Sustained “A” performance, even with a modest level of effort, gets rewarded. Of course, to be great, most of us have to put in considerable effort, but hard work and long hours is not how we measure or talk about a person’s contribution.
There are a lot of ways this can be implemented. The “piece work” version (pay the worker a fixed wage per item produced) is traditional for manufacturing jobs. But for intellectual work, it is far harder to measure results on an individual basis. Let’s take software engineering as an example. Maybe your boss wants you to build some features—those features being shipped would be the result. But there are a lot of different ways you could build the features—you could cut corners and introduce a lot of tech debt, or you could polish them for a long time. How is the boss going to evaluate your results? They would need to specify ahead of time the expected quality of the work, but specifying it precisely enough is about equivalent to doing the job—so that won’t work.
Isaac Lyman at the Stack Overflow blog has a piece about measuring developer productivity. He argues that “interdependencies and nuances of individual work are too complex to be measured by an outside observer,” concluding that the only good way to do it is to measure it at the team level—“does this team consistently produce useful software on a timescale of weeks to months?”
I suspect that, for all their talk about measuring results, Netflix is probably (at the individual level) doing something more like what Lyman suggests—using fuzzier methods to evaluate individual performance and decide whether to fire someone. (If anyone reading this works at Netflix, I would like to know whether I am right about this.)
Let’s take a look at individual-evaluation methods. How should we decide if an individual is doing their part on a software engineering team? We can’t measure something like hours worked, or lines of code committed; these measurements create bad incentives and fail in the obvious ways.
My ideal method, which I call “process orientation,” would be to look at the work-related decisions the person made. As many decisions as possible, spanning all aspects of the work life—should we refactor this component now or not? what should we name this method? is this well tested enough to ship? should I spend more time trying to figure out the best way to build this before diving in? how should I give feedback to my coworker about their bad code? should I quit the Slack app so I can focus, or leave it on in case someone needs me?
The next step is to ask, for each decision, if it was the right decision at the time they made it, given their knowledge and abilities. Hold people to the standard of (usually) making good decisions ex ante.
Note that this process does not take into account the outcome of the decisions! In some sense, this is the opposite of results orientation. In results orientation, you look at outcomes and ask “is this good or bad?” and then you reward or punish. In process orientation, you look only at their process of making decisions, and ask “does this process usually produce the best outcomes?”
My colleague Drew always says, “judge the shot while the ball is in the air.” This basketball metaphor is a great one: most shots don’t go in, so it is silly to spend much energy on whether or not a given shot made it. Basketball is all about whether the player positioned themselves appropriately to get open, used good form while shooting, and so on. Everything in human endeavor is like this—you win some, you lose some; what matters is whether you made the best decisions at the time.
Now, process orientation doesn’t mean we never look at the results—it just constrains how you look at them. We look at results to figure out which decision-making processes are best, and over time, that feeds back into evaluating someone’s performance. For example, perhaps we as a company realized that shipping bad code causes problems later on, so we decided code reviews need to happen on time, and communicated that. If the person continues deciding not to prioritize code reviews even after we’ve changed the process, it is likely bad decision making; but maybe they have an overriding reason.
Process orientation also does not mean that the company should bend over backwards to codify processes. Usually, we should limit ourselves to explaining best practices, then trust people’s ability to make good decisions in the moment, because it’s very hard to write down everything that matters in a decision. A prominent example would be COVID vaccine approvals, post-trial. The AstraZeneca vaccine is still not approved in the US (as of this writing), even though it is approved in the EU and known to be safe and effective. It remains unapproved only because the written-down process doesn’t have a good way to accelerate vaccine approvals in this sort of situation.
I bring up the vaccine example not to get in a dig at the FDA, but to point out that it is easy to take the idea of “process” in the wrong direction. Lots of companies have tons of internal bureaucracy about how to get things done. I think people should not be expected to violate bureaucratic rules in order to succeed at their job, and the best way to achieve that is to not create lots of red tape in the first place. Process orientation is about individual decision-making, not written-down process.
I think one of the games that best illustrates process orientation is high-level poker. (Again, not really something I play myself, but I like watching it.) Great poker players all have a very disciplined process for which hands to bet preflop, for calculating pot odds, and for deciding when to call vs. fold. You may go on a long winning or losing streak, but the virtue is sticking to your process—if you get too outcome-oriented, that’s called “going on tilt” and probably means you are about to lose a lot of money.
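The pot-odds discipline mentioned above is just arithmetic, which makes it easy to sketch. Here is a toy Python version—the function name and the example numbers are my own illustration, not anything a particular player or book prescribes:

```python
def should_call(pot: float, to_call: float, win_probability: float) -> bool:
    """Call only if your estimated chance of winning beats the price of the call.

    You must put in `to_call` for a chance to win `pot + to_call`,
    so the break-even win probability is to_call / (pot + to_call).
    """
    break_even = to_call / (pot + to_call)
    return win_probability > break_even

# Example: the pot is $150 after an opponent's $50 bet, so you must call $50
# for a chance at $200. Break-even probability = 50 / 200 = 0.25:
# a 30% chance of winning justifies a call; a 20% chance does not.
```

The process-oriented point is that `should_call` takes only information available *at decision time*; whether this particular hand is won or lost never enters into it.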
As a manager, I implement this system myself and encourage others to do the same. Most of the feedback I give my reports is based on processes they haven’t perfected yet. Here are some ideas on ways to do this:
Reviewing notes and decision logs: If I can get someone to write down things they decided (best) or everything they thought was worth noting (noisier, but still useful) then I can try to get inside their head to simulate what I would have done in the same situation.
Habits: It’s very useful to think about what high-leverage habits would help someone stay on track with their process. For example, writing daily notes, running productive standup meetings, or following a checklist for code deployment. This is a very fruitful area to explore; most of my own productivity comes from having good habits. (The Power of Habit by Duhigg is a great book about this.)
Asking and sharing “whys” proactively: When someone asks me for feedback on some problem (“what elements should be on this screen of the app?”) I often have an immediate answer based on my instincts (“it should have the user’s transaction history”). I give them this answer, but then I ask myself why that particular answer, and share that too (“keep in mind how the user normally arrives at this screen and what they are trying to do”); and maybe even another level of why (“principle of least confusion—you want to match the user’s expectation as much as possible”).
Skills rubric: At Wave, we write down a “rubric” for how an ideal person in a given role would act, and then performance reviews involve checking it against that person’s actual behavior. One example from the Wave senior engineering rubric: “resourcefully debugs complicated issues”. The word “resourcefully” here implies that the person has learned a lot of processes for debugging things; so if you are trying to mentor someone along this axis, you might see something they got stuck on and realize that they are missing an entire category of debugging skills—perhaps they haven’t ever used a tracing framework. This is a rich source of process feedback (noting that skills are a subset of process).
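To make the decision-log idea above concrete, here is one hypothetical shape such an entry might take—the field names and the example are my own invention, not a format Wave actually uses:

```python
from dataclasses import dataclass

@dataclass
class DecisionLogEntry:
    date: str
    decision: str            # what was decided
    alternatives: list[str]  # options considered and rejected
    reasoning: str           # why, given what was known at the time
    outcome: str = "unknown" # filled in later; NOT used to judge the decision

# A hypothetical entry a report might write:
entry = DecisionLogEntry(
    date="2021-03-15",
    decision="Refactor the payments component before adding the new feature",
    alternatives=["Build the feature on top of the existing code"],
    reasoning="Tech debt here caused two recent outages; refactoring de-risks the feature",
)
```

The key design choice is that `reasoning` captures the ex ante view, while `outcome` is recorded separately and later—so a reviewer can simulate the decision with only the information the person had at the time.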
The book Principles by Ray Dalio is a wonderful resource on process orientation.
Last thing. I want to be really clear on this: the only purpose of good process is to produce good outcomes. A process is not good unless it produces good outcomes. I’m trying to walk a fine line in this post—don’t over-focus on the outcomes, but also don’t attach virtue to process for its own sake.