Simulate the CEO

Humans can organize themselves into remarkably large groups. Google has over a hundred thousand employees, the worldwide Scouting movement has over fifty million scouts, and the Catholic Church has over a billion believers.

So how do large numbers of people coordinate to work towards a common mission?

Most organizations are headed by some kind of “CEO” figure. They may use a title like President, Executive Director, or Pope, and their power is likely constrained by some kind of board or parliament, but the basic idea is the same—there is a single person who directs the behavior of everyone else.

If you have a small group of people then the CEO can just tell each individual person what to do, but that doesn’t scale to large organizations. Thus most organizations have layers of middle managers (vicars, moderators, regional coordinators, etc.) between the CEO and regular members.

In this post I want to argue that one important thing middle managers do is “simulate the CEO”. If someone wants to know what they should be doing, the middle manager can respond with an approximation of the answer the CEO would have given, and thus allow a large number of people to act as if the CEO were telling them what to do.

This isn’t a perfect explanation of how a large organization works. In particular, it ignores the messy human politics and game playing that makes up an important part of how companies work and what middle managers do. But I think CEO-simulation is a large enough part of what middle managers do that it’s worth taking a blog post to explore the concept further.

In particular, if simulating the CEO is a large part of what middle managers do, then it’s interesting to think about what could happen if large language models like GPT get good at simulating the CEO.


A common role in tech companies is the Product Manager (PM). The job of a PM is to cause the company to do something (e.g. launch a product) that requires work from multiple teams. Crucially, the PM does not manage any of these teams and has no power to tell any of them what to do.

So why does anyone do what the PM asks?

Mostly, it’s because people trust that the PM is an accurate simulation of the CEO. If the PM says something is important, then the CEO thinks it is important. If the PM says the team should do things a particular way then the CEO would want them to do it that way. If you do what the PM says then the CEO will be happy with you and your status in the company will improve.

Part of the reason Sundar Pichai rose to being CEO of Google is that he got a reputation for being able to explain Larry Page’s thinking better than Larry could—he was a better simulation of Larry than Larry was.

Of course the ability to simulate the CEO is important for any employee. A people manager will gain power if people believe they accurately simulate the CEO, and so will a designer or an engineer. But the importance of simulating the CEO is most visible with a PM since they have no other source of power.


Of course, the CEO can’t possibly understand every detail of what a large company does. The CEO of Intel might have a high-level understanding of how their processors are designed, manufactured, and sold, but they definitely don’t understand any of these areas in enough depth to directly manage the people working on them. Similarly, the Pope knows little about how a particular Catholic school is run.

In practice, a CEO will usually defer to the judgment of people they trust on a particular topic. For example, the CEO of a company might defer to the head of sales for decisions about sales. You can think of the combination of the CEO and the people they defer to as making up an “Extended CEO”—a super-intelligence made by combining the mind of the CEO with the minds of the people the CEO defers to.

When I say someone needs to simulate the CEO, what they really need to simulate is the Extended CEO.


Similarly, most organizations will have different cultures in different teams. For example, Google has very different cultures in Search and Android. In general this is good—it allows a company to try out multiple cultures and find out what works best, and it allows a team to tailor its culture to the kind of work it does.

Having different team cultures is fine when teams have clear boundaries and don’t need to work together. However, if teams need to interact with each other then it is usually necessary for them to align around a shared way of doing things in the space where they interact.

I’ve used the word ‘team’ here, but similar principles apply to any two groups of people that do some things separately and some things together—whether they are companies, families, countries, or sports teams. Two sports teams can have very different cultures, but need to agree on the rules of the game they play together. Two countries can have very different cultures, but have treaties to agree how they interact.


What do you do if you think the (extended) CEO is wrong?

It’s rarely useful to act in direct opposition to the CEO. Even if you are right, you will rapidly lose your ability to influence other people once it becomes clear that you no longer accurately simulate the CEO.

Instead, the best practice is to split your mind into two halves. One half continues to simulate the (possibly wrong) CEO and directs others according to the CEO’s wishes while the other half attempts to persuade the CEO to change their mind. This is sometimes known as “disagree and commit”—you follow the official plan while being open about the reasons it might be wrong.

This also looks a lot like democracy and the rule of law. It’s usually best to follow the laws as written, while campaigning for the laws to be changed to something better.

An important special case is that sometimes the CEO will tell you to do X, but they would have said to do Y if they had access to more information. In that case you can probably do Y instead of X if the team and the CEO both trust you to accurately simulate the CEO.


The CEO can make things easier for everyone else by making themselves easy to simulate.

They can do this by making sure that when they make a decision, they also outline the principles that others could have used to make that decision.

Many organizations have a short list of easily memorized principles that can predict the way the CEO is likely to answer a wide variety of questions. Google has “don’t be evil” and “focus on the user”. Facebook has “move fast and break things”. Christianity has the Beatitudes.

Sometimes this means intentionally thinking in a simpler and more predictable way, in order to make the CEO’s thinking easier to simulate. It’s often better to have a slightly less good mental model that lots of people can apply consistently at speed than a more sophisticated mental model that is so hard to simulate that everyone has to check in with the CEO.

If a manager finds themselves having to micromanage their employees, it often means that they haven’t made themselves easy enough to simulate.


Metrics are also a way of simulating the CEO.

If the engineers at Google Search had infinite time, then the best way to decide whether a ranking change was good might be to have the CEO personally look at thousands of queries and give their personal judgment about whether the results had got better. But that doesn’t scale. Instead, Google pays human raters to evaluate search results according to directions given in a rater guidelines document that tells them how to simulate the CEO.

Similarly, the best way for Facebook to judge whether a product change is good might be to have the CEO personally observe every user’s use of the product. Instead, Facebook uses metrics like Time Spent and Meaningful Social Interactions that roughly approximate the opinion the CEO would have formed, had they seen the users using the product.
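
To make that concrete, here is a minimal sketch of what such a proxy metric might look like. The event names and weights below are entirely hypothetical; they just illustrate the pattern: encode a rough guess about what the CEO would value, then apply it cheaply and consistently at scale.

```python
# Hypothetical sketch of a proxy metric standing in for CEO judgment.
# Event names and weights are invented for illustration; real metrics like
# Meaningful Social Interactions are defined very differently.
from collections import Counter

# The weights encode a guess at what the CEO would value if they could
# personally watch every user.
EVENT_WEIGHTS = {
    "comment": 3.0,       # active, social behaviour counts a lot
    "share": 2.0,
    "like": 0.5,
    "passive_view": 0.1,  # time spent passively scrolling counts for little
}

def interaction_score(session_events: list[str]) -> float:
    """Score one user session by summing weighted event counts."""
    counts = Counter(session_events)
    return sum(EVENT_WEIGHTS.get(event, 0.0) * n for event, n in counts.items())

# Compare average score before and after a product change; a positive delta
# is read as "the (simulated) CEO would approve".
before = [interaction_score(s) for s in [["like", "passive_view", "passive_view"],
                                         ["passive_view"]]]
after = [interaction_score(s) for s in [["comment", "share"],
                                        ["comment", "like", "passive_view"]]]
print(sum(after) / len(after) - sum(before) / len(before))
```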


So what happens if we throw large language models like GPT into the mix?

A good manager is probably smarter than GPT, but what if you fine-tuned GPT on every internal email and document that had ever been written inside the company? It’s possible that it would be better at simulating the CEO than the majority of employees. Moreover, unlike a senior manager, you can ask GPT as many dumb questions as you like without worrying that you are wasting its time.
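
As a rough sketch of what preparing that fine-tuning data might look like (the input file, field names, and prompt format here are assumptions for illustration, not a description of any real pipeline):

```python
# Minimal sketch: turn internal email threads into fine-tuning examples where
# the model learns to produce the CEO's reply to an employee's question.
# The input file, field names, and JSONL layout are assumptions for illustration.
import json

def build_examples(threads_path: str, output_path: str,
                   ceo_address: str = "ceo@example.com") -> None:
    with open(threads_path) as src, open(output_path, "w") as out:
        for line in src:
            thread = json.loads(line)
            # Only keep threads where the CEO actually replied.
            if thread.get("reply_from") != ceo_address:
                continue
            example = {
                "prompt": f"Employee question:\n{thread['question']}\n\nCEO reply:",
                "completion": " " + thread["reply"],
            }
            out.write(json.dumps(example) + "\n")

build_examples("internal_emails.jsonl", "ceo_finetune.jsonl")
```

The point of the sketch is the training target: the model is rewarded for reproducing the reply the CEO actually gave, which is exactly the “simulate the CEO” behaviour described above.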

Similarly, a GPT simulation of the CEO would probably be a better judge of whether a product change is good than low-paid human raters or crude metrics like Time Spent. In these cases you don’t need to be better at simulating the CEO than a skilled manager—just better than the cheap approximations we use for metrics.

We are already starting to see AIs manage humans—such as at Amazon, where distribution center workers have their performance evaluated by a machine learning algorithm.

Is any of this a good idea? I’m not sure. There is something creepy about the idea of having an AI make management decisions or decide what product changes to launch. But if it is more effective then it will probably happen.