Hi. So I sometimes see people saying things like, “Okay, so your argument is that at some point in the future we’re going to develop intelligent agents that are able to reason about the world in general and take actions in the world to achieve their goals. These agents might have superhuman intelligence that allows them to be very good at achieving their goals, and this is a problem because they might have different goals from us. But don’t we kind of have that already? Corporations can be thought of as superintelligent agents. They’re able to think about the world in general and they can outperform individual humans across a range of cognitive tasks. And they have goals—namely, maximizing profits or shareholder value or whatever—and these goals aren’t the same as the overall goals of humanity. So corporations are a kind of misaligned superintelligence.”
The people who say this, having established the metaphor, at this point tend to diverge, mostly along political lines. Some say, “Corporations are therefore a clear threat to human values and goals in the same way that misaligned superintelligences are, and they need to be much more tightly controlled if not destroyed altogether.” Others say, “Corporations are like misaligned superintelligences, but corporations have been instrumental in the huge increases of human wealth and well-being that we’ve seen over the last couple of centuries, with pretty minor negative side effects overall. If that’s the effect of misaligned superintelligences, I don’t see why we should be concerned about AI.” And others say, “Corporations certainly have their problems, but we seem to have developed systems that keep them under control well enough that they’re able to create value and do useful things without literally killing everyone. So perhaps we can learn something about how to control or align superintelligences by looking at how we handle corporations.”
So we’re gonna let the first two fight amongst themselves and we’ll talk to the third guy.
So how good is this metaphor? Are corporations really like misaligned superintelligences? (Quick note before we start: we’re going to be comparing corporations to AI systems, and this gets a lot more complicated when you consider that corporations in fact use AI systems. So for the sake of simplicity, we’re going to assume that corporations don’t use AI systems, because otherwise the problem gets recursive and, like, not in a cool way.)
First off, are corporations agents in the relevant way? I would say “yeah, pretty much.” I think that it’s reasonably productive to think of a corporation as an agent. They do seem to make decisions and take actions in the world in order to achieve goals in the world. But I think you face a similar problem thinking of corporations as agents as you do when you try to think of human beings as agents. In economics, it’s common to model human beings as agents that want to maximize their money in some sense. And you can model corporations in the same way, and this is useful. But it is kind of a simplification in that human beings in practice want things that aren’t just money. And while corporations are more directly aligned with profit maximizing than individual human beings are, it’s not quite that simple. So yes, we can think of corporations as agents, but we can’t treat their stated goals as being exactly equivalent to their actual goals in practice. More on that later.
So corporations are more or less agents. Are they generally intelligent agents? Again, yeah, I think so. I mean, corporations are made up of human beings, so they have all the same general intelligence capabilities that human beings have.
So then the question is: are they superintelligent? This is where things get interesting, because the answer is “kind of.” Like, SpaceX is able to design a better rocket than any individual human engineer could design. Rocket design is a cognitive task, and SpaceX is better at that than any human being. Therefore, SpaceX is a superintelligence… in the domain of rocket design. But a calculator is a superintelligence in the domain of arithmetic. That’s not enough.
Are corporations general superintelligences? Do they outperform humans across a wide range of cognitive tasks, as an AGI could? In practice, it depends on the task. Consider playing a strategy game. For the sake of simplicity, let’s use a game that humans still beat AI systems at, like Starcraft. If a corporation, for some reason, had to win at Starcraft, it could perform about as well as the best human players. It would do that by hiring the best human players. But you won’t achieve superhuman play that way. A human player acting on behalf of the corporation is just a human player, and the corporation doesn’t really have a way to do much better than that. A team of reasonably good Starcraft players working together to control one army will still lose to a single very good player working alone.
This seems to be true for a lot of strategy games. The classic example is the game of Kasparov versus the World, where Garry Kasparov played against the entire rest of the world cooperating on the Internet. The game was kind of weird, but Kasparov ended up winning. And the kind of real-world strategy that corporations have to do seems like it might be similar as well. When companies outsmart their competition, it’s usually because they have a small number of decision-makers who are unusually smart, rather than because they have a hundred reasonably smart people working together. For at least some tasks, teams of humans are not able to effectively combine their intelligence to achieve highly superhuman performance.
So corporations are limited to around human-level intelligence on those tasks. To break down where this limit comes from, let's look at some different options corporations have: four ways to combine human intelligences. One obvious way is specialization: if you can divide the task into parts that people can specialize in, you can outperform individuals. You can have one person who's skilled at engine design, one who's great at aerodynamics, one who knows a lot about structural engineering, and one who's good at avionics. [Graph: multiple narrow curves.] Can you tell I'm not a rocket surgeon? Anyway, if these people with their different skills are able to work together well, with each person doing what they're best at, the resulting agent will in a sense have superhuman intelligence. No single human could ever be so good at so many different things. [Graph: the maximum of these curves is a broad curve.] But this mechanism doesn't get you superhumanly high intelligence, just superhumanly broad intelligence, whereas superintelligent software (AGI) might look like this. [Graph: a both broad and high curve.]
So specialization yields a fairly limited form of superintelligence, if you can split your task up. But that’s not easy for all tasks. For example, the task of coming up with creative ideas or strategies isn’t easy to split up. You either have a good idea or you don’t. But as a team, you can get everyone to suggest a strategy or idea, and then pick the best one. That way, a group can perform better than any individual human. How much better, though, and how does that change with the size of the team? I got curious about exactly how this works, so I came up with a toy model. Now, I’m not a statistician, I’m a computer scientist, so rather than working it out properly I just simulated it a hundred million times, because that was quicker.
Okay, so here’s the idea quality distribution for an individual human. We’ll model it as a normal distribution with a mean of 100 and a standard deviation of 20. So what this means is you ask a human for a suggestion, and sometimes they do really well and come up with a 130-level strategy. Sometimes they screw up and can only give you a 70 idea. But most of the time, it’s around 100. Now suppose we had a second person whose intelligence is the same as the first. We have both of them come up with ideas and we keep whichever idea is better. The resulting team of two people combined looks like this. On average, the ideas are better. The mean is now, what, 111? And as we keep adding people, the performance gets better. Here’s 5 people, 10, 20, 50, 100. Remember, these are probability distributions, so the height doesn’t really matter. The point is that the distributions move to the right and get thinner. The average idea quality goes up and the standard deviation goes down. So we’re coming up with better ideas, and more reliably. But you see how the progress is slowing down. We’re using a hundred times as much brainpower here, but our average ideas are only around 50% better. What if we use a thousand people, ten times more resources? That only gets us up to around 165. Diminishing returns.
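Here’s a minimal sketch of that toy model (the function name and trial count are my own, and I’m using far fewer than a hundred million trials, since we only need the shape of the result):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_of_n(n, trials=10_000, mean=100.0, sd=20.0):
    """Each of n people draws an idea quality from N(mean, sd);
    the team keeps the best suggestion. Returns the mean and
    standard deviation of that best-of-n quality."""
    ideas = rng.normal(mean, sd, size=(trials, n))
    best = ideas.max(axis=1)
    return best.mean(), best.std()

for n in [1, 2, 5, 10, 100, 1000]:
    m, s = best_of_n(n)
    print(f"{n:>4} people: mean quality {m:6.1f}, std {s:5.1f}")
```

Each tenfold increase in team size buys a smaller and smaller improvement in mean quality, while the distribution of the best idea narrows as it shifts right.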
So what does this mean for corporations? Well, first off, to be fair, this team of a thousand people is clearly superintelligent. The worst ideas it ever has are still so good that an individual human will hardly ever manage to think of them. But it’s still pretty limited. There’s all this space off to the right of the graph that it would take vast team sizes to ever get into. If you’re wondering how this would look with seven billion humans, well, you have to work out the statistical solution yourself. The point is the team isn’t that superintelligent, because it’s never going to think of an idea that no human could think of, which is kind of obvious when you think about it. An AGI wouldn’t be limited in that way.
And in practice, even this model is way too optimistic for corporations. Firstly, because it assumes that the quality of suggestions for a particular problem is uncorrelated between humans, which is clearly not true. And secondly, because you have to pick out the best suggestion, but how can you be sure that you’ll know the best idea when you see it? It happens to be true a lot of the time, for a lot of the problems that we care about, that evaluating solutions is easier than coming up with them. “You know, Homer, it’s very easy to criticize.” Machine learning relies pretty heavily on this. Like, writing a program that differentiates pictures of cats and dogs is really hard, but evaluating such a program is fairly simple. You just show it lots of pictures of cats and dogs and see how well it does. The clever bit is in figuring out how to take a method for evaluating solutions and use that to create good solutions.
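As a sketch of why evaluation is the easy direction, here’s what evaluating a classifier amounts to (the stand-in classifier and the tiny dataset are obviously toys of my own making, not a real vision system):

```python
def accuracy(classifier, labelled_examples):
    """Evaluating a classifier is easy: run it on examples whose true
    labels we already know, and count how often it agrees."""
    correct = sum(1 for example, label in labelled_examples
                  if classifier(example) == label)
    return correct / len(labelled_examples)

# A deliberately useless stand-in: building a *good* classifier is the hard part.
always_cat = lambda image: "cat"
dataset = [("img_01", "cat"), ("img_02", "dog"),
           ("img_03", "cat"), ("img_04", "dog")]
print(accuracy(always_cat, dataset))  # 0.5
```

The evaluation loop is a few lines; the generation problem it scores is the entire field of machine learning.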
Anyway, this assumption isn’t always true, and even when it is, the fact that evaluation is easier or cheaper than generation doesn’t mean that evaluation is easy or cheap. Like, I couldn’t generate a good rocket design myself, but I can tell you that this one needs work. [Video of a rocket exploding.] So evaluation is easier than generation. But that’s a very expensive way to find out, and I wouldn’t have been able to do it the cheap way by just looking at the blueprints. The skills needed to evaluate in advance whether a given rocket design will explode are very closely related to the skills needed to generate a non-exploding rocket design.
So yeah, even if a corporation could somehow get around being limited to the kind of ideas that humans are able to generate, they’re still limited to the kind of ideas that humans are able to recognize as good ideas. Just how serious is this limitation? How good are the strategies and ideas that corporations are missing out on? Well, take a minute to think of an idea that’s too good for any human to recognize it as good. Got one? Well, it was worth a shot. We actually do have an example of this kind of thing, in move 37 from AlphaGo’s 2016 match with world champion Lee Sedol. “That’s a very… that’s a very surprising move. I thought it was a mistake.” Yeah, that turned out to be pretty much the move that won the game. But your Go-playing corporation is never going to make move 37. Even if someone happens to suggest it, it’s almost certainly not going to be chosen. “Normally, human, we never play this one because it’s bad!” It’s not enough for someone in your corporation to have a great idea. The people at the top need to recognize that it’s a great idea. That means that there’s a limit on the effective creative or strategic intelligence of a corporation which is determined by the intelligence of the decision-makers and their ability to know a good idea when they see one.
Okay. What about speed? That’s one of the things that makes AI systems so powerful, and one of the ways that software AGI is likely to be superintelligent. The general trend is we go from “computer can’t do this at all” to “computers can do this much faster than people”. Not always, but in general. So I wouldn’t be surprised if that pattern continues with AGI. How does the corporation rate on speed? Again, it kind of depends. This is closely related to something we’ve talked about before: parallelizability. Some tasks are easy to split up and work on in parallel, and some aren’t. For example, if you’ve got a big list of a thousand numbers and you need to add them all up, it’s very easy to parallelize. If you have ten people, you can just say, “Okay, you take the first hundred numbers, you take the second hundred, you take the third, and so on.” Have everybody add up their part of the list, and then at the end, you add up everyone’s totals. However long the list is, you can throw more people at it and get it done faster—much faster than any individual human could. This is the kind of task where it’s easy for corporations to achieve superhuman speed.
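The split-the-list approach looks something like this (a sketch; note that in CPython, threads won’t actually speed up pure-Python arithmetic because of the global interpreter lock — the point here is the structure of the decomposition, which is the same one the ten people use):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(numbers, workers=10):
    """Give each worker a chunk of the list, sum the chunks
    independently, then add up the per-worker subtotals."""
    chunk = -(-len(numbers) // workers)  # ceiling division
    parts = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        subtotals = list(pool.map(sum, parts))
    return sum(subtotals)

numbers = list(range(1, 1001))
assert parallel_sum(numbers) == sum(numbers)  # 500500 either way
```

Because no chunk depends on any other chunk, you can keep adding workers and (up to coordination overhead) keep getting faster.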
But suppose instead of summing a list, you have a simple simulation that you want to run for, say, a thousand seconds. You can’t say, “okay, you work out the first hundred seconds of the simulation, and you do the next hundred, and you do the next hundred, and so on,” because obviously the person who’s simulating second 100 needs to know what happened at the end of second 99 before they can get started. So this is what’s called an inherently serial task. You can’t easily do it much faster by adding more people. You can’t get a baby in less than nine months by hiring two pregnant women, you know. Most real-world tasks are somewhere in between. You get some benefits from adding more people, but again, you hit diminishing returns. Some parts of the task can be split up and worked on in parallel. Some parts need to happen one after the other.
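This “somewhere in between” observation is usually formalized as Amdahl’s law: if a fraction p of the task is parallelizable and the rest is inherently serial, the speedup from n workers is capped no matter how large n gets. A quick sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: best-case speedup from n workers when a
    fraction p of the task can run in parallel and (1 - p) is
    inherently serial."""
    return 1 / ((1 - p) + p / n)

# Even with 90% of the work parallelizable, you can never beat 10x:
for n in [1, 10, 100, 1000]:
    print(f"{n:>4} workers: {amdahl_speedup(0.9, n):.2f}x speedup")
```

With p = 0.9, ten workers get you about a 5.3x speedup, a thousand workers only about 9.9x, and the limit as n grows is exactly 1/(1 − p) = 10x — diminishing returns in formula form.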
So yes, corporations can achieve superhuman speed at some important cognitive tasks, but really, if you want to talk about speed in a principled way, you need to differentiate between throughput (how much goes through the system within a certain time) and latency (how long it takes a single thing to go through the system). These ideas are most often used in things like networking, and I think that’s the easiest way to explain it. So basically, let’s say you need to send someone a large file, and you can either send it over a dial-up internet connection, or you can send them a physical disk through the postal system. The dial-up connection is low-latency (each bit of the file goes through the system quickly) but it’s also low-throughput (the rate at which you can send data is pretty low). Whereas sending the physical disk is high-latency (it might take days for the first bit to arrive) but it’s also high-throughput (you can put vast amounts of data on the disk, so your average data sent per second could actually be very good).
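A back-of-envelope comparison makes the trade-off concrete (all the numbers here are illustrative assumptions of mine, not measurements):

```python
# Dial-up: low latency, low throughput.
dialup_bytes_per_s = 56_000 / 8          # 56 kbit/s modem, about 7 kB/s

# Posting a disk: high latency, high throughput.
disk_bytes = 1_000_000_000_000           # a 1 TB disk in an envelope
post_seconds = 2 * 24 * 3600             # two days in transit

disk_bytes_per_s = disk_bytes / post_seconds
print(f"dial-up:     {dialup_bytes_per_s / 1e3:.0f} kB/s, first bit arrives in milliseconds")
print(f"posted disk: {disk_bytes_per_s / 1e6:.1f} MB/s average, first bit arrives in days")
```

The posted disk averages several megabytes per second — hundreds of times the modem’s throughput — despite taking days to deliver its first bit. This is the old networking adage about never underestimating the bandwidth of a station wagon full of tapes.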
Corporations are able to combine human intelligences to achieve superhuman throughput, so they can complete large complex tasks faster than individual humans could. But the thing is, a system can’t have lower latency than its slowest component. And corporations are made of humans, so corporations aren’t able to achieve superhumanly low latency. In practice, as you’ve no doubt experienced, it’s quite the opposite. So corporate intelligence is kind of like sending the physical disk. Corporations can get a lot of cognitive work done in a given time, but they’re slow to react. And that’s a big part of what makes corporations relatively controllable: they tend to react so slowly that even governments are sometimes able to move fast enough to deal with them. Software superintelligence, on the other hand, could have superhuman throughput and superhumanly low latency, which is something we’ve never experienced before in a general intelligence.
So are corporations superintelligent agents? Well, they’re pretty much generally intelligent agents which are somewhat superintelligent in some ways and somewhat below human performance in others. So yeah, kinda?
The next question is: are they misaligned? But this video is already like 14 and a half minutes long, so we’ll get to that in the next video.