A superintelligence is defined as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’ (p22). By this definition, it seems some superintelligences exist: e.g. basically any economically productive activity I can do, Google can do better. Does I. J. Good’s argument in the last chapter (p4) apply to these superintelligences?
To remind you, it was:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
It’s pretty clear that Google does not scale linearly to the number of people. As a company gets very big, it hits strongly diminishing returns, both in its effectiveness and its internal alignment towards consistent, unified goals.
Organizations grow in power slowly enough that other organizations arise to compete with them. In the case of Google, this can be other companies and also governments.
An organization may be greater than human in some areas, but there is a limit to its ability to “explode.”
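The diminishing-returns point can be put in toy quantitative form. Nothing below is calibrated to any real firm; it just illustrates that if aggregate output grows sublinearly with headcount, output per person falls as the organization grows:

```python
# Toy numbers, not a model of Google: if an organization's aggregate
# output scales like headcount**0.8, per-capita output shrinks with size.
def output(headcount, exponent=0.8):
    """Hypothetical aggregate output of an organization."""
    return headcount ** exponent

for n in (10, 1_000, 100_000):
    print(n, output(n) / n)  # per-capita output falls as n grows
```

The exponent is invented; the qualitative point only needs it to be below 1.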
To paint the picture a bit more:
If (some) corporations are more intelligent than individuals, then, since corporations were themselves designed by humans, corporations should be able to design better (smarter) corporations than themselves, and these should be able to design even better ones, and so on.
We do have consulting firms that are apparently dedicated to studying business methods, and they don’t seem to have undergone any kind of recursive self-improvement cascade.
So why is this? One possibility is that intelligence is not general enough. Another is that there is a sufficiently general intelligence, but corporations don’t have more of it than people.
A third option, which I at least partly endorse, is that corporations aren’t entirely designed by individuals; they’re designed by an even larger aggregate: culture, society, the market, the historical dialectic, or some such, which is smarter than they are.
To carry this notion to its extreme, human intelligence is what a lone human growing up in the wilderness has. Everything above that is human-society intelligence.
Eliezer has argued that corporations cannot replicate with sufficient fidelity for evolution to operate, which would also rule out any sort of corporate self-improvement: http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/
A second problem lies in the mechanics, even assuming sufficient fidelity: the process is adversarial.
Any sort of self-improvement loop for corporations (or similar ideas, like perpetuities) has to deal with the basic fact that it’s, as the Soylent Green joke goes, ‘made of people [humans]!’ There are clear diseconomies of scale due to the fact that you’re inherently dealing with humans: you cannot simply invest a nickel and come back to a cosmic fortune in 5 millennia, because if your investments prosper, the people managing them will simply steal them, or blow the investments, or pay themselves lucratively, or the state will steal them (see: the Catholic Church in England and France, large corporations in France, waqfs throughout the Middle East, giant monasteries in Japan and China...). If some corporation did figure out a good trick to improve itself, and it increased profitability and took over a good chunk of its market, now it must contend with internal principal-agent problems and the dysfunctionality of the humans who comprise it. (Monopolies are not known for their efficiency or nimbleness; but why not? They were good enough to become a monopoly in the first place, after all. Why do they predictably decay? Why do big tech corporations seem to have such difficulties maintaining their ‘culture’ and have to feed voraciously on startups, like Bathory bathing in the blood of the young or mice receiving transfusions?)
Any such corporate loop would fizzle out as soon as it started to yield some fruit. You can’t start such a loop without humans, and humans mean the loop won’t continue as they harvest the fruits.
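The ‘nickel’ point above is just compound-interest arithmetic. The rates below are invented for illustration: pure compounding over millennia yields absurd sums, but a leakage rate (theft, mismanagement, confiscation) exceeding the growth rate makes the same investment dwindle instead:

```python
# Illustrative rates only. A nickel compounding at 2%/year for five
# millennia is astronomical; add a 3%/year "leakage" (theft, mismanagement,
# confiscation) and the net rate goes negative, so the fortune never appears.
def value(principal, growth, leakage, years):
    return principal * (1 + growth - leakage) ** years

print(value(0.05, 0.02, 0.00, 5000))  # cosmic fortune
print(value(0.05, 0.02, 0.03, 5000))  # effectively nothing
```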
Of course, if a corporation could run on hardware which didn’t think for itself and if it had some sort of near-magical solution to principal-agent problems and other issues (perhaps some sort of flexible general intelligence with incentives and a goal system that could be rewired), then it might be a different story...
So you would change the line,
‘...Since the design of [companies] is one of these intellectual activities, an ultraintelligent [company] could design even better [companies]...’
to something like ‘Since the design of [companies] is one of these intellectual activities, then, if anyone could reliably design companies at all, an ultraintelligent [company] could design even better companies, though at a decreasing rate due to diseconomies of scale’?
though at a decreasing rate due to diseconomies of scale
Not quite. My point here is that human-based corporations have several flaws which I think cumulatively bar any major amount of self-improvement: they can’t perpetuate or replicate themselves very well, which bars self-improvement or evolving better versions of themselves (because they are made of squishy parts like pieces of paper and humans; there is nothing remotely equivalent to ‘DNA’ or ‘source code’ for a corporation which could be copied with high fidelity), and if they did, their treacherous components (humans) would steal any gains.
If you could make a corporation out of something more reliable and concrete like computer programming, and if you could replace the treacherous components with more trustworthy components, then the Goodian loop seems possible to me. Of course, at that point, one would just consider the corporation as a whole to be an intelligence with perhaps a slightly exotic decentralized architecture of highly-powered neural networks glued together by some framework code, like how a human brain can be seen as a number of somewhat independent regions glued together by networks of neurons, and it would just be a special case of the original Goodian loop ‘Since the design of another [intelligence] is an activity...’.
there is nothing remotely equivalent to ‘DNA’ or ‘source code’ for a corporation which could be copied with high fidelity
Franchising seems to work fairly well. Although I suppose that’s slightly different: you have one entity whose business it is to create and promote a highly reliable and reproducible business model, and then a whole bunch of much smaller entities running that model and sending kickbacks to the parent company. But the parent’s business model and its children’s don’t have much in common.
Are there any franchise-like organizations that spread peer to peer? I don’t know of any, but this isn’t my field.
Franchising isn’t that common—I only seem to hear of it in the food industry, anyway. It seems to be good for a few simple niches where uniformity (predictability) is itself valued by customers at the expense of quality.
Now I’ve got a wild idea for a burger joint that optimizes its business model using genetic programming methods.
Even if franchising only arises in specific demand circumstances, it suggests that it is possible to replicate a business more-or-less, and that there are other reasons why it isn’t done more often. And for some kind of evolution, you don’t really need a peer to peer spreading franchise—if the parent organization creates new offshoots more like the ones that went better last time, you would have the same effect, and I bet they do.
Also, I don’t think replication is required in the Good argument—merely being able to create a new entity which is more effective than you.
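The ‘offshoots more like the ones that went better last time’ process is essentially a simple evolutionary loop with selection. A minimal sketch, with an invented one-parameter ‘business model’ and a made-up fitness function:

```python
import random

# A toy version of "creates new offshoots more like the ones that went
# better last time". The fitness function, mutation size, and the single
# numeric "business model" parameter are all invented for illustration.
random.seed(0)

def fitness(model):
    # Hypothetical: profitability peaks when the parameter is near 3.0.
    return -(model - 3.0) ** 2

parent = 0.0
for generation in range(50):
    # Keep the parent and spawn ten variations of it, then copy the best.
    candidates = [parent] + [parent + random.gauss(0, 0.5) for _ in range(10)]
    parent = max(candidates, key=fitness)

print(round(parent, 2))  # selection drifts the model toward the optimum
```

Retaining the parent in the candidate pool (elitism) means fitness never regresses, which matches a franchiser that only replaces its standard model when an offshoot demonstrably does better.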
Even if franchising only arises in specific demand circumstances, it suggests that it is possible to replicate a business more-or-less
No, it suggests it’s possible to replicate a few particular businesses with sufficient success. (If the franchisee fails, that’s not a big problem for the franchiser.) The examples I know of are all fairly simple businesses like fast food. Their exceptionality in this respect means they are the exception which proves the rule.
No, it suggests it’s possible to replicate a few particular businesses with sufficient success
All startups (by Paul Graham’s definition) rely on massively replicating a successful business element, for example.
The boundaries of a firm are, in certain ways, arbitrary. A firm can “replicate” by selling franchises, but it can also replicate by opening new offices, new factories, etc.
Some examples: the big four accounting firms, test prep, offshore drilling, cell service infrastructure...
A third option, which I at least partly endorse, is that corporations aren’t entirely designed by individuals; they’re designed by an even larger aggregate: culture, society, the market, the historical dialectic, or some such, which is smarter than they are.
They were designed by Gnon, same as humans.
The fourth option is that there are resource constraints and they matter.
You mean that our use of resources is already close to optimal so that higher intelligence won’t boost results a whole lot?
See my reply here.
Could you rephrase what you mean more specifically? Does it apply to AI also?
My meaning is very straightforward. While we often treat computation as abstract information processing, in reality it requires and depends on certain resources, notably a particular computing substrate and a usable inflow of energy. The availability of these resources can and often does limit what can be done.
Physically, biologically, and historically, resource limits are what usually constrains the growth of systems.
The limits in question are rarely absolute, of course, and often enough there are ways to find more resources or engineer away the need for some particular resource. However, that itself consumes resources (notably, time). For a growing intelligence, resource constraints might not be a huge problem in the long term, but they are often the bottleneck in the short term.
This argument doesn’t say anything about the likely pace of such an intelligence explosion. If you are willing to squint, it’s not too hard to see ourselves as living through one.
Google has some further candidates. While it can accomplish much more than its founders:
It uses more resources to do so (in several senses, though not all)
Ex ante, Google’s founders probably could not have produced something as effective as Google with very high probability.
Owing to the first point, you might expect an “explosion” driven by these dynamics to bottom out when all of society is organized as effectively as Google, since at this point there is no further room for development using similar mechanisms.
Owing to the second point, you might not expect there to be any explosion at all. Even if Google were, say, 100x better than its founders at generating successful companies, it is not clear whether it would create more than one in expectation.
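The second point is a claim about a reproduction number. Treating company-creation as a branching process (a simplification, with invented numbers): if each successful company spawns on average r new successful companies, the cascade dies out for r < 1 no matter how much the multiplier improved over the founders, and only r > 1 gives an explosion:

```python
import math
import random

random.seed(1)

def poisson(r):
    # Knuth's method for sampling a Poisson random variable.
    threshold, count, product = math.exp(-r), 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1

def cascade_size(r, cap=10_000):
    # Total companies ever created, starting from one, where each company
    # spawns Poisson(r) successful successors. The value of r is invented.
    frontier, total = 1, 1
    while frontier and total < cap:
        frontier = sum(poisson(r) for _ in range(frontier))
        total += frontier
    return total

print(cascade_size(0.5))  # subcritical (e.g. 100x a baseline of 0.005): dies out
print(cascade_size(1.5))  # supercritical: can explode to the cap
```

So a 100x improvement on a founder baseline of 0.005 expected successors still gives r = 0.5, and the process fizzles almost immediately.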