To paint the picture a bit more: if (some) corporations are more intelligent than individual humans, corporations should be able to design corporations better (smarter) than themselves, since existing corporations were designed by mere humans; these new corporations should be able to design even better ones, and so on.
A second problem lies in the mechanics, even assuming sufficient fidelity: the process is adversarial.
Any sort of self-improvement loop for corporations (or similar ideas, like perpetuities) has to deal with the basic fact that it’s, as the Soylent Green joke goes, ‘made of people [humans]!’ There are clear diseconomies of scale due to the fact that you’re inherently dealing with humans: you cannot simply invest a nickel and come back to a cosmic fortune in 5 millennia, because if your investments prosper, the people managing them will simply steal them, blow the investments, or pay themselves lucratively, or the state will confiscate them (see: the Catholic Church in England and France, large corporations in France, waqfs throughout the Middle East, giant monasteries in Japan and China...). If some corporation did figure out a good trick to improve itself, and it increased profitability and took over a good chunk of its market, it must now contend with internal principal-agent problems and the dysfunctionality of the humans who comprise it. (Monopolies are not known for their efficiency or nimbleness; but why not? They were good enough to become monopolies in the first place, after all. Why do they predictably decay? Why do big tech corporations seem to have such difficulty maintaining their ‘culture’ and have to feed voraciously on startups, like Báthory bathing in the blood of the young or old mice receiving transfusions from young ones?)
Any such corporate loop would fizzle out as soon as it started to yield some fruit. You can’t start such a loop without humans, and humans mean the loop won’t continue as they harvest the fruits.
Of course, if a corporation could run on hardware which didn’t think for itself and if it had some sort of near-magical solution to principal-agent problems and other issues (perhaps some sort of flexible general intelligence with incentives and a goal system that could be rewired), then it might be a different story...
So you would change the line,
...Since the design of [companies] is one of these intellectual activities, an ultraintelligent [company] could design even better [companies]...
to something like ‘Since the design of [companies] is one of these intellectual activities, then, if anyone could reliably design companies at all, an ultraintelligent [company] could design even better companies, though at a decreasing rate due to diseconomies of scale’?
though at a decreasing rate due to diseconomies of scale
Not quite. My point here is that human-based corporations have several flaws which I think cumulatively bar any major amount of self-improvement: they can’t perpetuate or replicate themselves very well, which bars self-improvement or evolving better versions of themselves (because they are made of squishy parts like pieces of paper and humans; there is nothing remotely equivalent to ‘DNA’ or ‘source code’ for a corporation which could be copied with high fidelity), and even if they could, their treacherous components (humans) would steal any gains.
If you could make a corporation out of something more reliable and concrete like computer programming, and if you could replace the treacherous components with more trustworthy components, then the Goodian loop seems possible to me. Of course, at that point, one would just consider the corporation as a whole to be an intelligence with perhaps a slightly exotic decentralized architecture of highly-powered neural networks glued together by some framework code, like how a human brain can be seen as a number of somewhat independent regions glued together by networks of neurons, and it would just be a special case of the original Goodian loop ‘Since the design of another [intelligence] is an activity...’.
there is nothing remotely equivalent to ‘DNA’ or ‘source code’ for a corporation which could be copied with high fidelity
Franchising seems to work fairly well. Although I suppose that’s slightly different: you have one entity whose business it is to create and promote a highly reliable and reproducible business model, and then a whole bunch of much smaller entities running that model and sending kickbacks to the parent company. But the parent’s business model and its children’s don’t have much in common.
Are there any franchise-like organizations that spread peer to peer? I don’t know of any, but this isn’t my field.
Franchising isn’t that common—I only seem to hear of it in the food industry, anyway. It seems to be good for a few simple niches where uniformity (predictability) is itself valued by customers at the expense of quality.
Even if franchising only arises in specific demand circumstances, it suggests that it is possible to replicate a business more-or-less, and that there are other reasons why it isn’t done more often. And for some kind of evolution, you don’t really need a peer to peer spreading franchise—if the parent organization creates new offshoots more like the ones that went better last time, you would have the same effect, and I bet they do.
Also, I don’t think replication is required in the Good argument—merely being able to create a new entity which is more effective than you.
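A minimal sketch of that ‘offshoots more like the ones that went better last time’ dynamic. Everything here is made up for illustration: the ‘business model’ is just a parameter vector, and the profitability function is an arbitrary stand-in that peaks when every parameter equals 1.0.

```python
import random

random.seed(0)

def profitability(model):
    # Hypothetical fitness: profit peaks (at 0.0) when every parameter is 1.0.
    return -sum((p - 1.0) ** 2 for p in model)

def open_offshoot(parent, drift=0.1):
    # The parent replicates its current best model, imperfectly.
    return [p + random.gauss(0, drift) for p in parent]

best = [0.0] * 5  # the founding business model
for generation in range(50):
    offshoots = [open_offshoot(best) for _ in range(10)]
    fittest = max(offshoots, key=profitability)
    if profitability(fittest) > profitability(best):
        best = fittest  # keep the model that 'went better last time'

print(round(profitability(best), 3))  # creeps toward the optimum, 0.0
```

This is just hill-climbing, but note what it quietly presumes: the parent can copy its current model into each offshoot with only small, bounded drift. That is exactly the fidelity assumption under dispute here.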
Even if franchising only arises in specific demand circumstances, it suggests that it is possible to replicate a business more-or-less
No, it suggests it’s possible to replicate a few particular businesses with sufficient success. (If the franchisee fails, that’s not a big problem for the franchiser.) The examples I know of are all fairly simple businesses like fast food; they are the exception that proves the rule.
No, it suggests it’s possible to replicate a few particular businesses with sufficient success
All startups (by Paul Graham’s definition) rely on massively replicating a successful business element, for example.
The boundaries of a firm are, in certain ways, arbitrary. A firm can “replicate” by selling franchises, but it can also replicate by opening new offices, new factories, etc.
Some examples: the big four accounting firms, test prep, offshore drilling, cell service infrastructure...
Eliezer has argued that corporations cannot replicate with sufficient fidelity for evolution to operate, which would also rule out any sort of corporate self-improvement: http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/
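That fidelity point can be made concrete with a toy model (the 20-bit ‘genome’ of business practices, the error rates, and the population size are all invented purely for illustration): when copying is accurate, selection accumulates good practices generation after generation; when copying is sloppy, each generation loses practices as fast as selection finds them, so nothing cumulative happens.

```python
import random

random.seed(1)

GENOME = 20  # number of 'practices' a firm can get right (1) or wrong (0)

def evolve(copy_error, generations=200, population=30):
    # Each generation, firms replicate from the current best firm, flipping
    # each practice with probability copy_error; the most profitable copy
    # (fitness = practices gotten right) becomes the next parent.
    best = [0] * GENOME
    for _ in range(generations):
        children = [[bit ^ (random.random() < copy_error) for bit in best]
                    for _ in range(population)]
        best = max(children, key=sum)
    return sum(best)

high_fidelity = evolve(copy_error=0.01)  # accurate copying: climbs to ~20
low_fidelity = evolve(copy_error=0.50)   # coin-flip copying: no accumulation
print(high_fidelity, low_fidelity)
```

At a 50% per-bit error rate the ‘offspring’ carry no information about the parent at all, so the result never rises above what a single random batch happens to contain; only the high-fidelity run actually compounds its gains.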
Now I’ve got a wild idea for a burger joint that optimizes its business model using genetic programming methods.