No Evolutions for Corporations or Nanodevices
“The laws of physics and the rules of math don’t cease to apply. That leads me to believe that evolution doesn’t stop. That further leads me to believe that nature —bloody in tooth and claw, as some have termed it —will simply be taken to the next level...
“[Getting rid of Darwinian evolution is] like trying to get rid of gravitation. So long as there are limited resources and multiple competing actors capable of passing on characteristics, you have selection pressure.”
—Perry Metzger, predicting that the reign of natural selection would continue into the indefinite future.
In evolutionary biology, as in many other fields, it is important to think quantitatively rather than qualitatively. Does a beneficial mutation “sometimes spread, but not always”? Well, a psychic power would be a beneficial mutation, so you’d expect it to spread, right? Yet this is qualitative reasoning, not quantitative—if X is true, then Y is true; if psychic powers are beneficial, they may spread. In Evolutions Are Stupid, I described the equations for a beneficial mutation’s probability of fixation, roughly twice the fitness advantage (6% for a 3% advantage). Only this kind of numerical thinking is likely to make us realize that mutations which are only rarely useful are extremely unlikely to spread, and that it is practically impossible for complex adaptations to arise without constant use. If psychic powers really existed, we should expect to see everyone using them all the time—not just because they would be so amazingly useful, but because otherwise they couldn’t have evolved in the first place.
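The rule quoted above, fixation probability roughly twice the fitness advantage, can be checked with a quick simulation. This is a minimal sketch; the population size, trial count, and seed are arbitrary choices for illustration, not figures from the text:

```python
import random

def fixation_probability(s, pop_size=100, trials=2000, seed=1):
    """Estimate the chance that a single new mutant with fitness 1 + s
    takes over a fixed-size Wright-Fisher population."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = 1  # copies of the mutant allele
        while 0 < k < pop_size:
            # chance that each offspring draws a mutant parent
            p = k * (1 + s) / (k * (1 + s) + (pop_size - k))
            k = sum(rng.random() < p for _ in range(pop_size))
        if k == pop_size:
            fixed += 1
    return fixed / trials

# For s = 0.03 the estimate comes out near 2s = 0.06:
# most single-copy mutants are lost to chance despite their advantage.
```

Note how most runs end in loss even though the mutant is strictly fitter; that is exactly why rarely-useful mutations almost never spread.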
“So long as there are limited resources and multiple competing actors capable of passing on characteristics, you have selection pressure.” This is qualitative reasoning. How much selection pressure?
While there are several candidates for the most important equation in evolutionary biology, I would pick Price’s Equation, which in its simplest formulation reads:
change in average characteristic = covariance(relative fitness, characteristic)
This is a very powerful and general formula. For example, a particular gene for height can be the Z, the characteristic that changes, in which case Price’s Equation says that the change in the frequency of this gene equals the covariance of the gene with relative reproductive fitness. Or you can consider height in general as the characteristic Z, apart from any particular genes, and Price’s Equation says that the change in height in the next generation will equal the covariance of height with relative reproductive fitness.
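In code, the simplest form of Price’s Equation is just a covariance computation, and it can be checked against a direct head-count of the next generation. This is a minimal sketch assuming perfect transmission and no mutation:

```python
def price_delta(fitness, z):
    """Predicted change in the average of characteristic z:
    delta of mean z = Cov(w / mean w, z), using relative fitness."""
    n = len(fitness)
    w_bar = sum(fitness) / n
    z_bar = sum(z) / n
    return sum((w / w_bar - 1) * (zi - z_bar)
               for w, zi in zip(fitness, z)) / n

def direct_delta(fitness, z):
    """The same change computed directly: the next generation's average z,
    weighting each parent by its number of offspring, minus this one's."""
    return (sum(w * zi for w, zi in zip(fitness, z)) / sum(fitness)
            - sum(z) / len(z))
```

Doubling every entry in `fitness` leaves both numbers unchanged: an absolute fitness gain that benefits every genotype equally produces no change at all in the average characteristic.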
(At least, this is true so long as height is straightforwardly heritable. If nutrition improves, so that a fixed genotype becomes taller, you have to add a correction term to Price’s Equation. If there are complex nonlinear interactions between many genes, you have to either add a correction term, or calculate the equation in such a complicated way that it ceases to enlighten.)
Many enlightenments may be attained by studying the different forms and derivations of Price’s Equation. For example, the final equation says that the average characteristic changes according to its covariance with relative fitness, rather than its absolute fitness. This means that if a Frodo gene saves its whole species from extinction, the average Frodo characteristic does not increase, since Frodo’s act benefited all genotypes equally and did not covary with relative fitness.
It is said that Price became so disturbed with the implications of his equation for altruism that he committed suicide, though he may have had other issues. (Overcoming Bias does not advocate committing suicide after studying Price’s Equation.)
One of the enlightenments which may be gained by meditating upon Price’s Equation is that “limited resources” and “multiple competing actors capable of passing on characteristics” are not sufficient to give rise to an evolution. “Things that replicate themselves” is not a sufficient condition. Even “competition between replicating things” is not sufficient.
Do corporations evolve? They certainly compete. They occasionally spin off children. Their resources are limited. They sometimes die.
But how much does the child of a corporation resemble its parents? Much of the personality of a corporation derives from key officers, and CEOs cannot divide themselves by fission. Price’s Equation only operates to the extent that characteristics are heritable across generations. If great-great-grandchildren don’t much resemble their great-great-grandparents, you won’t get more than four generations’ worth of cumulative selection pressure—anything that happened more than four generations ago will blur itself out. Yes, the personality of a corporation can influence its spinoff—but that’s nothing like the heritability of DNA, which is digital rather than analog, and can transmit itself with 10^-8 errors per base per generation.
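The blurring can be made concrete with a toy number: if only a fraction h of a characteristic survives each parent-to-child transition, an ancestor’s influence falls off as h to the power n. The 0.5 figure and the use of a per-base error rate as a whole-characteristic fidelity below are illustrative assumptions, not measurements:

```python
def ancestral_influence(h, n):
    """Fraction of a characteristic still traceable to an ancestor
    n generations back, given per-generation transmission fidelity h."""
    return h ** n

# Low-fidelity "corporate" transmission: half lost per generation.
# Four generations back, almost nothing of the ancestor remains.
corporate = ancestral_influence(0.5, 4)          # 0.0625

# DNA-style fidelity, a million generations back: still almost intact.
dna = ancestral_influence(1 - 1e-8, 10**6)       # about 0.99
```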
With DNA you have heritability lasting for millions of generations. That’s how complex adaptations can arise by pure evolution—the digital DNA lasts long enough for a gene conveying 3% advantage to spread itself over 768 generations, and then another gene dependent on it can arise. Even if corporations replicated with digital fidelity, they would currently be at most ten generations into the RNA World.
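The 768-generation figure is consistent with the standard back-of-the-envelope estimate for a selective sweep, roughly (2/s) times ln(N) generations. Here N = 100,000 is an assumed population size, not a number given in the text:

```python
import math

def sweep_time(s, n):
    """Rough deterministic estimate of the generations needed for an
    allele with fitness advantage s to spread through a population of n."""
    return 2 * math.log(n) / s

# sweep_time(0.03, 100_000) comes out close to 768 generations.
```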
Now, corporations are certainly selected, in the sense that incompetent corporations go bust. This should logically make you more likely to observe corporations with features contributing to competence. And in the same sense, any star that goes nova shortly after it forms, is less likely to be visible when you look up at the night sky. But if an accident of stellar dynamics makes one star burn longer than another star, that doesn’t make it more likely that future stars will also burn longer—the feature will not be copied onto other stars. We should not expect future astrophysicists to discover complex internal features of stars which seem designed to help them burn longer. That kind of mechanical adaptation requires much larger cumulative selection pressures than a once-off winnowing.
Think of the principle introduced in Einstein’s Arrogance—that the vast majority of the evidence required to think of General Relativity had to go into raising that one particular equation to the level of Einstein’s personal attention; the amount of evidence required to raise it from a deliberately considered possibility to 99.9% certainty was trivial by comparison. In the same sense, complex features of corporations which require hundreds of bits to specify, are produced primarily by human intelligence, not a handful of generations of low-fidelity evolution. In biology, the mutations are purely random and evolution supplies thousands of bits of cumulative selection pressure. In corporations, humans offer up thousand-bit intelligently designed complex “mutations”, and then the further selection pressure of “Did it go bankrupt or not?” accounts for a handful of additional bits in explaining what you see.
Advanced molecular nanotechnology—the artificial sort, not biology—should be able to copy itself with digital fidelity through thousands of generations. Would Price’s Equation thereby gain a foothold?
Correlation is covariance divided by the product of the standard deviations, so if A is highly predictive of B, there can be a strong “correlation” between them even if A is ranging from 0 to 9 and B is only ranging from 50.0001 to 50.0009. Price’s Equation runs on covariance of characteristics with reproduction—not correlation! If you can compress variance in characteristics into a tiny band, the covariance goes way down, and so does the cumulative change in the characteristic.
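The point about compressed variance is easy to see numerically; the tiny slope below is an arbitrary illustration:

```python
import math

def cov(xs, ys):
    """Population covariance of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

A = [float(a) for a in range(10)]        # ranges over 0..9
B = [50.0001 + 0.0001 * a for a in A]    # squeezed into a 0.0009-wide band

# A predicts B perfectly, so the correlation is 1.0 ...
correlation = cov(A, B) / math.sqrt(cov(A, A) * cov(B, B))
# ... but the covariance, which is what selection actually runs on, is tiny.
covariance = cov(A, B)
```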
The Foresight Institute suggests, among other sensible proposals, that the replication instructions for any nanodevice should be encrypted. Moreover, encrypted such that flipping a single bit of the encoded instructions will entirely scramble the decrypted output. If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain’t gonna be doin’ much evolving.
You’d still have to worry about prions—self-replicating assembly errors apart from the encrypted instructions, where a robot arm fails to grab a carbon atom that is used in assembling a homologue of itself, and this causes the offspring’s robot arm to likewise fail to grab a carbon atom, etc., even with all the encrypted instructions remaining constant. But how much correlation is there likely to be, between this sort of transmissible error, and a higher reproductive rate? Let’s say that one nanodevice produces a copy of itself every 1000 seconds, and the new nanodevice is magically more efficient (it not only has a prion, it has a beneficial prion) and copies itself every 999.99999 seconds. It needs one less carbon atom attached, you see. That’s not a whole lot of variance in reproduction, so it’s not a whole lot of covariance either.
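Putting numbers on that: the replication times quoted above give the beneficial prion a fitness advantage of about one part in a hundred million, so even its fixation probability (roughly 2s, by the rule used earlier) is vanishing:

```python
t_parent = 1000.0      # seconds per copy, from the text
t_mutant = 999.99999   # the "beneficial prion" variant, from the text

s = t_parent / t_mutant - 1   # fitness advantage, about 1e-8
p_fix = 2 * s                 # fixation probability, about 2e-8
```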
And how often will these nanodevices need to replicate? Unless they’ve got more atoms available than exist in the solar system, or for that matter, the visible Universe, only a small number of generations will pass before they hit the resource wall. “Limited resources” are not a sufficient condition for evolution; you need the frequently iterated death of a substantial fraction of the population to free up resources. Indeed, “generations” is not so much an integer as an integral over the fraction of the population that consists of newly created individuals.
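How few generations the resource wall allows can be estimated with a doubling count; both atom counts below are rough order-of-magnitude assumptions:

```python
import math

atoms_in_solar_system = 1e57   # rough order of magnitude (assumption)
atoms_per_nanodevice = 1e9     # assumption

# Number of population doublings before every available atom is consumed,
# even starting from a single device.
doublings = math.log2(atoms_in_solar_system / atoms_per_nanodevice)
# comes out around 160 generations
```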
This is, to me, the most frightening thing about grey goo or nanotechnological weapons—that they could eat the whole Earth and then that would be it, nothing interesting would happen afterward. Diamond is stabler than proteins held together by van der Waals forces, so the goo would only need to reassemble some pieces of itself when an asteroid hit. Even if prions were a powerful enough idiom to support evolution at all—evolution is slow enough with digital DNA!—less than 1.0 generations might pass between when the goo ate the Earth and when the Sun died.
To sum up, if you have all of the following properties:
Entities that replicate
Substantial variation in their characteristics
Substantial variation in their reproduction
Persistent correlation between the characteristics and reproduction
High-fidelity long-range heritability in characteristics
Frequent birth of a significant fraction of the breeding population
And all this remains true through many iterations
Then you will have significant cumulative selection pressures, enough to produce complex adaptations by the force of evolution.
Another great post about evolutionary biology, Eliezer.
I’d be interested in seeing your opinion regarding Gould’s concept of contingency in evolutionary outcomes (he talks about it a lot when discussing the Cambrian explosion).
(Disclaimer: I don’t particularly favor Gould or his views, and I mostly agree with your criticism of him, but I’m genuinely curious to know whether there’s merit to this idea.)
Or maybe I should wait for this month’s open thread to make this suggestion. I apologize in advance if that’s the case.
This is an important point, well worth making clear.
The theory of change in stars over time that I am familiar with says that early stars were nearly pure hydrogen. Heavier elements were formed in them as they burned and when they went nova. Subsequent stars created, and were composed of, increasing concentrations of increasingly heavy elements. Did this not change the life span of stars? Did I misunderstand your point?
Also, is there an equation that is claimed to describe the change in the entropy of the universe?
Can it be used to figure out if the increase in entropy caused by a star going nova would cause an increase in entropy in the universe as a whole? If one nova is insufficient, how many would have to go nova simultaneously to cause an increase? How long would the increase last?
The theory that you are familiar with is a little off. What stars can produce is solely a function of size, not generation. Already fused material from a previous star does not allow the new star to fuse more elements. Likewise, the longevity of stars is solely a function of size. It’s a balance between the heat of fusion and the pressure of gravity. More matter in the star means more pressure, which means the rate of fusion increases and more elements can be fused, but the fuel is consumed significantly faster.
The smaller a star is, the longer it burns, because there is less pressure being exerted by gravity to drive the fusion process. Big stars don’t last long (the biggest only a few million years), but they produce all of the naturally occurring elements—up to iron via normal fusion, and the heavier elements during the supernova that occurs after iron fusion begins. Smaller stars like our sun will never get past the carbon stage and will never go supernova, and smaller stars still, like red dwarfs, will never get past the hydrogen stage. These small stars last the longest because their rate of fusion is incredibly slow.
Quantitative thinking is just so much mystical numerology unless it is grounded in qualitative thinking. Unless you don’t need your mathematics to mean anything with respect to the world, you must relate it to the world by using a system of assertions called a model. Of course, you know this, I’d just like you to bring this fact out from behind the curtain where you normally keep it.
Example: when I hear a scientist talk about how winning the lottery (or some other rare event) is less likely than getting hit by lightning, I have to wonder what the odds are of being hit by lightning if you take shelter during a storm, as most people do, or if you live in Nome, Alaska? I bet agoraphobic people are far less likely to die in car accidents, too. In other words, broad numerical reasoning, when applied to specific cases without recalculating for those cases, is essentially the same thing as the sloppy qualitative reasoning that you’re worried about. It’s just as absurd.
Maybe what you’re trying to say is that sloppy and ungrounded qualitative reasoning is to be avoided, in favor of quantitative reasoning grounded in the appropriate qualitative reasoning that gives the numbers meaning. That would be a qualitative judgment on your part, of course, but it seems like a defensible one in this case.
I think you are trying to advocate, not quantitative reasoning, but rather good reasoning. There’s no call to hang the albatross of bad reasoning around the neck of qualitative research as a field. That bird belongs to all of us.
This is, to me, the most frightening thing about grey goo or nanotechnological weapons—that they could eat the whole Earth and then that would be it, nothing interesting would happen afterward.
@Eliezer: So unless that goo can already get off-planet, it won’t ever? Good! Personally, I’m more scared by things that can eat the universe, like UFAI. If it’s only us gets eaten, someone else can step up before the last star burns out.
@Others: All the more reason to support FAI research. The longer it takes to get it right, the more time for someone less careful to crack recursive self-improvement.
PS. Where they can communicate, I’d worry more about rogue evolution in nanobot software rather than hardware. Huge replication potential & speed, hi-fi heritability through many iterations, etc. and then if a half-intelligent virus hits the fabricator software...
Please could you post a link to Perry’s article? I couldn’t find it.
The argument here would seem to suggest that some of my earlier statements were a bit too absolute. This was partly deliberate, in order to be provocative. I wonder if this is a bias?
In any case, it may still turn out to be extremely difficult to prevent these conditions from continuing to hold in the future.
But lack of broad distribution of an ability doesn’t necessarily mean the ability doesn’t exist. One of the themes of this blog is that human brain power has outstripped “nature” (I use that advisedly) in its ability to change, create and evolve. If psychic powers were an epiphenomenon of supercomplex brain structure, for example, then they would be no different than the ability to, say, do higher mathematics. That is, something most humans are physically capable of, but which only a tiny fraction have actually achieved by putting in the requisite study and learning from the right teachers. The ability to do higher mathematics could be seen, abstractly, as conferring a huge advantage for the organism. But whether that translates to higher rates of reproduction is another question.
The lack of psychic powers and higher mathematics in the general populace does not mean that the ability could not have evolved. Only that it did not evolve independently of another useful adaptation (like a brain that could make reasoned and complex inferences about the ancestral environment).
Eliezer, the criteria you list may be necessary for the evolution of complex structures. But I think it’s worth highlighting that practically important evolutionary results could come about without the need for new complex structures. For example, suppose we have a population of controlled self-replicating nanobots, built unwisely in such a way that they keep replicating until a separate braking circuit kicks in and shuts off replication. Now suppose there is a mutation in the code of one nanobot such that its offspring lack a working braking circuit. Then this mutant nanobot could start an exponential goo. There need only be a one-step selection, but the results could be dramatic. Similarly with Hanson’s colonizers that burn the cosmic commons—they might not gain much in complexity through evolution, but evolutionary selection could ensure that a certain type of colonizer (which was present but very rare at time=0) will eventually dominate at the frontier.
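For scale, a one-step mutation like the one this comment describes really is enough for dramatic results: a single mutant doubling at the 1000-second rate quoted in the post exhausts an Earth’s worth of atoms in about a day and a half. The atom counts below are rough order-of-magnitude assumptions:

```python
import math

doubling_time_s = 1000.0   # one copy per 1000 seconds (figure from the post)
atoms_on_earth = 1e50      # rough order of magnitude (assumption)
atoms_per_bot = 1e9        # assumption

# Doublings needed for one runaway replicator to consume everything.
doublings = math.log2(atoms_on_earth / atoms_per_bot)
days_to_goo = doublings * doubling_time_s / 86400
```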
The mechanisms of cosmological, biological, and organizational evolution are as dissimilar as the mechanisms of artistic (paint on canvas), photographic, and mental image making.
An artist uses a brush to paint a picture. Even though both make images, we don’t expect to find a brush painting the paper or the chip inside a camera.
Corporations change. That the word evolution can be used to refer to such changes does not mean the changes are similar to the changes in stars or amoebae.
Is that what you are saying?
What corporations do is very different from biological evolution, but if a corporation develops a successful idea then it is likely to be copied by other corporations without anything like biological reproduction entering the picture.
That’s commonly known as “superorganicism” in anthropology. The paper:
“Culture is Part of Human Biology: Why the Superorganic Concept Serves the Human Sciences Badly”
...explains why this idea needs to go into the dustbin of history.
Am I correct in assuming that you have neither followed nor studied the efforts of W. Edwards Deming and other practitioners of statistical quality control to introduce those methods into American manufacturing companies from the 1930s through the mid 1980s? That you do not know how few companies have adopted them even after the Baldrige award was established in 1987?
That you do not know how few managers (of manufacturing or anything else) even know that there is such a thing as design of experiments?
You may have experienced only the best of management and have participated in successful introductions into your organization of practices believed to account for the success of others.
If this is so, let me assure you that you have had extraordinarily rare experiences and have been either exceptionally lucky or exceptionally wise in your choice of place or places to work.
Maybe I should have said something more like “conceivably could be” rather than “is likely to be”. Certainly I didn’t mean to imply that every firm in an industry will immediately copy somebody else’s good idea. There isn’t even a guarantee that a good idea will be recognized as one in the company in which it originates.
But the point is that ideas can be copied without anything like biological reproduction taking place. Why they so seldom are is an interesting question, I’ve added Deming to my “to read” list.
Beinhocker argues in Origin of Wealth that the appropriate unit of selection is not the corporation but rather the generalized concept of a business plan. While Eliezer’s preconditions for evolution are a bit more extensive than the normal set, I believe Beinhocker’s business plans (not to be confused with the artifacts that float around Sand Hill Road) meet all of Eliezer’s criteria, and hence the population evolves via natural selection.
Business model seems pretty close to “generalized concept of business plan”. But I doubt the fidelity of its replication is very high. Human concepts are notoriously fuzzy among different minds.
Forgive me for not picking up on the irony of including corporations and nanodevices in the same sentence. Eliezer is obviously correct in that corporations don’t evolve because they don’t replicate. A childish wish to gloat has to be held in check so as not to name and shame all those ‘child’ corporations whose DNA is specifically contrary to their parents’. The anti-wish list for nanodevices, on the other hand, is relevant and necessary. However, it is also entirely superfluous, as we all know, thanks to Dr Denning, that we are in a deterministic universe and that ‘Que sera, sera’. Sit back and enjoy the ride.
In biology, “evolution” is defined as being the process involving changes of the heritable characteristics of a population over time.
Corporations pass all manner of things on to other companies—including resources, employees, business methods, intellectual property, documents, premises, computer programs, etc. We are not talking about just a few bits of analog information here—often vast quantities of digital resources are involved.
Corporations form a population. Frequencies of instances of the above listed items in that population varies over time.
Therefore the population of corporations evolves—in the spirit of the classical biological sense of the term.
That’s microevolution. Can you imagine anything like macroevolution in this case? Like, we dig, dig, dig, and there’s a fossil corporation. We dig, dig, dig some more, and there’s a fossil something that could have evolved into the corporation from above, but is fundamentally different?
Evolution isn’t about digging.
No, it is about reconstruction. Bad enough that population is a general term. If there is a sequence of heritable and recombinant features in corporations, then OK, call them a population. If there is a way of producing novelty—like, something that would change the whole scene and ensure the emergence of a previously unimaginable body plan—then I’ll grant you evolution. Until then, don’t mix nature and human design.
If I do genetic engineering to change around a few genes I’m engaging in human design but I still have evolution.
The NIH definition of biological evolution is:
No, you will have it after the changes are shown to be in the next few generations. And corporations aren’t biological objects. If you apply the term to them, re-define it.
How about corporate AI evolution? You’ll find a clever depiction of such (runaway) evolution in Accelerando, www.accelerando.org. Great book, that, btw, in other respects, too.
I should probably weigh in on the nanodevice issue as well. Nanodevices will certainly evolve—as part of the rest of cultural evolution. However, what seems to being suggested is that any self-replicating nanodevices will be constructed in a way that they are fragile—in order to deliberately prevent their evolution in the wild, away from the intent of their designers.
I’m sceptical about whether this will ultimately be done. Today’s bacteria do not have such constraints placed on them—and most do not cause problems. Having your genome encrypted is a substantial competitive handicap—since it means you must constantly decrypt it—and you cannot adapt in the face of pathogens or environmental changes. IMO, those disadvantages will probably be compelling enough to eventually result in the production of self-replicating nanodevices that genuinely evolve.
Some other strategies will be used to help with safety. These days, many westerners constantly ingest fresh gut microbes—and the sheer rate of their influx helps to flush out any “old” mutant varieties. Also, there are plans to equip bacteria with both anti-bacterial compounds and corresponding resistance genes. By cycling through a range of toxins, you can iteratively upgrade bacterial genomes while killing off the previous versions—assuming no single bacterium can have all the toxin-resistance genes at once. Such a plan may eventually be used to defeat tooth decay. Similar strategies should work with nanodevices.
It’s true that bacteria aren’t a major issue for modern humans, but modern humans happen to be among the most hostile places imaginable for bacteria. Lacking complex adaptations to help, it would make more sense for a bacterium to try to survive the rigors of space than to survive on our skin; of course, they do have those complex adaptations, and those adaptations do cost a lot of energy. They’re just superior to the alternative, i.e., death.
Part of the reason for making replicating nanotechnology fragile, as I see it, is that this way we’ll be much less likely to see the sort of runaway weapons race that has led to an environment where replicators must devote most of their resources simply to avoid being killed by other replicators. It’s a fresh start. Let’s make the most of it.
So far, the virus-host coevolution battle within machines has been going on for about 30 years now. It seems as though people mostly don’t care enough about viruses to bother with engineering them down to very low levels.
I visualise a situation more like that with computer viruses. Yes, there are viruses with encrypted genomes—but the encryption is a defense against the immune system—not a defense against random mutations. In fact, the encrypted viruses typically deliberately mutate themselves after decryption.
I get the feeling there’s an obvious answer to this, but: why is it necessary to have a full-on encryption system for each nanomachine’s assembly instructions, with all the decryption overhead that implies? Wouldn’t something like a CRC, or one of the quicker hash functions, be a much easier way to prevent accidental changes between generations?
My guess would be: If the integrity check gets corrupted, the mutated nanomachine could possibly “work”, but if the decryption routine gets corrupted, the instructions can’t get decrypted and the nanomachine wouldn’t work.
Hm, makes sense. I suppose I was imagining that if the parent is already at the point where it’s doing the assembly, then we already know from earlier that the parent is correct, and the verification issue now only applies to the child machine.
However, I hadn’t considered the possibility that the parent’s data could get mutated after the parent’s assembly, but that would certainly be possible, and create a single point of vulnerability at a simple integrity check’s implementation.
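The integrity-check idea this thread is discussing can be sketched in a few lines; `zlib.crc32` stands in for whatever checksum a real design would actually use, and the byte-flip below simulates a copying error:

```python
import zlib

instructions = b"grab carbon atom; bond; advance; repeat"
checksum = zlib.crc32(instructions)

def intact(copy: bytes) -> bool:
    """A child refuses to replicate unless its copied instructions
    still match the stored checksum (CRC detects any single-bit flip)."""
    return zlib.crc32(copy) == checksum

# Flip one bit of the first byte to simulate a replication error.
corrupted = bytes([instructions[0] ^ 0x01]) + instructions[1:]
```

As the last comment notes, this only protects copies made while the parent’s own data is still good; a mutation to the checker itself, or to the stored checksum, remains the single point of vulnerability.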