Book Review: Design Principles of Biological Circuits

I remember seeing a talk by a synthetic biologist, almost a decade ago. The biologist used a genetic algorithm to evolve an electronic circuit, something like this:

(source)

He then printed out the evolved circuit, brought it to his colleague in the electrical engineering department, and asked the engineer to analyze the circuit and figure out what it did.

“I refuse to analyze this circuit,” the colleague replied, “because it was not designed to be understandable by humans.” He has a point—that circuit is a big, opaque mess.

This, the biologist argued, is the root problem of biology: evolution builds things from random mutation, connecting things up without rhyme or reason, into one giant spaghetti tower. We can take it apart and look at all the pieces, we can simulate the whole thing and see what happens, but there’s no reason to expect any deeper understanding. Organisms did not evolve to be understandable by humans.

I used to agree with this position. I used to argue that there was no reason to expect human-intelligible structure inside biological organisms, or deep neural networks, or other systems not designed to be understandable. But over the next few years after that biologist’s talk, I changed my mind, and one major reason for the change is Uri Alon’s book An Introduction to Systems Biology: Design Principles of Biological Circuits.

Alon’s book is the ideal counterargument to the idea that organisms are inherently human-opaque: it directly demonstrates the human-understandable structures which comprise real biological systems. Right from the first page of the introduction:

… one can, in fact, formulate general laws that apply to biological networks. Because it has evolved to perform functions, biological circuitry is far from random or haphazard. … Although evolution works by random tinkering, it converges again and again onto a defined set of circuit elements that obey general design principles.
The goal of this book is to highlight some of the design principles of biological systems… The main message is that biological systems contain an inherent simplicity. Although cells evolved to function and did not evolve to be comprehensible, simplifying principles make biological design understandable to us.

It’s hard to update one’s gut-level instinct that biology is a giant mess of spaghetti without seeing the structure first hand, so the goal of this post is to present just enough of the book to provide some intuition that, just maybe, biology really is human-understandable.

This review is prompted by the release of the book’s second edition, just this past August, and that’s the edition I’ll follow throughout. I will focus specifically on the parts I find most relevant to the central message: biological systems are not opaque. I will omit the last three chapters entirely, since they have less of a gears-level focus and more of an evolutionary focus, although I will likely make an entire separate post on the last chapter (evolution of modularity).

Chapters 1-4: Bacterial Transcription Networks and Motifs

E. coli has about 4500 proteins, but most of those are chunked together into chemical pathways which work together to perform specific functions. Different pathways need to be expressed depending on the environment—for instance, E. coli won’t express their lactose-metabolizing machinery unless the environment contains lots of lactose and not much glucose (which they like better).

In order to activate/deactivate certain genes depending on environmental conditions, bacteria use transcription factors: proteins sensitive to specific conditions, which activate or repress transcription of genes. We can think of the transcription factor activity as the cell’s internal model of its environment. For example, from Alon:

Many different situations are summarized by a particular transcription factor activity that signifies “I am starving”. Many other situations are summarized by a different transcription factor activity that signifies “My DNA is damaged”. These transcription factors regulate their target genes to mobilize the appropriate protein responses in each case.

The entire state of the transcription factors—the E. coli’s whole model of its environment—has about 300 degrees of freedom. That’s 300 transcription factors, each capturing different information, and regulating about 4500 protein genes.

Transcription factors often regulate the transcription of other transcription factors. This allows information processing in the transcription factor network. For instance, if either of two different factors (X, Y) can block transcription of a third (Z), then that’s effectively a logical NOR gate: Z levels will be high when neither X nor Y is high. In general, transcription factors can either repress or promote (though rarely both), and arbitrarily complicated logic is possible in principle—including feedback loops.
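To make the NOR-gate picture concrete, here's a minimal sketch (mine, not the book's; the Hill coefficient, threshold, and rate units are all illustrative) of a promoter repressed independently by X and Y:

```python
# Sketch of a promoter repressed by two transcription factors, X and Y.
# Each repressor contributes a Hill-type repression term; with a steep Hill
# coefficient, the promoter behaves approximately like a logical NOR gate.
# All parameters are illustrative, not from the book.

def promoter_activity(x, y, k=1.0, n=4):
    """Production rate of Z when repressors X and Y bind independently."""
    return (1.0 / (1.0 + (x / k) ** n)) * (1.0 / (1.0 + (y / k) ** n))

# Z is produced only when neither repressor is present at high concentration:
for x in (0.1, 10.0):
    for y in (0.1, 10.0):
        print(f"X={x:>4}, Y={y:>4} -> Z activity ~ {promoter_activity(x, y):.4f}")
```

The steeper the Hill coefficient n, the closer this gets to ideal digital logic; real promoters sit somewhere between analog and digital.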

Now we arrive at our first major piece of evidence that organisms aren’t opaque spaghetti piles: bacterial transcription network motifs.

Random mutations form random connections between transcription factors—mutations can make any given transcription factor regulate any other very easily. But actual transcription networks do not look like random graphs. Here’s a visualization from the book:

A few differences are immediately visible:

  • Real networks have much more autoregulation (transcription factors activating/repressing their own transcription) than random networks

  • Other than self-loops (aka autoregulation), real networks contain almost no feedback loops (at least in bacteria), though such loops are quite common in random networks

  • Real networks are mostly tree-shaped; most nodes have at most a single parent.

These patterns can be quantified and verified statistically via “motifs” (or “antimotifs”): connection patterns which occur much more frequently (or less frequently) in real transcription factor networks than in random networks.

Alon uses an E. coli transcription network with 424 nodes and 519 connections to quantify motifs. Chapters 2-4 each look at a particular class of motifs in detail:

  • Chapter 2 looks at autoregulation. If the network were random, we’d expect about 1.2 ± 1.1 autoregulatory loops. The actual network has 40.

  • Chapter 3 looks at three-node motifs. There is one massively overrepresented motif: the feed-forward loop (see diagram below), with 42 instances in the real network and only 1.7 ± 1.3 in a random network. Distinguishing activation from repression, there are eight possible feed-forward loop types, and two of the eight account for 80% of the feed-forward loops in the real network.

  • Chapter 4 looks at larger motifs, though it omits the statistics. Fan-in and fan-out patterns, as well as fanned-out feed-forward loops, are analyzed.
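The autoregulation statistic is easy to sanity-check. The sketch below estimates the expected number of self-loops in a random network of the same size; the null model here (edges dropped uniformly on ordered node pairs) is a simplification of the randomization Alon actually uses, but it reproduces the ~1.2 expectation:

```python
# Expected self-loops in a random directed graph with 424 nodes and 519 edges
# (node/edge counts from the E. coli network Alon uses). Analytically the
# expectation is 519/424 ~ 1.22; we confirm by simulation.
import random

N_NODES, N_EDGES, TRIALS = 424, 519, 2000
rng = random.Random(0)

counts = []
for _ in range(TRIALS):
    # An edge is a self-loop when its random source equals its random target.
    self_loops = sum(
        1 for _ in range(N_EDGES)
        if rng.randrange(N_NODES) == rng.randrange(N_NODES)
    )
    counts.append(self_loops)

mean = sum(counts) / TRIALS
print(f"expected self-loops in a random network: {mean:.2f}")  # ~1.22
print("actual E. coli network: 40")
```

Forty observed versus roughly one expected is a deviation of dozens of standard deviations; the motif is not a statistical fluke.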

Alon analyzes the chemical dynamics of each pattern, and discusses what each is useful for in a cell—for instance, autoregulatory loops can fine-tune response time, and feed-forward loops can act as filters or pulse generators.
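Here's a rough sketch of the response-time result for negative autoregulation. The production functions and parameters are illustrative choices of mine, tuned so that both circuits reach the same steady state; the qualitative speed-up is the point:

```python
# Response-time comparison: simple regulation vs negative autoregulation (NAR).
# Both circuits are tuned to the same steady state x_ss = 1 (degradation
# rate alpha = 1). Parameters are illustrative.

def half_time(production, alpha=1.0, dt=1e-4, target=0.5):
    """Time for x (starting at 0) to reach `target` under
    dx/dt = production(x) - alpha*x, via forward Euler."""
    x, t = 0.0, 0.0
    while x < target:
        x += (production(x) - alpha * x) * dt
        t += dt
    return t

t_simple = half_time(lambda x: 1.0)                      # constant production
t_nar = half_time(lambda x: 10.0 / (1.0 + 9.0 * x * x))  # strong self-repression

print(f"simple regulation half-time: {t_simple:.3f}")  # ~ln(2) = 0.693
print(f"negative autoregulation:     {t_nar:.3f}")     # substantially faster
```

The trick is that the NAR circuit uses a much stronger promoter, which the self-repression then throttles near steady state, so the rise is fast but the endpoint is the same.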

Chapters 5-6: Feedback and Motifs in Other Biological Networks

Chapter 5 opens with developmental transcription networks, the transcription networks which lay out the body plan and differentiate between cell types in multicellular organisms. These are somewhat different from the bacterial transcription networks discussed in the earlier chapters. Most of the overrepresented motifs in bacteria are also overrepresented in developmental networks, but there are also new overrepresented motifs—in particular, positive autoregulation and two-node positive feedback.

Both of these positive feedback patterns are useful mainly for inducing bistability—i.e. multiple stable steady states. A bistable system with steady states A and B will stay in A if it starts in A, or stay in B if it starts in B, meaning that it can be used as a stable memory element. This is especially important to developmental systems, where cells need to decide what type of cell they will become (in coordination with other cells) and then stick to it—we wouldn’t want a proto-liver cell changing its mind and becoming a proto-kidney cell instead.
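A minimal model of bistability via positive autoregulation, using a standard toy equation rather than any specific circuit from the book (parameters are illustrative):

```python
# Positive autoregulation with cooperative (Hill n=2) activation:
#   dx/dt = beta * x^2 / (1 + x^2) - alpha * x
# With beta=4, alpha=1 there are two stable states (x=0 "off" and
# x = 2 + sqrt(3) ~ 3.73 "on") separated by an unstable threshold
# at x = 2 - sqrt(3) ~ 0.27.

def simulate(x0, beta=4.0, alpha=1.0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += (beta * x * x / (1.0 + x * x) - alpha * x) * dt
    return x

low = simulate(0.1)   # below threshold -> decays to the "off" state
high = simulate(1.0)  # above threshold -> locks into the "on" state
print(f"start 0.1 -> {low:.3f}")   # ~0.0
print(f"start 1.0 -> {high:.3f}")  # ~3.732
```

Both trajectories stay put once they arrive, which is exactly the memory-element behavior described above: the system remembers which side of the threshold it started on.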

After discussing positive feedback, Alon includes a brief discussion of motifs in other biological networks, including protein-protein interactions and neuronal networks. Perhaps surprisingly (especially for neuronal networks), these include many of the same overrepresented motifs as transcription factor networks—suggesting universal principles at work.

Finally, chapter 6 is devoted entirely to biological oscillators, e.g. circadian rhythms or cell-cycle regulation or heart beats. The relevant motifs involve negative feedback loops. The main surprise is that oscillations can sometimes be sustained even when it seems like they should die out over time—thermodynamic noise in chemical concentrations can “kick” the system so that the oscillations continue indefinitely.

At this point, the discussion of motifs in biological networks wraps up. Needless to say, plenty of references are given which quantify motifs in various biological organisms and network types.

Chapters 7-8: Robust Recognition and Signal-Passing

There’s quite a bit of hidden purpose in biological systems—seemingly wasteful side-reactions or seemingly arbitrary reaction systems turn out to be functionally critical. Chapters 7-8 show that robustness is one such “hidden” purpose: biological systems are buffeted by thermodynamic noise, and their functions need to be robust to that noise. Once we know to look for it, robustness shows up all over, and many seemingly-arbitrary designs don’t look so random anymore.

Chapter 7 mainly discusses kinetic proofreading, a system used by both ribosomes (RNA-reading machinery) and the immune system to reduce error rates. At first glance, kinetic proofreading just looks like a wasteful side-reaction: the ribosome/immune cell binds its target molecule, then performs an energy-consuming side reaction and just waits around a while before it can move on to the next step. And if the target unbinds at any time, then it has to start all over again!

Yet this is exactly what’s needed to reduce error rates.

The key is that the correct target is always most energetically stable to bind, so it stays bound longer (on average) than incorrect targets. At equilibrium, maybe 1% of the bound targets are incorrect. The irreversible side-reaction acts as a timer: it marks that some target is bound, and starts time. If the target falls off, then the side-reaction is undone and the whole process starts over… but the incorrect targets fall off much more quickly than the correct targets. So, we end up with correct targets “enriched”: the fraction of incorrect targets drops well below its original level of 1%. Both the delay and the energy consumption are necessary in order for this to work: the delay to give the incorrect targets time to fall off, and the energy consumption to make the timer irreversible (otherwise everything just equilibrates back to 1% error).

Alon offers an analogy, in which a museum curator wants to separate the true Picasso lovers from the non-lovers. The Picasso room usually has about 10x more lovers than non-lovers (since the lovers spend much more time in the room), but the curator wants to do better. So, with a normal mix of people in there, he locks the incoming door and opens a one-way door out. Over the next few minutes, only a few of the Picasso lovers leave, but practically all the non-lovers leave—Picasso lovers end up with much more than the original 10x enrichment in the room. Again, we see both key pieces: irreversibility and a delay.

It’s also possible to stack such systems, performing multiple irreversible side-reactions in sequence, in order to further lower the error rate. Alon goes into much more depth, and explains the actual reactions involved in more detail.
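The enrichment arithmetic can be sketched directly. Assume (illustratively, not the book's numbers) that wrong targets unbind 100x faster than right ones, giving ~1% error at equilibrium, and that the timer imposes a delay of tau seconds:

```python
# Kinetic proofreading as survival-of-the-slow-unbinder: among complexes that
# survive an irreversible delay of tau seconds, wrong targets are depleted
# exponentially faster than right ones. Rates are illustrative.
import math

K_OFF_RIGHT, K_OFF_WRONG = 1.0, 100.0  # unbinding rates (1/s)
ERR_EQ = 0.01                          # equilibrium error fraction

def error_after_delay(tau):
    """Error fraction among complexes still bound after a delay of tau."""
    right = (1 - ERR_EQ) * math.exp(-K_OFF_RIGHT * tau)
    wrong = ERR_EQ * math.exp(-K_OFF_WRONG * tau)
    return wrong / (right + wrong)

for tau in (0.0, 0.02, 0.05):
    print(f"tau = {tau:.2f} s -> error rate {error_after_delay(tau):.2e}")
```

With no delay the error stays at the equilibrium 1%; a modest delay drives it down by orders of magnitude, at the cost of discarding many correctly-bound complexes along the way. That discard is the price paid in energy.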

Chapter 8 then dives into a different kind of robustness: robust signal-passing. The goal here is to pass some signal from outside the cell to inside. The problem is, there’s a lot of thermodynamic noise in the number of receptors—if there happen to be 20% more receptors than average, then a simple detection circuit would measure 20% stronger signal. This problem can be avoided, but it requires a specific—and nontrivial—system structure.

In this case, the main trick is to have the receptor both activate and deactivate (i.e. phosphorylate and dephosphorylate) the internal signal molecule, with rates depending on whether the receptor is bound. At first glance, this might seem wasteful: what’s the point of a receptor which undoes its own effort? But for robustness, it’s critical—because the receptor both activates and deactivates the internal signal, its concentration cancels out in the equilibrium expression. That means that the number of receptors won’t impact the equilibrium activity level of the signal molecule, only how fast it reaches equilibrium.
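A quick way to see the cancellation is to integrate toy dynamics for two different receptor counts. The linear activation/deactivation form here is my simplification, not Alon's full model:

```python
# Robust signal-passing sketch: the receptor count R multiplies BOTH the
# activation and deactivation rates of the internal signal, so R scales the
# kinetics but cancels out of the equilibrium. Rates are illustrative.

def steady_active_fraction(R, k_act=2.0, k_deact=1.0, total=1.0,
                           dt=0.001, steps=20000):
    """Integrate dx/dt = R*(k_act*(total - x) - k_deact*x) to steady state,
    where x is the active form of the internal signal."""
    x = 0.0
    for _ in range(steps):
        x += R * (k_act * (total - x) - k_deact * x) * dt
    return x / total

# 20% more receptors changes how fast equilibrium is reached, not where it is:
print(steady_active_fraction(R=100))  # ~0.667 (= k_act / (k_act + k_deact))
print(steady_active_fraction(R=120))  # ~0.667
```

Setting the derivative to zero shows why: R factors out of R*(k_act*(total - x) - k_deact*x) = 0, leaving an equilibrium that depends only on the rate ratio.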

The trick can also be extended to provide robustness to the background level of the signal molecule itself—Alon provides more detail. As you might expect, this type of structure is a common pattern in biological signal-receptor circuits.

For our purposes, the main takeaway from these two chapters is that just because a system looks wasteful or arbitrary does not mean it is. Once we know what to look for, it becomes clear that the structure of biological systems is not nearly so arbitrary as it looks.

Chapters 9-11: Robust Adaptation

When we move from an indoor room into full sunlight, our eyes quickly adjust to the brightness. A bacterium swimming around in search of food can detect chemical gradients among background concentrations varying by three orders of magnitude. Beta cells in the pancreas regulate glucose usage, bringing the long-term blood glucose concentration back to 5 mM, even when we shift to eating or exercising more. In general, a wide variety of biological sensing systems need to be able to detect changes and then return to a stable baseline, across a wide range of background intensity levels.

Alon discusses three problems in this vein, each with its own chapter:

  • Exact adaptation: the “output signal” of a system always returns to the same baseline when the input stops changing, even if the input settles at a new level.

  • Fold change: the system responds to percentage changes, across several orders of magnitude of background intensity.

  • Extracellular versions of the above problems, in which control is decentralized.

Main takeaway: fairly specific designs are needed to achieve robust behavior.

Exact Adaptation

The main tool used for exact adaptation will be immediately familiar to engineers who’ve seen some linear control theory: integral feedback control. There are three key pieces:

  • Some internal state variable M—e.g. concentration/activation of some molecule type or count of some cell type—used to track “error” over time

  • An “internal” signal X

  • An “external” signal and a receptor, which increases production/activation of the internal signal whenever it senses the external signal

The “error” tracked by the internal state M is the difference between the internal signal’s concentration X and its long-term steady-state concentration X^*. The internal state increases/decreases in direct proportion to that difference, so that over time, M is proportional to the integral \int_t (X - X^*) dt. Then, M itself represses production/activation of the internal signal X.

The upshot: if the external signal increases, then at first the internal signal X also increases, as the external receptor increases production/activation of X. But this pushes X above its long-term steady-state X^*, so M gradually increases, repressing X. The longer and further X is above its steady-state, the more M increases, and the more X is repressed. Eventually, M reaches a level which balances the new average level of the external signal, and X returns to the baseline.
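That upshot can be demonstrated with a toy integral-feedback simulation. This is my own minimal sketch; the linear dynamics and gains are illustrative:

```python
# Integral feedback sketch: M integrates the deviation of X from its set
# point X*, and represses X. After a step in the external signal u, X rises
# transiently, then returns exactly to X*. Parameters are illustrative.

X_STAR = 1.0

def simulate(u_before=1.0, u_after=2.0, k=0.5, dt=0.01, steps=20000):
    x, m = X_STAR, u_before - X_STAR  # start at the pre-step steady state
    for i in range(steps):
        u = u_before if i < steps // 2 else u_after  # external signal steps up
        dx = u - m - x         # receptor drives X up; M represses X
        dm = k * (x - X_STAR)  # M integrates the error (X - X*)
        x += dx * dt
        m += dm * dt
    return x

print(f"X after adaptation: {simulate():.4f}")  # back to X* = 1.0
```

Notice that the steady-state condition dm/dt = 0 forces x = X_STAR regardless of u; the integrator M absorbs whatever constant offset the new input level requires. That is the structural reason exact adaptation is robust here.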

Alon then discusses robustness of this mechanism compared to other possible mechanisms. Turns out, this kind of feedback mechanism is robust to changes in the background level of M, X, etc.—steady-state levels shift, but the qualitative behavior of exact adaptation remains. Other, “simpler” mechanisms do not exhibit such robustness.

Fold-Change Detection

Fold-change detection is a pretty common theme in biological sensory systems, from eyes to bacterial chemical receptors. Weber’s Law is the general statement: sensory systems usually respond to changes on a log scale.

There are two important pieces here:

  • “Respond to changes” means exact adaptation—the system returns to a neutral steady-state value in the long run when nothing is changing.

  • “Log scale” means it’s percent changes which matter, and the system can work across several orders of magnitude of external signal

Alon gives an interesting example: apparently if you use a screen and an eye-tracker to cancel out a person’s rapid eye movements, their whole field of vision turns to grey and they can’t see anything. That’s responding to changes. On the other hand, if we step into bright light, background intensity can easily jump by an order of magnitude—yet a 10% contrast looks the same in low light or bright light. That’s operating on a log scale.

Again, there are some pretty specific criteria for systems to exhibit fold-change detection—few systems have consistent, useful behavior over multiple orders of magnitude of input values. Alon gives two particular circuits, as well as a general criterion.
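One way to get a feel for fold-change detection is a toy circuit where an adapting variable z slowly tracks the input u, and the output responds to the ratio u/z. This is a cartoon of the incoherent feed-forward idea, with illustrative parameters, not one of Alon's specific circuits:

```python
# Fold-change detection sketch: z adapts toward the input u, and the output
# is the ratio y = u/z. The response then depends only on the fold change
# in u, not its absolute level. Parameters are illustrative.

def peak_response(u0, fold, a=1.0, dt=0.01, steps=500):
    """Peak output after the input steps from u0 to fold*u0."""
    z, u = u0, fold * u0       # z starts adapted to the old input level
    peak = 0.0
    for _ in range(steps):
        y = u / z              # output responds to the ratio
        peak = max(peak, y)
        z += a * (u - z) * dt  # z slowly re-adapts toward the new input
    return peak

print(peak_response(u0=1.0, fold=2.0))    # peak ~2.0
print(peak_response(u0=100.0, fold=2.0))  # same peak, 100x the background
```

A 2x step produces the identical transient whether the background is 1 or 100, and the output eventually returns to baseline (y = 1) as z catches up, so this toy exhibits both exact adaptation and the log-scale property.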

Extracellular/Decentralized Adaptation

Alon moves on to the example of blood glucose regulation. Blood glucose needs to be kept at a pretty steady 5 mM level long-term; too low will starve the brain, and too high will poison the brain. The body uses an integral feedback mechanism to achieve robust exact adaptation of glucose levels, with the count of pancreatic beta cells serving as the state variable: when glucose is too low, the cells (slowly) die off, and when glucose is too high, the cells (slowly) proliferate.

The main new player is insulin. Beta cells do not themselves produce or consume much glucose; rather, they produce insulin, which we can think of as an inverse-price signal for glucose. When insulin levels are low (so the “price” of glucose is high), many cells throughout the body cut back on their glucose consumption. The beta cells serve as market-makers: they adjust the insulin/price level until the glucose market clears—meaning that there is no long-term increase or decrease in blood glucose.
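Here's a toy version of that market-maker loop, my own sketch rather than Alon's model; units and rates are illustrative, not physiological:

```python
# Decentralized integral feedback for blood glucose: beta-cell mass B is the
# slow integrator (proliferating when glucose G is above the 5 mM set point,
# dying off below it), and insulin ~ B drives glucose consumption.
# All rates are illustrative, not physiological.

G_SET = 5.0  # mM set point

def simulate(meal_input, eps=0.01, s=0.1, dt=0.01, steps=200_000):
    # Start at the steady state for a baseline meal input of 1.0
    g, b = G_SET, 1.0 / (s * G_SET)
    for _ in range(steps):
        dg = meal_input - s * b * g  # intake vs insulin-driven consumption
        db = eps * b * (g - G_SET)   # slow beta-cell proliferation/death
        g += dg * dt
        b += db * dt
    return g, b

g1, b1 = simulate(meal_input=1.0)  # unchanged diet: stays at 5 mM
g2, b2 = simulate(meal_input=2.0)  # double the intake
print(f"glucose:        {g1:.2f} mM -> {g2:.2f} mM")  # both ~5 mM
print(f"beta-cell mass: {b1:.2f} -> {b2:.2f}")        # roughly doubles
```

As with the single-cell integrator, the steady-state condition db/dt = 0 pins glucose to the set point; the beta-cell mass absorbs the change in diet.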

A very similar system exists for many other metabolite/hormone pairs. For instance, calcium and parathyroid hormone use a nearly-identical system: an integral feedback mechanism using cell count as a state variable, with a hormone serving as price-signal to provide decentralized feedback control throughout the body.

Alon also spends a fair bit of time on one particular issue with this set-up: mutant cells which mismeasure the glucose concentration could proliferate and take over the tissue. One defense against this problem is for the beta cells to die when they measure very high glucose levels (instead of proliferating very quickly). This handles most mutations, but it also means that sufficiently high glucose levels can trigger an unstable feedback loop: beta cells die, which reduces insulin, which means higher glucose “price” and less glucose usage throughout the body, which pushes glucose levels even higher. That’s type-2 diabetes.

Chapter 12: Morphological Patterning

The last chapter we’ll cover here is on morphological patterning: the use of chemical reactions and diffusion to lay out the body plans of multicellular organisms.

The basic scenario involves one group of cells (A) producing some signal molecule, which diffuses into a neighboring group of cells (B). The neighbors then differentiate themselves based on how strong the signal is: those nearby A will see high signal, so they adopt one fate, while those farther away see lower signal, so they adopt another fate, with some cutoff in between.

This immediately runs into a problem: if A produces too much or too little of the signal molecule, then the cutoff will be too far to one side or the other—e.g. the organism could end up with a tiny rib and big space between ribs, or a big rib and a tiny space between. It’s not robust.
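The non-robustness is easy to quantify for the simplest case: an exponential morphogen gradient with a fixed fate threshold (numbers illustrative; steady-state diffusion with degradation gives this exponential form):

```python
# Naive morphogen patterning: a gradient M(x) = M0 * exp(-x/lam), with cells
# adopting fate A wherever M exceeds a threshold T. The fate boundary sits at
# x = lam * ln(M0 / T), so any error in production M0 shifts the boundary.
# Numbers are illustrative.
import math

def boundary(M0, T=1.0, lam=1.0):
    """Position where the morphogen concentration crosses the threshold."""
    return lam * math.log(M0 / T)

x_normal = boundary(M0=20.0)
x_overproducing = boundary(M0=40.0)  # cells making 2x the morphogen
print(f"fate boundary: {x_normal:.3f} -> {x_overproducing:.3f}")
# A 2x production error shifts every boundary by lam * ln(2) ~ 0.69 --
# a fixed fraction of the decay length, regardless of where the boundary is.
```

Protein production levels routinely fluctuate by factors of two, so a body plan built this naively would come out visibly different from individual to individual; hence the need for the robust mechanisms the chapter describes.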

Once again, the right design can mitigate the problem.

Apparently one group ran a brute-force search over parameter space, looking for biologically-plausible systems which produced robust patterning. Only a few tiny corners of the parameter space worked, and those tiny corners all used a qualitatively similar mechanism. Alon explains the mechanism in some depth, but I’m gonna punt on it—much as I enjoy nonlinear PDEs (and this one is even analytically tractable), I’m not going to inflict them on readers here.

Once again, though it may seem that evolution can solve problems a million different ways and it’s hopeless to look for structure, it actually turns out that only a few specific designs work—and those are understandable by humans.

Takeaway

Let’s return to the Alon quote from the introduction:

Because it has evolved to perform functions, biological circuitry is far from random or haphazard. … Although evolution works by random tinkering, it converges again and again onto a defined set of circuit elements that obey general design principles.
The goal of this book is to highlight some of the design principles of biological systems… The main message is that biological systems contain an inherent simplicity. Although cells evolved to function and did not evolve to be comprehensible, simplifying principles make biological design understandable to us.

We’ve now seen both general evidence and specific examples of this.

In terms of general evidence, we’ve seen that biological regulatory networks do not look statistically random. Rather, a handful of patterns—“motifs”—repeat often, lending the system a lot of consistent structure. Even though the system was not designed to be understandable, there’s still a lot of recognizable internal structure.

In terms of specific examples, we’ve seen that only a small subset of possible designs can achieve certain biological goals:

  • Robust recognition of molecules

  • Robust signal-passing

  • Robust exact adaptation and distributed exact adaptation

  • Fold-change detection

  • Robust morphological patterning

The designs which achieve robustness are exactly the designs used by real biological systems. Even though the system was not designed to be understandable, the simple fact that it works robustly forces the use of a handful of understandable structures.

A final word: when we do not understand something, it does not look like there is anything to be understood at all—it just looks like random noise. Just because it looks like noise does not mean there is no hidden structure.