Book Review—Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness

In this post:

  • A true-false test about octopuses

  • What is it like to be an octopus?

  • An exercise in updating on surprising facts

  • Experiments related to animal suffering and consciousness

  • The evolution of aging

  • Should you read Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness?

I. Introduction

Peter Godfrey-Smith’s Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness is a phenomenal mishmash of octopus- and consciousness-related topics. It deals with everything from the evolution of octopuses, to their social life, to animal consciousness (including octopus consciousness), to evolutionary theories of aging, and more. All of this is tied together by a palpable fascination with octopuses, which manifests itself in rich descriptions of Godfrey-Smith’s own experiences scuba-diving off the coast of Australia to observe them.

The book attempts to fit discussion of an impressive number of interesting topics into one slim volume. On the one hand, this is great, as each topic is fascinating in its own right, and several are relevant to EA/rationality. On the other hand, fitting in so many topics is a difficult task which the book only halfway pulls off. There wasn’t enough room to discuss each topic in as much depth as it deserved, and the breadth of topics meant that the book felt somewhat unorganized and disunified. The book as a whole didn’t seem to have any central claim; it was simply a collection of interesting facts, observations, musings, and theories that somehow relate to either octopuses, consciousness, or both, plus a bunch of fascinating first-hand descriptions of octopus behavior.

Do I recommend the book? Yes and no. For general interest, definitely—it’s an interesting, enjoyable read. But for rationalists and EAs, there are probably better, more in-depth things to read on each topic the book discusses, so it may not be the most effective investment of time for learning about, say, theories of animal consciousness. So in the rest of this review I’ve tried to 80:20 the book a bit, pulling out the insights I found most interesting and relevant to EA/rationalism (as well as adding my own musings here and there). Because of this, the review is quite long, and it unavoidably reflects a bit of the disjointedness of the book itself—the sections can largely be read independently of each other.


Before I begin, a true-false test. For the following statements about octopuses, write down whether you think they are true or false, and how confident you are in your response. We’ll come back to these later in the review, and the answers will be at the end:

  1. Octopuses can squirt jets of ink as an escape tactic.

  2. Octopuses have color vision.

  3. Octopuses have bilateral symmetry.

  4. Octopuses can camouflage themselves nearly perfectly by changing the color and texture of their skin to match whatever surface or object they are trying to blend into.

  5. Octopuses can fit through any hole or gap bigger than their eye.

  6. Octopuses can recognize individual humans.

  7. Most octopus species live for more than 20 years.

  8. Octopuses are mostly solitary animals.

  9. Octopuses have been known to use shards of glass from shattered bottles on the seafloor as weapons to fight other octopuses.

Answers below.


II. What is it like to be an octopus?

The nervous system of an octopus is structured quite differently from a mammalian nervous system. The octopus not only has a high concentration of neurons in its head (a “brain”), but also clusters of neurons throughout its body, particularly in each arm. At one point Godfrey-Smith notes that the number of neurons in the octopus’s central brain is only a little over half of the number of neurons in the rest of its body (67). This means that each arm must have, roughly, 1/4 as many neurons as the central brain. What might it feel like to have 8 other “brains” in your body, each 1/4 the size of your “main brain”?[^1]
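Godfrey-Smith’s figure lends itself to a quick back-of-the-envelope check. The sketch below is mine, not the book’s, and it assumes purely for illustration that the rest-of-body neurons are split evenly across the eight arms (the book doesn’t claim an exactly even split):

```python
# Rough arithmetic behind the "each arm ~1/4 the central brain" claim.
# Assumption (illustrative only): the central brain has a little over half
# as many neurons as the rest of the body, and the rest-of-body neurons
# are shared evenly among the eight arms.

central = 1.0                 # central-brain neurons (normalized to 1)
rest_of_body = central / 0.5  # "a little over half" -> rest is about 2x the central brain
per_arm = rest_of_body / 8    # eight arms sharing those neurons evenly

print(per_arm / central)      # -> 0.25, i.e. each arm roughly 1/4 the central brain
```

The exact ratio shifts a bit depending on how much over half the true fraction is, but the order of magnitude is the point.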

Most likely, it feels completely different from anything we’re capable of imagining. Godfrey-Smith doesn’t discuss the question, and there are some philosophers, such as Daniel Dennett, who would deny that it even makes sense. Nevertheless, there are two main points of reference I can think of that humans might use to imagine something of what this would feel like.

First, we humans have what is sometimes called a “gut brain”: ~500 million neurons in our digestive tracts, about as many as in a cat’s brain, that control our digestive process. What does it feel like to have this brain? Well, hunger signals, the gag reflex, and some emotions (via the limbic system) are controlled by this brain, so roughly, it feels like what those mental states feel like. Perhaps the octopus has similar signals that come from its arm brains. These likely feel nothing like the signals coming from our gut brain, and would of course have different functions: while our gut brain sends us hunger or disgust reflexes, the octopus’s arm signals might function something like our proprioceptive sense, informing the “main brain” about the location of the arms, and maybe also relaying touch and taste/smell information from the respective sense organs located on the arms.

Or perhaps the arm brains don’t just relay sensory information from the arms; perhaps they also play a part in controlling the arms’ movements (Godfrey-Smith posits that this is the main reason why the octopus’s nervous system is so distributed). Sticking with the gut-brain/arm-brain analogy for the moment, what does it feel like for the gut brain to control the stomach? That is, what does it feel like to digest food? … Often like nothing at all. We sometimes don’t even notice digestion occurring, and we certainly don’t have any detailed sensation of what’s going on when it does. So perhaps the octopus just tells its arms where to go in broad strokes, and they take it from there, similar to how our gut brain just “takes it from there.”

The idea that the arm brains control the movement of the arms also brings me to my second comparison: split-brain syndrome. Split-brain results when the corpus callosum is surgically severed (usually as a treatment for epilepsy). After this operation, patients can function mostly normally, but it can be experimentally determined that the two sides of their body function somewhat independently of each other. For example:

a patient with split brain is shown a picture of a chicken foot and a snowy field in separate visual fields and asked to choose from a list of words the best association with the pictures. The patient would choose a chicken to associate with the chicken foot and a shovel to associate with the snow; however, when asked to reason why the patient chose the shovel, the response would relate to the chicken (e.g. “the shovel is for cleaning out the chicken coop”). (source) (original source)

Notoriously, the two sides can sometimes get into conflict, as when one split-brain patient violently shook his wife with his left hand while trying to stop himself with his right hand.

What does it feel like for that to happen? Well, I don’t actually know. But I wonder: does this person feel touch sensations from their left hand? If so, are these produced by the same processes that patch over our blind spot, or are they actual touch sensations? Suppose you put a split-brain patient’s left hand behind an opaque barrier; would they be able to tell when something touched it?

Assuming that the answer to this question is “no,” another possibility for what having “arm-brains” could feel like is “basically nothing”: just as the split-brain patient only knows what their left side is doing through external cues, like seeing it move, and doesn’t have any control over it, so too with the octopus’s arms. It doesn’t really “feel” what’s going on in its arms; the arms themselves know what’s going on, and that’s enough.

III. An exercise in updating on surprising facts

[Note: this section is largely recycled from one of my shortform posts; if you’ve already read that, this will be mostly redundant. If you haven’t read that, simply read on.]

I confess, the purpose of the true-false test at the beginning of this review was largely to disguise one question in particular, so that the mere asking of it didn’t provide Bayesian evidence that the answer should be surprising: “6.) Octopuses can recognize individual humans.” Take a moment to look back at your answer to that question. What does your model say about whether octopuses should be able to recognize individual humans? Why can humans recognize other individual humans, and what does that say about whether octopuses should be able to?

...

...

...

...

...

...

As it turns out, octopuses can recognize individual humans. For example, the book mentions that at one lab, one of the octopuses had a habit of squirting jets of water at one particular researcher. Take a moment to let it sink in how surprising this is: octopuses, which 1.) are mostly nonsocial animals, 2.) have a completely different nervous system structure that evolved on a completely different branch of the tree of life, and 3.) have no evolutionary history of interaction with humans, can recognize individual humans and differentiate them from other humans. I’m pretty sure humans have a hard time differentiating between individual octopuses.

And since things are not inherently surprising, only surprising to models, this means my world model (and yours, if you were surprised by this) needs to be updated. First, generate a couple of updates you might make to your model after finding this out. I’ll wait...

...

...

...

...

...

...

...

Now that you’ve done that, here’s what I came up with:

(1) Perhaps the ability to recognize individuals isn’t as tied to being a social animal as I had thought.

(2) Perhaps humans are easier to tell apart than I thought (i.e. humans have more distinguishing features, or these distinguishing features are larger/more visually noticeable, etc., than I thought).

(3) Perhaps the ability to distinguish individual humans doesn’t require a specific psychological module, as I had thought, but rather falls out of a more general ability to distinguish objects from each other (Godfrey-Smith mentions this possibility in the book).

(4) Perhaps I’m overimagining how fine-grained the octopus’s ability to distinguish humans is. I.e., maybe that person was the only one in the lab with a particular hair color or something, and they can’t distinguish the rest of the people. (Though note, another example given in the book was that one octopus liked to squirt new people, people it hadn’t seen regularly in the lab before. This wouldn’t mesh very well with the “octopuses can only make coarse-grained distinctions between people” hypothesis.)

To be clear, those were my first thoughts; I don’t think all of them are correct. As per my shortform post about this, I’m mostly leaning towards answer (2) being the correct update—maybe the reason octopuses can recognize humans but not the other way around is mostly because individual humans are just more visually distinct from each other than individual octopuses, in that humans have a wider array of distinguishing features, or these features are larger or otherwise easier to notice. But of course, these answers are neither mutually exclusive nor exhaustive. For example, I think answer (3) also probably has something to do with it. I suspect that humans probably have a specific module for recognizing humans, but it seems clear that octopuses couldn’t have such a module, so it must not be strictly necessary in order to tell humans apart. Maybe a general object-recognizing capability plus however visually distinct humans are from each other is enough.[^2]
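The point that surprise lives in the model rather than in the world can be made concrete with a toy Bayesian calculation. All the numbers below are invented for illustration; the only thing being demonstrated is the shape of the result, namely that the same observation moves a skeptical prior much further than a credulous one:

```python
# Toy illustration of "surprising only to models": one observation,
# two different priors, very different amounts of updating.
# All probabilities here are made-up illustrative values.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis H given a single observation."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

# H = "octopuses can recognize individual humans"
# Observation: a lab octopus reliably squirts one particular researcher.
# Suppose this observation is likely under H and unlikely otherwise.
skeptic = posterior(prior=0.05, p_obs_given_h=0.8, p_obs_given_not_h=0.05)
believer = posterior(prior=0.60, p_obs_given_h=0.8, p_obs_given_not_h=0.05)

print(round(skeptic, 2), round(believer, 2))  # -> 0.46 0.96
```

The skeptic jumps from 5% to ~46%; the believer barely registers the news. That asymmetry is what “surprising to my model” cashes out to.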

I’d also love to hear in the comments what updates other people made after learning this.

IV. Animal consciousness

Something else from the book that I found interesting concerns animal consciousness/subjective experience. I suspect this is old hat for those who have done any significant research into animal suffering, but it added a couple more gears to my model of animal consciousness, so I’ll share it here for those whose models were similarly gear-less. Remember blindsight (where people who are blind due to damage in their visual cortex can perform better than chance at vision tasks, because the rest of their brain still gets visual information, even though they don’t have access to it consciously)? A pair of vision scientists (Milner and Goodale) believe, roughly, that that’s what’s going on in frogs all the time. What convinced them of this was an experiment performed by David Ingle, in which he was able to surgically reverse some, but not all, of the visual abilities of some froggy test subjects. Namely, when his frogs saw a fly in one side of their visual field, they would snap as if it were on the other side, yet they were able to go around barriers perfectly normally. Milner and Goodale take this as evidence that the frog doesn’t have an integrated visual experience at all. They write:

So what did these rewired frogs “see”? There is no sensible answer to this. The question only makes sense if you believe that the brain has a single visual representation of the outside world that governs all of an animal’s behavior. Ingle’s experiments reveal that this cannot possibly be true. (Milner and Goodale 2005, qtd. in Peter Godfrey-Smith, Other Minds, 2016, p. 80)

Godfrey-Smith then goes on to discuss Milner and Goodale’s view:

Once you accept that a frog does not have a unified representation of the world, and instead has a number of separate streams that handle different kinds of sensing, there is no need to ask what the frog sees: in Milner and Goodale’s words, “the puzzle disappears.” Perhaps one puzzle disappears, but another is raised. What does it feel like to be a frog perceiving the world in this situation? I think Milner and Goodale are suggesting that it feels like nothing. There is no experience here because the machinery of vision in frogs is not doing the sorts of things it does in us that give rise to subjective experience. (Godfrey-Smith, pp. 89-90)[^3]

Though he doesn’t mention it, there seems to me to be an obvious reply here: the phenomenon of blindsight reveals that there are parts of our visual processing that don’t feel like anything to us (or perhaps, as Godfrey-Smith prefers, they feel like something, just not like vision), but this clearly doesn’t change the fact that we (most of us) do have visual experience. Why couldn’t something similar be going on in the frogs? They have a visual field, but they also have other visual processing going on which doesn’t make it into their visual field.

Let me try to explain this thought a bit better. One thing that the human blindsight subject described in Other Minds (known as “DF”) was able to do was put letters through a mail-slot placed at different angles. Now, those of us who have normal sight presumably still do all the same processing as those with blindsight, plus some extra. So, imagine someone performing brain surgery on a person with normal vision, which affected whatever brain circuitry allows people to align a letter at the correct angle to get it through a mail-slot. At the risk of arguing based on evidence I haven’t seen yet, one way I could imagine the scenario playing out is the following: this person would wake up and find that, though their visual experience was the same as before, for some reason they couldn’t manage to fit letters through mail-slots anymore. They would experience this similarly to how someone with exceptionally poor balance experiences their inability to walk a tightrope—it’s not as though they can’t see where the rope is, they just can’t manage to put their feet in the right place to stay on it. I’d guess that the same thing is happening for the frog. Respectively: it’s not as though the person can’t see where the mail-slot is, they just can’t manage to get the letter through it; and it’s not as though the frog can’t see where the fly is, it just can’t seem to get its tongue to move in the right direction to catch it.[^4]

In any case, even if we discount this argument, does Milner and Goodale’s argument amount to an argument that most animals don’t have inner lives, and in particular that they don’t feel pain?

Not so, Godfrey-Smith wants to argue. He includes some discussion of various theories of consciousness/subjective experience and how early or late it arose,[^5] but what interested me was an experiment that tried to test whether an animal, in this case a zebrafish, actually feels pain, or is only performing instinctive behaviors that look to us like pain.

The experiment goes like this: There are two environments, A and B, and the fish is known to prefer A to B. The experimenter injects the fish with a chemical thought to be painful. Then, the experimenter dissolves painkiller in environment B, and lets the fish choose again which environment it prefers. With the painkiller and the painful chemical, the fish prefers environment B (though with the painful chemical and no painkiller, it still prefers A). The fish seems to be choosing environment B in order to relieve its pain, and this isn’t the kind of situation the fish could have an evolved reflex to react to. Since the fish is behaving as we would expect it to if it felt pain, and the opposite of how we would expect it to if it didn’t, and a reflex can’t be the explanation, this is evidence that the fish feels pain, rather than simply seeming to feel pain.

What excited me about this was the idea that we could use experiments to tell us something about the inner lives of animals. Even though I’ve been thoroughly disabused of the idea of a philosophical zombie,[^6] I still had the idea that subjective experience is something that can’t really be tested “from the outside.” Reading about these experiments made me much more optimistic that experiments could be useful for determining whether and which animals are moral patients.

V. Aging

Another fact that might surprise (and perhaps sadden) you: octopuses, for the most part, only live about 2 years. One might think that intelligence is most advantageous when you live long enough to benefit from the things you learn with it. Nevertheless, octopuses only live about 2 years. Why is this? Godfrey-Smith posits that octopuses evolved intelligence not for the benefits of long-term learning, but simply to control their highly amorphous bodies. Since an octopus’s body can move so freely, it takes a very large nervous system to control it, and this gave rise to what intelligence they possess. Even so, once they had intelligence, shouldn’t this have caused selection pressure towards longer lives? I’m still confused on this count, but it does lead us to another question: why do most living organisms age in the first place? There are organisms that don’t, at least on the timescales we’ve observed them on so far, so why are there any that do? What evolutionary benefit does aging provide, or could it provide? One would think that aging, at least once an organism had reached maturity, would be strictly disadvantageous and thus selected against, so why do we mostly observe organisms that age and die?

Godfrey-Smith surveys several standard theories, but the one he presents as most likely to be correct (originated by Peter Medawar and George Williams) is as follows. Imagine an organism that didn’t age; once it reached its prime, it remained that way, able to survive and reproduce indefinitely until it died of e.g. predation, disease, a falling rock, or some other external cause, all of which I’ll call “accidental death.” If we assume the average probability of dying by accidental death is constant each year, then the organism’s probability of surviving to age n decreases as n increases. Thus, for large enough n, the probability of surviving to age n approaches 0, meaning that there is some age which the organism is almost certain to die before reaching, even without aging. Now imagine that the organism has a mutation with effects that are positive before that age, but negative after it. Such a mutation would have almost no selection pressure against it, since the organism would almost certainly die of accidental death before its negative effects could manifest. Thus, such mutations could accumulate, and the few organisms that did survive to that age would start to show those negative effects.

The truth is more general than that. In general, as the probability of surviving to age n gets lower, so does the selection pressure against any mutation whose negative effects only appear after age n. This theory predicts that organisms should exhibit a slow and steady increase of negative symptoms caused by mutations whose negative side effects only show up later, and an age which almost no individuals survive beyond, which is what we in fact observe.
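The Medawar/Williams logic above can be sketched numerically. Assuming, purely for illustration, a constant 10% annual chance of accidental death (my made-up number, not one from the book), survival falls geometrically with age, and with it the fraction of organisms alive to pay a late-acting mutation’s costs:

```python
# Sketch of mutation accumulation: with a constant annual chance of
# accidental death, P(alive at age n) falls geometrically, so selection
# against costs that appear only after age n fades as n grows.
# The 10% annual death rate is an arbitrary illustrative choice.

annual_death_rate = 0.10

def survive(n):
    """Probability of reaching age n without an accidental death."""
    return (1 - annual_death_rate) ** n

for age in (1, 10, 30, 60):
    # Selection against a cost appearing only after this age scales
    # (roughly) with the fraction of organisms still alive to pay it.
    print(age, round(survive(age), 4))
# Survival drops below 1% well before age 60, so a mutation whose costs
# appear only then faces almost no selection pressure against it.
```

Changing the death rate changes the timescale but not the shape: there is always some age that almost nothing survives past, which is where the late costs can hide.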

Still though, why should there be any positive pressure towards these mutations, even if there’s little pressure against them? Because, as I mentioned, at least some of these mutations might have positive effects that show up earlier, bound up with the negative effects that show up later. This positive selection pressure, combined with the reduced negative selection pressure (since the negative effects only show up late, after most organisms with the mutation have already died accidental deaths), is enough to get these mutations to fixation. Godfrey-Smith uses the analogy, originally due to George Williams, of putting money in a savings account to be accessed when you’re 120 years old. You’ll almost certainly be dead by then, so it’s rather pointless to save for that far off. In the same way, it’s evolutionarily pointless for organisms to pass up mutations that have positive effects now and negative effects later, when those negative effects only show up after the animal is almost certain to be dead by accidental death. So organisms take those mutations, and most do not survive to pay the price; aging is what happens to those who do.

If this is the correct evolutionary account of why aging occurs, it has an interesting implication for anti-aging research: there might be certain routes to eliminating aging that come with unforeseen downsides. If we were to eliminate aging by finding the genes that produce these negative side effects and turning them off (please forgive my utter ignorance of genetics and the science of aging), this could also rob us of whatever benefits those genes provided earlier in life that caused them to be adopted in the first place. This is not to say that we should not pursue anti-aging research (in fact I’m strongly in favor of it), but just that we should be on the lookout for this kind of trap, and avoid it if we can.


Appendix: Answers to True-False Questions

  1. Octopuses can squirt jets of ink as an escape tactic. True

  2. Octopuses have color vision. False

  3. Octopuses have bilateral symmetry. True

  4. Octopuses can camouflage themselves nearly perfectly by changing the color and texture of their skin to match whatever surface or object they are trying to blend into. True

  5. Octopuses can fit through any hole or gap bigger than their eye. True

  6. Octopuses can recognize individual humans. True

  7. Most octopus species live for more than 20 years. False

  8. Octopuses are mostly solitary animals. True

  9. Octopuses have been known to use shards of glass from shattered bottles on the seafloor as weapons to fight other octopuses. As far as I know, false


Notes:

[^1]: To give a sense of the relationship between the octopus’s central brain and its arms, here are some quotes from the book:

How does an octopus’s brain relate to its arms? Early work, looking at both behavior and anatomy, gave the impression that the arms enjoyed considerable independence. The channel of nerves that leads from each arm back to the central brain seemed pretty slim. Some behavioral studies gave the impression that octopuses did not even track where their own arms might be. As Roger Hanlon and John Messenger put it in their book Cephalopod Behavior, the arms seemed “curiously divorced” from the brain, at least in the control of basic motions. (67)

Some sort of mixture of localized and top-down control might be operating. The best experimental work I know that bears on this topic comes out of Binyamin Hochner’s laboratory at the Hebrew University of Jerusalem. A 2011 paper by Tamar Gutnick, Ruth Byrne, and Michael Kuba, along with Hochner, described a very clever experiment. They asked whether an octopus could learn to guide a single arm along a maze-like path to a specific place in order to obtain food. The task was set up in such a way that the arm’s own chemical sensors would not suffice to guide it to the food; the arm would have to leave the water at one point to reach the target location. But the maze walls were transparent, so the target location could be seen. The octopus would have to guide an arm through the maze with its eyes. It took a long while for the octopuses to learn to do this, but in the end, nearly all of the octopuses that were tested succeeded. The eyes can guide the arms. At the same time, the paper also noted that when the octopuses are doing well with this task, the arm that’s finding the food appears to do its own local exploration as it goes, crawling and feeling around. So it seems that two forms of control are working in tandem: there is central control of the arm’s overall path, via the eyes, combined with a fine-tuning of the search by the arm itself. (68-69)

[^2]: So why do I still think humans have a specific module for it? Here’s one possible reason: I’m guessing octopuses can’t recognize human faces—they probably use other cues, though nothing in the book speaks to this one way or the other. If that’s the case, then it might be true both that a general object-differentiating capability is enough to recognize individual humans, and that recognizing faces requires a specific module. If I found out that octopuses could recognize human faces specifically, not just individual humans by means other than face-recognition, I would strongly update in favor of humans having no specific face- or other-person-recognition module. In the same vein, the fact that people can lose the ability to recognize faces without it affecting any other visual capacities (a condition known as prosopagnosia or “face-blindness”) suggests that a single module is responsible for that ability.

[^3]: After reading Daniel Dennett’s Consciousness Explained, it’s actually not at all clear to me why Godfrey-Smith interprets Milner and Goodale this way. It seems more natural to suppose that they’re suggesting something similar to Dennett’s denial of the “Cartesian Theater” (the idea that there is somewhere in the brain where “it all comes together,” in some sort of “inner movie,” to use Chalmers’ phrase) and his replacement, the “Multiple Drafts Model” (which I don’t feel confident enough to summarize here).

[^4]: Another way this might play out is if the frog saw the fly, and only the fly, as reversed in its visual field, rather like a hallucination. I don’t see any reason why that would be impossible.

[^5]: Godfrey-Smith actually makes a distinction between “subjective experience” and “consciousness.” The way Godfrey-Smith uses these words, when we say that something has “subjective experience,” we’re just saying that there is something it feels like to be that thing, while the claim that something has “consciousness” is in some unspecified way stronger. So consciousness is a subset of subjective experience. He speculates that subjective experience arose fairly early, in the form of things like hunger signals and pain, while consciousness arose later and involves things like memory, a “global workspace,” integrated experience, etc.

[^6]: See Dennett, Intuition Pumps and Other Tools for Thinking, Ch. 55, “Zombies and Zimboes,” and Eliezer Yudkowsky’s essay “Zombies! Zombies?”