We Haven’t Uploaded Worms

In theory you can upload someone’s mind onto a computer, allowing them to live forever as a digital form of consciousness, just like in the Johnny Depp film Transcendence.

But it’s not just science fiction. Sure, scientists aren’t anywhere near close to achieving such a feat with humans (and even if they could, the ethics would be pretty fraught), but now an international team of researchers have managed to do just that with the roundworm Caenorhabditis elegans.
(Science Alert)

Uploading an animal, even one as simple as C. elegans, would be very impressive. Unfortunately, we’re not there yet. What the people working on OpenWorm have done instead is build a working robot based on the C. elegans connectome and show that it can do some things that the worm can do.

The C. elegans nematode has only 302 neurons, and every nematode has the same fixed wiring pattern. We’ve known this pattern, or connectome, since 1986. [1] In a simple model, each neuron has a threshold and will fire if the weighted sum of its inputs is greater than that threshold. This means knowing the connections isn’t enough: we also need to know the weights and thresholds. Unfortunately, we haven’t figured out a way to read these values off of real worms. Suzuki et al. (2005) [2] ran a genetic algorithm to learn values for these parameters that would give a somewhat realistic worm, and showed various wormlike behaviors in software. The recent stories about the OpenWorm project are about them doing something similar in hardware. [3]
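
To make this concrete, here’s a minimal sketch of such a threshold-unit model in Python. The wiring is the known part; the weights and thresholds are the free parameters a genetic algorithm would have to fill in, and every number below is invented purely for illustration.

```
# Minimal sketch of a threshold-unit model: the connectome says who
# connects to whom; the weights and thresholds are the unknowns.
# All values here are made up for illustration.

def step(state, weights, thresholds):
    """Advance the network one tick: neuron i fires (1) if the
    weighted sum of its inputs exceeds its threshold, else 0."""
    return [
        1 if sum(weights[i][j] * state[j] for j in range(len(state))) > t
        else 0
        for i, t in enumerate(thresholds)
    ]

# Toy three-neuron "connectome": weights[i][j] is the strength of the
# connection from neuron j to neuron i; 0.0 means no connection.
weights = [
    [0.0, 0.8, -0.4],
    [0.6, 0.0, 0.0],
    [0.9, 0.3, 0.0],
]
thresholds = [0.5, 0.2, 0.7]

state = [1, 0, 0]  # start with neuron 0 firing
for tick in range(5):
    state = step(state, weights, thresholds)
    print(tick, state)
```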

To see why this isn’t enough, consider that nematodes are capable of learning. Sasakura and Mori (2013) [5] provide a reasonable overview. For example, nematodes can learn that a certain temperature indicates food, and will then seek out that temperature. They don’t do this by growing new neurons or connections, so they must be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can’t learn. They also don’t read weights off of any individual worm, which means we can’t talk about any specific worm as being uploaded.
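
For a sense of what the fixed-weight simulations are missing, here’s one illustrative (and certainly oversimplified) way weights could change with experience. This Hebbian-style rule is just a placeholder, not what C. elegans actually does or what any of these projects implement:

```
def hebbian_update(state, weights, learning_rate=0.1):
    """Toy plasticity rule: strengthen the connection from neuron j
    to neuron i whenever both fired on the same tick. A placeholder
    for whatever the worm's real (unknown) update rule is."""
    n = len(state)
    for i in range(n):
        for j in range(n):
            if weights[i][j] != 0 and state[i] == 1 and state[j] == 1:
                weights[i][j] += learning_rate
    return weights
```

A simulation that never runs anything like this update step can reproduce a worm’s fixed reflexes, but it has no way to pick up a new temperature-food association.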

If this doesn’t count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to a stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to the simulated stimulus the same way their physical versions had, that would be good progress. Additionally, you’d want to demonstrate that similar learning was possible in the simulated environment.
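
In code, the check might look something like the sketch below, where scan() and simulate() stand in for technology we don’t have and the response labels are hypothetical:

```
# Hypothetical validation: both groups saw the same stimulus but were
# trained to respond differently. If uploading works, the simulations
# should preserve that trained difference. scan() and simulate() are
# placeholders for capabilities that don't exist yet.

def preserves_training(approach_worms, avoid_worms, scan, simulate, stimulus):
    sims_a = [simulate(scan(w)) for w in approach_worms]
    sims_b = [simulate(scan(w)) for w in avoid_worms]
    return (all(s.respond(stimulus) == "approach" for s in sims_a) and
            all(s.respond(stimulus) == "avoid" for s in sims_b))
```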

(I looked at some of this research before, in a 2011 post on what progress with nematodes might tell us about uploading humans. Since then not much has changed with nematode simulation. Moore’s law looks to be doing much worse in 2014 than it did in 2011, however, which makes the prospects for whole brain emulation substantially worse.)

I also posted this on my blog.


[1] The Structure of the Nervous System of the Nematode Caenorhabditis elegans, White et al. (1986).

[2] A Model of Motor Control of the Nematode C. elegans With Neuronal Circuits, Suzuki et al. (2005).

[3] It looks like instead of learning weights, Busbice just set them all to +1 (excitatory) or −1 (inhibitory). It’s not clear to me how they knew which connections were which; my best guess is that they’re using the “what happens to work” details from [2]. Their full writeup is [4].

[4] The Robotic Worm, Busbice (2014).

[5] Behavioral Plasticity, Learning, and Memory in C. elegans, Sasakura and Mori (2013).