The strange thing to me is that this paper was published in early January. Why has it only reached mainstream attention now?
Jeremy Kalfus
The perception that the nematode nervous system is a simple mechanism because it has only 302 neurons holds only to the degree that complex subcellular processes do not significantly modulate its functioning. That assumption may be deeply mistaken. There is a longstanding project, known as OpenWorm, to emulate the nematode nervous system in a computer that can operate a robotic worm body. It has proven remarkably difficult: after 15 years of effort it remains a work in progress, reflecting how little we really understand even this paradigmatically simple nervous system.
It is absolutely the case that subcellular processes play a significant role in the behavior of C. elegans. I have worked with gene knockouts/knockdowns in various C. elegans experiments, and even knocking out a single base pair in a gene unrelated to neural function or neurotransmission can have drastic effects on how the worms behave. Dennis Bray, in Wetware, argues that single cells are each capable of performing highly complex computations at the subcellular level. He uses the example of chemotaxis in E. coli: to move toward higher concentrations of sugar, the bacteria "do" what is basically differentiation, comparing the current attractant concentration against the recent past and tumbling less when things are improving. If you think about it this way, consciousness goes a lot deeper than the number of neurons.
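Bray's "differentiation" point can be sketched in a few lines. This is a toy model of my own, not anything from Wetware: a 1-D run-and-tumble walker whose only "computation" is the recent change in concentration (a crude dC/dt), which turns out to be enough to climb the gradient. All names and parameters here are illustrative assumptions.

```python
import random

def concentration(x):
    """Toy 1-D attractant field: concentration rises with x (zero below 0)."""
    return max(0.0, x)

def run_and_tumble(steps=2000, seed=0):
    """Biased random walk: tumble less often while concentration is rising.

    The 'cell' senses only dC/dt -- the change in concentration since the
    last step -- which is the temporal differentiation Bray describes.
    """
    rng = random.Random(seed)
    x, direction = 0.0, 1
    prev_c = concentration(x)
    for _ in range(steps):
        x += 0.1 * direction
        c = concentration(x)
        dc_dt = c - prev_c           # the only quantity the cell "computes"
        prev_c = c
        # Keep running when dC/dt > 0; tumble (re-randomize direction)
        # much more often when it is flat or falling.
        p_tumble = 0.1 if dc_dt > 0 else 0.5
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])
    return x

print(run_and_tumble())  # typically ends far up the gradient
```

No memory of position, no map, no explicit goal: a single scalar derivative biases the walk uphill, which is roughly the spirit of the E. coli example.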
As noted for OpenWorm, simply modeling the connectome would not get you very far. If that weren't the case, simple projects like this GitHub repository would be able to simulate a C. elegans model that behaved exactly like the real thing (which is very much not the case). If you really wanted to match behavior exactly, I personally believe you'd have to account for every atomic (and maybe even subatomic) variable in the worm's body, perhaps with a molecular dynamics (MD) simulation.
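A quick way to see why the wiring diagram alone underdetermines behavior: the connectome tells you who connects to whom, but not the synaptic weights, signs, or dynamics. In this toy rate model of mine (a hypothetical 3-neuron circuit, not C. elegans data), the identical adjacency matrix produces opposite outputs depending on parameters the connectome does not record.

```python
import numpy as np

# Toy 3-neuron feedforward circuit. The adjacency matrix (who synapses onto
# whom) is roughly what a connectome gives you; weights and signs are not.
adjacency = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]])

def simulate(weights, steps=20):
    """Discrete-time rate model: x <- tanh(W^T x + input)."""
    w = adjacency * weights          # same wiring, different strengths/signs
    x = np.zeros(3)
    inp = np.array([1.0, 0.0, 0.0])  # external drive to the first neuron
    for _ in range(steps):
        x = np.tanh(w.T @ x + inp)
    return x[2]                      # activity of the 'output' neuron

excitatory = simulate(weights=np.full((3, 3), 1.0))
inhibitory = simulate(weights=np.full((3, 3), -1.0))
print(excitatory, inhibitory)  # same connectome, opposite-signed behavior
```

And this sketch still ignores gap junctions, neuromodulators, and all the subcellular machinery discussed above, which is exactly the gap OpenWorm keeps running into.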
Jeremy Kalfus’s Shortform
This might have already been said, but would an innate “will-to-reproduce” be a thing for superintelligent AI, as it is for us humans? Probably not, right? Life exists because it reproduces, but because AI is (literally) artificial, it wouldn’t have the same desire.
Doesn’t that mean that ASI would be fine with (or indifferent toward) ending all life on Earth along with itself, since it would see no reason to live?
Even if we could program a “will-to-reproduce” into it, like the one we have, wouldn’t that just mean it would go all Asimov and keep itself alive at all costs? Seems like a lose-lose scenario.
Am I overthinking this?
Amazing guide! I only wish I had read it earlier.
I absolutely agree that that must have been the answer. But surely at least one person could’ve seen it (and genuinely processed its implications), no? Or at the very least, the researchers themselves could’ve shared it with the world.
It makes me wonder what other secrets may be hiding in unpopular research papers, waiting to be mined.