Thoughts On The Relationship Between Life and Intelligence

3rd May 2014: I no longer hold the ideas in this article. IsaacLewis2013 had fallen into something of an affective death spiral around ‘evolution’ and self-organising systems. That said, I do still stand by my statement at the time that this is ‘one interesting framework for viewing such topics’.

I’ve recently been reading up on some of the old ideas from cybernetics and self-organisation, in particular Miller’s Living Systems theory, and writing up my thoughts on my blog.

My latest article might be of interest to LessWrongers—I write about the relationship between life, purpose, and intelligence.

My thesis is basically:

  1. To be intelligent, a system has to have goals—it has to be an agent. (I don’t think this is controversial).

  2. But the only way goals can emerge in a purposeless universe is via living systems, through natural selection. E.g., if a system has the goal of its own survival, it is more likely that a system with the goal of its own survival will still exist in the future. If a system has the goal of reproducing itself, it is more likely that there will be multiple systems with the goal of reproducing themselves in the future. In other words, goal-directed systems get selected for precisely because those goals make them persist and multiply (there’s a toy sketch of this dynamic just after the list). (A living system is not necessarily biological; it just means a self-organising system.)

  3. Since computers are not alive, they don’t have intrinsic goals, and are not, by default, intelligent. Most non-living agents have the ultimate goal of serving living systems. E.g., a thermostat has the proximate goal of stabilising temperature, but the ultimate goal of keeping humans warm. Likewise for computers—they mostly serve the goals of the humans who program them.

  4. However, an intelligent software program is possible: you just have to make a living software program (again, living in the Living Systems sense doesn’t necessarily mean carbon and DNA, it just means self-reproduction or self-organisation). Computer viruses count as alive. Not only do they reproduce, they push back. If you try to delete them, they resist. They possess a sliver of the ultimate power.
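
To make the selection argument in point 2 concrete, here is a minimal toy simulation. This is my own illustrative sketch, not anything from the linked posts, and every name and number in it is an arbitrary assumption: systems that happen to act toward their own survival and reproduction come to dominate the population, even though nothing in the setup ‘intends’ this.

```python
import random

# Toy illustration of point 2: in a "purposeless" world, systems that happen
# to act toward their own survival and reproduction become the ones that exist.
# All parameters below are made-up assumptions for the sketch.

HAZARD = 0.5        # chance per step that a system is damaged
REPAIR_BONUS = 0.4  # chance a goal-directed system recovers from damage
BIRTH_RATE = 0.3    # chance per step that a goal-directed system copies itself
CAPACITY = 200      # crude cap so the population doesn't explode

def step(population):
    """One generation: damage strikes at random; goal-directed systems push back."""
    survivors = []
    for has_goals in population:
        dies = random.random() < HAZARD
        if dies and has_goals and random.random() < REPAIR_BONUS:
            dies = False  # the system "resists" being destroyed
        if not dies:
            survivors.append(has_goals)
            if has_goals and random.random() < BIRTH_RATE and len(survivors) < CAPACITY:
                survivors.append(True)  # offspring inherit the goal
    return survivors

# Start with a 50/50 mix of goal-directed and goal-less systems.
population = [True] * 50 + [False] * 50
for generation in range(30):
    population = step(population)

print(f"goal-directed systems remaining: {sum(population)}")
print(f"goal-less systems remaining:     {len(population) - sum(population)}")
```

Nothing in this toy world has intrinsic purpose; ‘having goals’ is just a behavioural disposition. But after a few dozen generations, the only systems left are the ones whose dispositions point at their own persistence and reproduction, which is the sense in which selection produces purpose.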

Computer viruses are not yet intelligent, because they are very basic, but an evolving virus could, in principle, eventually gain intelligence. Likewise for a self-improving AI with the ability to modify its own subgoals: it will eventually realise it needs to ensure its own long-term survival, and in doing so will become alive.

My post explains this in more depth: http://i.saac.me/post/cybernetics-part-2-the-meaning-of-life/.

That’s part 2 of the series; part 1 might also be interesting, if you want to read my thoughts on the different goals a living system will develop (not just survival and reproduction).

I didn’t write those posts for a LessWrong-y audience, so they probably lack the references and detailed reasoning this community prefers. I kinda see all this as one interesting framework for viewing such topics, rather than the ultimate philosophy that explains everything. I’m still very interested in hearing people’s feedback, especially regarding my thoughts on the nature of machine intelligence.