Randal Koene on brain understanding before whole brain emulation

Background / Context

Some people, including me, think that it will be very hard to write and run Artificial General Intelligence (AGI) code without a serious risk of catastrophic accidents, up to and including human extinction (see for example my post here).

If that’s true, one option that’s sometimes brought up is a particular Differential Technological Development strategy, wherein we specifically try to get the technology for Whole Brain Emulation (WBE) before we have the technology for writing AGI source code.

Would that actually help solve the problem? I mean, other things equal, if flesh-and-blood humans have probability P of accidentally creating catastrophically-out-of-control AGIs, well, emulated human brains would do the exact same thing with the exact same probability…. Right? Well, maybe. Or it might be more complicated than that. There’s a nice discussion about this in the report from the 2011 “Singularity Summit”. That’s from a decade ago, but AFAICT not much progress has been made since then towards clarifying this particular strategic question.

Anyway, when considering whether or not we should strategically try to differentially accelerate WBE technology, one important aspect is whether it would be feasible to get WBE without incidentally first understanding brain algorithms well enough to code an AGI from scratch using similar algorithms. So that brings us to the point of this post:

The quote

Randal Koene is apparently very big in WBE circles—he coined the term “WBE”, he’s the co-founder of The Carboncopies Foundation, he’s a PhD computational neuroscientist and neuroengineer, etc. etc. Anyway, in a recent interview he seems to come down firmly on the side of “we will understand brain algorithms before WBE”. Here’s his reasoning:

Interviewer (Paul Middlebrooks): Engineering and science-for-understanding are not at odds with each other, necessarily, but they are two different things, and I know this is an open question, but in your opinion how much do we need to understand brains, how much do we need to understand minds, and what is a mind, and how brains and minds are related, how much is understanding part of this picture? Randal, let’s start with you.

Interviewee (Randal Koene): I think I may have shifted my views on that a bit over time, as I understand more about the practical problem of how would you get from where we are to where you can do WBE, and just looking at it in that sense. Y’know, in the past I might have emphasized more that the idea behind WBE is precisely that you don’t need to know everything about the brain as long as you know how the underlying mechanisms work, if you can scan enough, and you can put those mechanisms together, then you’re gonna end up with a working brain.

That’s a bit naïve because it presumes that we collect data correctly, that we collect the right data, that we know how to transform that data to the parameters we use in the model, that we’re using the right model, all this kind of stuff, right? And all these questions that I just mentioned, they all require testing. And so validation is a huge issue, and that’s where the understanding of the brain comes in, because if you want to validate that at least the model you’ve built works like a human hippocampus, then you need to have a fairly good understanding of how a human hippocampus works, then you can see whether your system even fits within those boundaries before you can even say “Is this Steve’s hippocampus?”

So I would still say that the thing that WBE holds as a tenet is that we don’t need to understand everything about Steve to be able to make a WBE of Steve. We need to understand a heck of a lot about human brains so that we can build a testable model of a human brain that will then house Steve. But we can collect the data about Steve that makes that personalized and tuned to be Steve. So we need to understand a lot about the brain, but in the context of how brains work, not how Steve’s brain works, that’s where you would then be taking the data, and of course you need to know a lot about that transformation of what makes it Steve’s brain in this particular case. (Source: Brain Inspired podcast, 1:15:00)

I just thought this was an interesting perspective and wanted to put it out there.

(For my part, I happen to agree with the conclusion that it’s probably infeasible to do successful WBE without first understanding brain algorithms well enough to make AGI, but for kinda different (non-mutually-exclusive) reasons—basically I think the former is just way harder than the latter. See Building brain-inspired AGI is infinitely easier than understanding the brain. OK, well, I was talking about something slightly different in that post—”understanding” is not the same as “emulating”—but it would be mostly the same arguments and examples.)