I think that the lockpicking example is apt in a way, but it’s worth pointing out that there is more of a continuum between looking at brains and inferring useful principles vs. copying them in detail; you can imagine increasing understanding reducing the computational demands of emulations, replicating various features of human cognition, deriving useful principles for other AI, or reaching superhuman performance in various domains before an emulation is cheap enough to be a broadly human-level replacement.
Personally, I would guess that brain emulation is a technology that is particularly likely to result in a big jump in capabilities. A similar line of argument also suggests that brain emulation per se is somewhat unlikely, as opposed to increasingly rapid AI progress as our ability to learn from the brain grows. Nevertheless, we can imagine a situation where our understanding of neuroscience remains very poor but our neuroimaging and computation are good enough to run an emulation, and that really could lead to a huge jump.
For AI, it seems the situation is not so much that one might see very fast progress (in terms of absolute quality of technical achievement) as that one might not realize how far one has come. This is not entirely unrelated to the possibility of surprise from brain emulation; both are possible because it might be very hard to understand what a human-level or near-human-level intelligence is doing, even if you can watch it think (or even if you built it).
For other capabilities, we normally imagine that before you build a machine that does X, you will build a machine that does almost-X and you will understand why it manages to do almost-X. Likewise, we imagine that before we could understand how an animal did X well enough to copy it exactly, we would understand enough principles to make an almost-copy which did almost-X. Whether intelligence is really unique in this way, or if this is just an error that will get cleared up as we approach human-level AI, remains to be seen.
There is a continuum between understanding the brain well and copying it in detail. But for much of that spectrum, where a large part of the capability still comes from faithful copying, I would expect a jump. Perhaps a better analogy would involve many locked boxes of nanotechnology, where we get the whole picture only once we have a combination of enough lockpicking skill and enough nanotech understanding.
Do you mean that this line of argument is evidence against brain emulations per se because such jumps are rare?
For AI, the most common arguments I have heard for fast progress involve recursive self-improvement, and/or insights related to intelligence being particularly large and chunky for some reason. Do you mean these are possible because we don’t know how far we have come, or are you thinking of another line of reasoning?
It seems to me that any capability you wished to copy from an animal via careful replication, rather than via understanding, would have this character of perhaps progressing quickly once your copying abilities become sufficient. I can't think of anything else anyone tries to copy in this way, though, which is perhaps telling.