I’m having trouble following your criticism. When you say that the human brain does not necessarily use any deep insights into intelligence, do you mean that it only uses processes for which we already have functionally equivalent algorithms, and that interpreting all the functions the brain implements is merely a problem of scale? Or do you disagree with the definition of “deep”? I have no doubt that, given enough time, we could create an algorithm to functionally emulate a human brain; but would we understand the overall algorithm beyond “this part runs QCD on these cells” or “this neural network implements consciousness”? I think EY’s point is that to understand intelligence we will need to fundamentally understand why a particular algorithm yields, for instance, an internal experience of consciousness. I don’t think we have that understanding at present, and presumably that definition of conscious experience makes up at least one of the thousand deep insights EY refers to.
EDIT: I suppose your argument could be that the brain does not need to understand intelligence in order to implement it. That doesn’t help us if we’re trying to safely re-implement intelligence, though.