Yes, what you say makes sense. One caveat might be that object-level gradient descent isn’t always the thing we want an analogy for—we might expect future systems to do a lot of meta-learning, where evolution might be a better analogy than human learning. Or we might expect future systems to take actions that affect their own architecture in a way that looks like deliberate engineering, which doesn’t have a great analogy with either.