It was conjectured that intelligence would be far more complex to design from scratch than machines for physical labor, and that we would therefore need to rely on what evolution had already created, building on top of it.
I’m pretty sure that’s what we did do? We just chose to copy at the abstraction level of neural nets and predictive processing instead of at the physical substrate level of neurons and proteins. We definitely didn’t take the path of actually understanding what we were doing in all its complexity.
I think the issue is that we’re building external intelligences instead of enhancing our own. We have a Cartesian relationship instead of an embedded one with respect to the current approach to AI (referencing Scott & Abram’s distinction of agent types). It’d be akin to a child building their adult form vs. the child growing into that adult.
I mean, one could definitely imagine/develop AIs which are further from human neurobiology, but current AIs are quite far away from it, in my view.
I think this is a very important point. The default expectation was (1) “intelligence is too complex to design from scratch, so we likely need to rely on something” and (2) “so we will rely on human neurobiology”. While (1) has proven to be true, (2) not so much, and (2) was not the only possible conclusion from (1).
That’s pretty much my position, yes, with the small caveat that not relying on neurobiology did not mean excluding our neuroscience-inspired abstract models of cognition.
Otherwise, excellent post.

Thanks!