This was a great read! I wonder how much you’re committed to “brain-inspired” vs “mind-inspired” AGI, given that the approach to “understanding the human brain” you outline seems to correspond to Marr’s computational and algorithmic levels of analysis, as opposed to the implementational level (see link for reference). In which case, some would argue, you don’t necessarily have to do too much neuroscience to reverse engineer human intelligence. A lot can be gleaned by doing classic psychological experiments to validate the functional roles of various aspects of human intelligence, before examining in more detail their algorithms and data structures (perhaps this time with the help of brain imaging, but also carefully designed experiments that elicit human problem solving heuristics, search strategies, and learning curves).
I ask because I think “brain-inspired” often gets immediately associated with neural networks, and not, say, methods for fast and approximate Bayesian inference (MCMC, particle filters), which are less in the AI zeitgeist nowadays, but still very much how cognitive scientists understand the human mind and its capabilities.
https://onlinelibrary.wiley.com/doi/full/10.1111/tops.12137
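To make “fast and approximate Bayesian inference” concrete, here is a minimal sketch of random-walk Metropolis, one of the simplest MCMC methods of the kind alluded to above. This is an illustrative toy (the function names are mine, not from any particular library), not anyone’s proposed cognitive model:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler.

    Draws approximate samples from an unnormalized density p(x),
    given only its log density log_p, by proposing local moves and
    accepting them with probability min(1, p(x_new) / p(x)).
    """
    rng = random.Random(seed)
    x = x0
    lp = log_p(x)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)   # propose a nearby point
        lp_new = log_p(x_new)
        # accept/reject in log space to avoid overflow
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy target: a standard normal, specified only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)   # should be near 0
```

The appeal for cognitive modeling is that the sampler only ever needs cheap local evaluations of the target, which is one reason sampling-based approximations get proposed as process-level accounts of human inference.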
Thanks! I guess my feeling is that we have a lot of good implementation-level ideas (and keep getting more), and we have a bunch of algorithm ideas, and psychology ideas and introspection and evolution and so on, and we keep piecing all these things together, across all the different levels, into coherent stories, and that’s the approach I think will (if continued) lead to AGI.
Like, I am in fact very interested in “methods for fast and approximate Bayesian inference” as being relevant for neuroscience and AGI, but I wasn’t really interested in it until I learned a bunch of supporting ideas about what part of the brain is doing that, and how it works on the neuron level, and how and when and why that particular capability evolved in that part of the brain. Maybe that’s just me.
I haven’t seen compelling (to me) examples of people going successfully from psychology to algorithms without stopping to consider anything whatsoever about how the brain is constructed. Hmm, maybe very early Steve Grossberg stuff? But he talks about the brain constantly now.
One reason it’s tricky to make sense of psychology data on its own, I think, is the interplay between (1) learning algorithms, (2) learned content (a.k.a. “trained models”), (3) innate hardwired behaviors (mainly in the brainstem & hypothalamus). What you especially want for AGI is to learn about #1, but experiments on adults are dominated by #2, and experiments on infants are dominated by #3, I think.
Some recent examples, off the top of my head!
Jain, Y. R., Callaway, F., Griffiths, T. L., Dayan, P., Krueger, P. M., & Lieder, F. (2021). A computational process-tracing method for measuring people’s planning strategies and how they change over time.
Dasgupta, I., Schulz, E., Tenenbaum, J. B., & Gershman, S. J. (2020). A theory of learning to infer. Psychological review, 127(3), 412.
Harrison, P., Marjieh, R., Adolfi, F., van Rijn, P., Anglada-Tort, M., Tchernichovski, O., … & Jacoby, N. (2020). Gibbs Sampling with People. Advances in Neural Information Processing Systems, 33.
I guess this depends on how much you think we can make progress towards AGI by learning what’s innate / hardwired / learned at an early age in humans and building that into AI systems, vs. taking more of a “learn everything” approach! I personally think there may still be a lot of interesting human-like thinking and problem-solving strategies that we haven’t figured out how to implement as algorithms yet (e.g. how humans learn to program, and edit + modify programs and libraries to make them better over time), and that adult and child studies would be useful for characterizing what we might even be aiming for, even if ultimately the solution is to use some kind of generic learning algorithm to reproduce it. I also think there’s a fruitful question in between (1) and (3), which is to ask, “What are the inductive biases that guide human learning?”, which I think you can make a lot of headway on without getting to the neural level.