Interesting! I’m not following everything, but it sounds like you’re describing human cognition for the most part.
I found it interesting that you used the phrase “constraint satisfaction”. I think this concept is crucial for understanding human intelligence; but it’s not used very widely. So I’m curious where you picked it up.
I agree with your conclusion on the alignment section: these are low-resolution ideas that seem worth fleshing out.
Good job putting this out there without obsessively polishing it. That shares at least some of your ideas with the rest of us, so we can build on them in parallel with you polishing your understanding and your presentation.
Thanks a lot for the encouragement :)
Yes, I am trying to understand a generalized (which also means simplified) and formalizable parallel to human cognition. Some of my thinking on this is inspired by predictive coding and adaptive resonance theory (although pretty loosely by the latter), and I am trying to figure out the implications of our most up-to-date understanding of neurobiological principles, together with a notion of the “riverbeds of cognition”.
In other words, how can we design an architecture such that it is not pressured to take shortcuts or “work around” design decisions we made as its cognition develops? Is there a “natural path” of cognitive development that avoids some of the common pitfalls and failure modes (i.e., can we achieve inner alignment if we have proficiency in this area)?
This has a direct bearing on interpretability, and goes together with the goal of a sort of “conceptual curriculum” that is intended to teach the system natural abstractions.
If I remember correctly, the centrality of “constraint satisfaction” fell out of considering causal (hyper/meta)graphs as a sensible representational substrate (which was partially inspired by Ben Goertzel). I personally find it quite intuitive to think in graphs.
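For readers unfamiliar with the term, here is a minimal sketch of what a directed causal hypergraph might look like as a data structure. The representation (sets of jointly acting causes mapping to a single effect) and all names are illustrative assumptions on my part, not a reconstruction of Goertzel’s or anyone else’s system:

```python
from dataclasses import dataclass, field

# Minimal sketch of a directed causal hypergraph: each hyperedge maps a
# *set* of jointly acting causes to one effect. Illustrative only.

@dataclass
class CausalHypergraph:
    nodes: set[str] = field(default_factory=set)
    # Each hyperedge: (frozenset of cause nodes, effect node)
    edges: list[tuple[frozenset[str], str]] = field(default_factory=list)

    def add_cause(self, causes: set[str], effect: str) -> None:
        self.nodes |= causes | {effect}
        self.edges.append((frozenset(causes), effect))

    def direct_causes(self, effect: str) -> list[frozenset[str]]:
        """All cause-sets that jointly produce `effect`."""
        return [c for c, e in self.edges if e == effect]

g = CausalHypergraph()
g.add_cause({"rain", "no_umbrella"}, "wet")  # joint causation needs a hyperedge
g.add_cause({"sprinkler"}, "wet")            # an ordinary edge is the 1-cause case
print(g.direct_causes("wet"))
```

The point of the hyperedge (versus a plain graph edge) is that “rain AND no umbrella” is a single causal unit, which ordinary pairwise edges cannot express directly.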
In my case, introspection led me to the realisation that human reasoning consists, to a large degree, of two interlocking parts: finding constraints on the solution space, and constraint satisfaction (there is a toy sketch of this below).
This has the interesting corollary that AI systems that reach human or superhuman performance by adding search to NNs are not really implementing reasoning but rather brute-forcing it.
It also makes me sceptical that LLMs+search will be AGI.
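To make the two interlocking parts concrete, here is a toy sketch in which one function derives explicit constraints from domain knowledge and another searches for an assignment that satisfies them, using map colouring as the worked example. This is purely an illustration of the framing above, not a claim about how the brain implements it:

```python
# Part 1: finding constraints on the solution space.
# Part 2: constraint satisfaction via backtracking search.

def find_constraints(adjacency):
    """Turn domain knowledge (here, a map) into explicit constraints:
    each pair of adjacent regions must differ."""
    return [(a, b) for a, nbrs in adjacency.items() for b in nbrs if a < b]

def consistent(assignment, constraints):
    return all(assignment[a] != assignment[b]
               for a, b in constraints
               if a in assignment and b in assignment)

def satisfy(variables, domains, constraints, assignment=None):
    """Backtracking search for an assignment meeting every constraint."""
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains:
        assignment[var] = value
        if consistent(assignment, constraints):
            result = satisfy(variables, domains, constraints, assignment)
            if result is not None:
                return result
    return None

adjacency = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
constraints = find_constraints(adjacency)
print(satisfy(list(adjacency), ["red", "green", "blue"], constraints))
```

On this framing, the “reasoning” lives mostly in `find_constraints` (carving down the solution space), while `satisfy` is the part that adding search to an NN gives you for free.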
I agree with all of that, even being sceptical that LLMs plus search will reach AGI. The lack of constraint satisfaction of the kind the human brain does could be a real stumbling block.
But LLMs have copied a good bit of our reasoning and therefore our semantic search. So they can do something like constraint satisfaction.
Put the constraints into a query, and the answer will satisfy those constraints. The process used is different from a human brain’s, but for every problem I can think of, the results are the same.
Now, that’s partly because every problem I can think of is one I’ve already seen solved. But my ability to do truly novel problem solving is rarely used and pretty limited. So I’m not sure the LLM couldn’t do just as good a job if it had a scaffolded script to explore its knowledge base from a few different angles.
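A scaffold along those lines might look something like the following sketch. `ask_llm` is a hypothetical stand-in for whatever model call is actually available, and the three “angles” are my own illustrative choices, not a tested recipe:

```python
# Hypothetical scaffold for exploring an LLM's knowledge base from
# several angles before answering. `ask_llm` is a placeholder, not a
# real API: swap in any chat-completion call you actually have.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

ANGLES = [
    "List the hard constraints any solution must satisfy: {problem}",
    "List solved problems that are structurally analogous to: {problem}",
    "Propose a solution and check it against each constraint: {problem}",
]

def scaffolded_solve(problem: str) -> str:
    # Explore from each angle, then synthesize a constraint-respecting answer.
    notes = [ask_llm(angle.format(problem=problem)) for angle in ANGLES]
    synthesis = (
        "Using these notes, give a final answer that satisfies every "
        "constraint identified:\n" + "\n---\n".join(notes)
        + f"\nProblem: {problem}"
    )
    return ask_llm(synthesis)
```

The design mirrors the earlier framing: the first two angles approximate constraint-finding, and the final synthesis call approximates constraint satisfaction over the model’s own retrieved knowledge.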