Well, we don’t have AGI right now, so there must be some missing ingredients… :-)
“direct my models to the topics that I want outputs on”
Well, you can invoke a high-confidence model: “a detailed solution to math problem X involving ingredients A, B, C”. Then the inference algorithm will shuffle through ideas in the brain, trying to build a self-consistent model that contains this shell of a thought but fills in the gaps with other pieces that fit. That would feel like trying to figure something out.
I think that’s more like inference than learning, but of course you can memorize whatever useful new composite models come up during this process.
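For concreteness, here is a minimal toy sketch of that shuffle-and-fill idea in Python. Everything in it is made up for illustration (the fragment pool, the hard-coded “consistency” score, the function names); it is a caricature of the described process, not a claim about how the brain or any real system does it: a partial “shell” is completed by searching remembered fragments for the most self-consistent composite, which can then be stored back, echoing the inference-versus-learning point.

```python
import itertools
import random

# Hypothetical memory of fragments, keyed by the kind of gap they can fill.
MEMORY_POOL = {
    "lemma":     ["triangle inequality", "AM-GM", "pigeonhole"],
    "technique": ["induction", "contradiction", "direct bound"],
    "step":      ["bound each term", "case split on parity", "telescope the sum"],
}

def consistency(candidate: dict) -> float:
    """Toy self-consistency score: reward combinations that 'fit together'.
    A few preferred pairings are hard-coded as a stand-in for whatever real
    compatibility check inference would perform."""
    score = 0.0
    if candidate["lemma"] == "AM-GM" and candidate["step"] == "bound each term":
        score += 1.0
    if candidate["technique"] == "induction" and candidate["step"] == "telescope the sum":
        score += 1.0
    return score + random.random() * 0.1  # small noise = the "shuffling"

def fill_shell(shell: dict, pool: dict) -> dict:
    """Enumerate ways of filling the gaps left open by the shell and keep the
    most self-consistent composite."""
    gaps = [slot for slot in pool if slot not in shell]
    best, best_score = None, float("-inf")
    for combo in itertools.product(*(pool[g] for g in gaps)):
        candidate = dict(shell, **dict(zip(gaps, combo)))
        s = consistency(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

# "a detailed solution to math problem X involving ingredient A"
shell = {"lemma": "AM-GM"}
composite = fill_shell(shell, MEMORY_POOL)
print(composite)

# The learning side of the exchange: if the composite turns out to be useful,
# it can simply be stored back into memory for future reuse.
MEMORY_POOL.setdefault("composite", []).append(str(composite))
```

The point of the sketch is only the shape of the loop: a partially specified thought, a search over stored pieces for the gaps, a self-consistency criterion selecting among candidates, and an optional memorization step for composites worth keeping.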