In the past I considered the learning-theoretic approach to AI theory as somewhat opposed to the formal logic approach popular in MIRI (see also discussion):
Learning theory starts from formulating natural desiderata for agents, whereas “logic-AI” usually starts from postulating a logic-based model of the agent ad hoc.
Learning theory naturally allows analyzing computational complexity, whereas logic-AI often uses models that are either clearly intractable or even clearly uncomputable from the outset.
Learning theory focuses on objects that are observable or finite/constructive, whereas logic-AI often considers objects that are unobservable, infinite, and non-constructive (which I consider to be a philosophical error).
Learning theory emphasizes induction whereas logic-AI emphasizes deduction.
However, recently I noticed that quasi-Bayesian reinforcement learning and Turing reinforcement learning have very suggestive parallels to logic-AI. TRL agents have beliefs about computations they can run on the envelope: these are essentially beliefs about mathematical facts (but we only consider computable facts, and computational complexity plays some role there). QBRL agents reason in terms of hypotheses that have logical relationships between them: the order on functions corresponds to implication, taking the minimum of two functions corresponds to logical “and”, and taking the concave hull of two functions corresponds to logical “or” (but there is no “not”, so maybe it’s a sort of intuitionistic logic?). In fact, fuzzy beliefs form a continuous dcpo, and considering some reasonable classes of hypotheses probably leads to algebraic dcpo-s, suggesting a strong connection with domain theory (also, it seems like considering beliefs within different ontologies leads to a functor from some geometric category (the category of ontologies) to dcpo-s).
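To make the correspondence a bit more concrete, here is a rough sketch. It assumes hypotheses are represented as concave functionals f, g on distributions; this representation, the notation, and the direction of the order are my own illustrative choices rather than part of the formalism as stated above.

```latex
% Illustration only: hypotheses as concave functionals
% f, g : \Delta(X) \to \mathbb{R} (a hypothetical choice of representation;
% the direction of the order below is likewise a guess).
\begin{align*}
  f \Rightarrow g &\;:\iff\; f(\mu) \le g(\mu) \text{ for all } \mu \in \Delta(X)
    && \text{(implication as the pointwise order)} \\
  (f \wedge g)(\mu) &\;=\; \min\{f(\mu),\, g(\mu)\}
    && \text{(``and'' as the pointwise minimum, still concave)} \\
  f \vee g &\;=\; \operatorname{conc}\bigl(\max\{f,\, g\}\bigr)
    && \text{(``or'' as the concave hull, the least concave upper bound)}
\end{align*}
% No negation/complement is available here, so this gives a lattice rather
% than a Boolean algebra, matching the intuitionistic flavor noted above.
```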
These parallels suggest that the learning theory of QBRL/TRL will involve some form of deductive reasoning and some type of logic. But this doesn’t mean that QBRL/TRL is redundant w.r.t. logic-AI! In fact, QBRL/TRL might lead us to discover exactly which type of logic intelligent agents need and what role logic should play in the theory and inside the algorithms (instead of trying to guess and impose the answer ad hoc, which IMO has not worked very well so far). Moreover, I think that the type of logic we are going to get will be something finitist/constructivist, and in particular this is probably how Gödelian paradoxes will be avoided. However, the details remain to be seen.