The Conference on Computer-Aided Verification (CAV) has a number of interesting talks on verified neuro-symbolic ML. Recent videos include “modular synthesis of reactive programs”, “neuro-symbolic program synthesis from natural language and demonstrations”, and “gradient descent over metagrammars for syntax guided synthesis”. I think transformers are more powerful than any of these techniques, but they provide an interesting point of comparison for what a model (e.g. a transformer) must be able to learn in order to succeed. https://www.youtube.com/channel/UCe3M4Hc2hCeNGk54Dcbrbpw/videos
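To make the comparison concrete, the core of syntax-guided synthesis can be sketched as enumerative search over a grammar: generate candidate programs in order of size and return the first one consistent with the examples. The grammar, function names, and examples below are my own toy illustration, not taken from any of the talks; real SyGuS solvers add counterexample-guided refinement and heavy pruning.

```python
# Toy enumerative syntax-guided synthesis:
# enumerate expressions from a small grammar (depth-bounded)
# and return the first one matching all input-output examples.
from itertools import product


def enumerate_exprs(depth):
    """Grammar: E -> x | 1 | E + E | E * E, up to the given depth."""
    if depth == 0:
        yield ("x",)
        yield ("1",)
        return
    yield from enumerate_exprs(depth - 1)
    subexprs = list(enumerate_exprs(depth - 1))
    for op in ("+", "*"):
        for a, b in product(subexprs, repeat=2):
            yield (op, a, b)


def evaluate(expr, x):
    """Evaluate an expression tree at input x."""
    if expr == ("x",):
        return x
    if expr == ("1",):
        return 1
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "+" else va * vb


def synthesize(examples, max_depth=2):
    """Return the first grammar expression consistent with all examples."""
    for expr in enumerate_exprs(max_depth):
        if all(evaluate(expr, x) == y for x, y in examples):
            return expr
    return None


# Find an expression f with f(2) = 5 and f(3) = 7 (i.e. behaves like 2*x + 1)
solution = synthesize([(2, 5), (3, 7)])
print(solution)
```

Even this naive version conveys the learning problem: a model that replaces the enumerator must implicitly represent the grammar, the semantics of each operator, and the consistency check against examples.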