A random thought I just had, coming from more mainstream theoretical CS/ML, specifically Geometric Deep Learning (GDL): it’s about inductive biases from the perspective of different geodesics.
Like, they talk about using structural invariants (symmetries of the data domain) to design the inductive biases of different ML models. So if we’re talking about general abstraction learning, my question is whether it even makes sense without taking the underlying inductive biases you have into account?
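To make the "structural invariants as inductive biases" idea concrete, here’s a minimal sketch (not from the GDL book itself, just an illustrative Deep-Sets-style example in PyTorch): sum-pooling over set elements hard-codes permutation invariance into the architecture, so the learned "abstraction" of a set can’t depend on element order by construction.

```python
# Minimal sketch: encoding a structural invariant (permutation symmetry)
# as an architectural inductive bias. Names here are illustrative.
import torch
import torch.nn as nn

class PermutationInvariantEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # per-element map
        self.rho = nn.Linear(hidden, out_dim)                            # post-pooling map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, in_dim); summing over the set axis discards ordering,
        # so the output is invariant to any permutation of the input elements.
        return self.rho(self.phi(x).sum(dim=1))

enc = PermutationInvariantEncoder(in_dim=3, hidden=16, out_dim=4)
x = torch.randn(2, 5, 3)
x_perm = x[:, torch.randperm(5), :]  # reorder the set elements
assert torch.allclose(enc(x), enc(x_perm), atol=1e-5)  # same output, any order
```

The point of the sketch: whatever abstractions this model learns are filtered through that baked-in symmetry assumption, which is exactly the kind of "filter" I’m gesturing at below.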
Like, maybe the model of Natural Abstractions always has to filter through one inductive bias or another, and there are different optimal choices for different domains? Some abstractions might be convergent across biases, but you still have to go through some filter or other?
As stated, a random thought, but I felt I should share. Here’s a quick overarching link on GDL if you wanna check it out more: https://geometricdeeplearning.com