Animals are still pretty solidly in the “abstractions over real-life systems” category for me, though. What I’m looking for, under the “continuum” argument, are any practically useful concepts which don’t clearly belong to either “theoretical concepts” or “real-life abstractions” according to my intuitions.
Biological systematization falls under “abstractions over real-life systems” for me as well, in the exact same way as “Earthly trees”. Conversely, “systems generated by genetic selection algorithms” is clearly a “pure concept”.
(You can sort of generate a continuum here, by gradually adding ever more details to the genetic algorithm until it exactly resembles the conditions of Earthly evolution… But I’m guessing Take 4 would still handle that: the resultant intermediary abstractions would likely either (1) show up in many places in the universe, on different abstraction levels, and clearly represent “pure” concepts, (2) show up in exactly one place in the universe, clearly corresponding to a specific type of real-life systems, (3) not show up at all.)
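To make that "gradually adding details" move concrete, here's a minimal sketch (my own illustration, not anything from the original discussion) of a bare genetic selection algorithm with optional knobs; each extra keyword argument, like the hypothetical `sexual_reproduction` flag, nudges the pure concept one step closer to the conditions of Earthly evolution:

```python
import random

def evolve(fitness, pop_size=50, genome_len=16, generations=100,
           mutation_rate=0.05, sexual_reproduction=False):
    """Truncation-selection loop; each extra keyword (e.g. the
    sexual_reproduction flag) adds one more 'Earthly' detail."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            if sexual_reproduction:
                a, b = random.sample(survivors, 2)
                child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            else:
                child = list(random.choice(survivors))               # asexual cloning
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate
                     else g for g in child]                          # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy usage: select genomes whose genes are all close to 0.5.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g),
              sexual_reproduction=True)
```

The continuum is then parameterized by how many such knobs you turn on; the question is whether the intermediary concepts so generated ever land anywhere other than buckets (1)–(3).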
A random thought I just had, coming from more mainstream theoretical CS/ML, specifically Geometric Deep Learning, is about inductive biases from the perspective of different geodesics.
Like, they talk about using structural invariants to design the inductive biases of different ML models, so if we're talking about general abstraction learning, my question is whether it even makes sense without taking the underlying inductive biases you have into account.
Like maybe the model of Natural Abstractions always has to be filtered through one inductive bias or another, and there are different optimal choices for different domains? Some of those biases might be convergent across domains, but you still have to pick some filter either way?
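For concreteness, here's a toy sketch of the "structural invariants as inductive bias" point (my own construction, loosely Deep-Sets-flavored, not taken from the GDL text): sum pooling hard-codes permutation invariance, so whatever abstraction the model learns is filtered through that symmetry by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))   # per-element feature map (sizes are arbitrary)
W_rho = rng.normal(size=(8, 1))   # readout applied after symmetric pooling

def deep_sets(X):
    """f(X) = rho(sum_i phi(x_i)); invariant to reordering the rows of X."""
    H = np.tanh(X @ W_phi)        # phi acts on each element independently
    pooled = H.sum(axis=0)        # sum pooling bakes in permutation invariance
    return pooled @ W_rho         # rho

X = rng.normal(size=(5, 3))                # a "set" of 5 elements
X_perm = X[rng.permutation(5)]             # same set, shuffled order
assert np.allclose(deep_sets(X), deep_sets(X_perm))  # holds by construction
```

Swap the pooling for something order-sensitive and the invariance, and with it the inductive bias, disappears; that's the sense in which the choice of structural invariant acts as the "filter".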
As stated, just a random thought, but I felt I should share it. Here's a quick overview link on GDL if you want to check it out more: https://geometricdeeplearning.com