the representations learned by humans being right off the bat more “gears-level”. Maybe something like Hawkins's reference frames: some decomposability that enables much more powerful abstraction, and that has to be pounded into an NN with millions of examples.
That makes a lot of sense, and if that's true then that's hopeful for interpretability efforts. It would be easier to read inside an ML model if it's composed of parts that map to real concepts.