The findings are interesting: they suggest models generalise better when they understand reasons rather than just patterns. That raises a question: if explaining why a rule exists already helps, could the next step be abstracting across rules to find the underlying principles that generate them? Rather than teaching models a better map of the rules, teach them what produces good rules in the first place.