One way to automatically replicate “spending a whole lot of time working with a system / idea / domain, and getting to know it and understand it and manipulate it better and better over the course of time” is simply to have a truly huge context window. Example of an experiment: teach an LLM a particular branch of math it has never seen.
Maybe humans just have the equivalent of a huge context window spanning selected material from their entire lifetimes, which is why this kind of learning is possible for them.
I might be missing something crucial, but I don’t understand why this addition is necessary. Why do we have to specify “simple” boundaries on top of saying that we have to draw them around concentrations of unusually high probability density? Like, aren’t probability densities in Thingspace already naturally shaped in such a way that a boundary drawn around them is automatically simple? I don’t see how you run the risk of drawing weird, noncontiguous boundaries if you just follow the probability densities.
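To make the intuition concrete, here’s a toy sketch (my own illustration, not from the original discussion): model a single “concentration of unusually high probability density” as one Gaussian cluster in a 2-D Thingspace, draw the boundary by thresholding the density, and count how many contiguous regions result. For a unimodal cluster the boundary does come out as one simple blob; the open question is whether real Thingspace densities are this well-behaved (e.g. multimodal densities could yield several disconnected regions at the same threshold).

```python
import numpy as np
from collections import deque

# Grid over a 2-D "Thingspace" patch [-4, 4] x [-4, 4]
xs = np.linspace(-4, 4, 81)
X, Y = np.meshgrid(xs, xs)

# Density of a standard 2-D Gaussian: one concentration of high density
density = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)

# "Draw the boundary" by taking the superlevel set at a density threshold
mask = density > 0.05

def components(mask):
    """Count contiguous regions in a boolean grid via BFS flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                count += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < mask.shape[0]
                                and 0 <= nb < mask.shape[1]
                                and mask[na, nb] and not seen[na, nb]):
                            seen[na, nb] = True
                            q.append((na, nb))
    return count

# A single unimodal cluster gives exactly one contiguous high-density region
print(components(mask))  # → 1
```

Whether this generalizes is exactly the question: replace the single Gaussian with a mixture of well-separated modes and the same thresholding procedure can return more than one region, so “just follow the densities” only guarantees simple boundaries if each cluster is itself unimodal.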