this would actually happen in practice—i.e. why humans would learn high-level models which are related to the universe’s lower-level structure in this way.
My intuition is that this makes it possible to model long-range dependencies at minimal cognitive cost.
Also, do you think there’s some utility in modelling the abstraction of X_L into X_H as a forgetful functor?
Plausibly, abstracting X_L into X_H does have that forgetful-functor flavor to it.
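To make that flavor concrete, here is a minimal sketch (my formalization; the categories C_L and C_H and the extra structure sigma are illustrative assumptions, not notation from the thread): treat a low-level model as a high-level summary X_H equipped with extra fine-grained structure sigma, so that abstraction discards sigma the way a forgetful functor discards algebraic structure.

```latex
% Sketch only: \mathcal{C}_L, \mathcal{C}_H, and \sigma are my own
% illustrative notation. Abstraction acts on objects by forgetting the
% low-level structure and on morphisms by keeping only the induced
% high-level map:
\[
  U \colon \mathcal{C}_L \longrightarrow \mathcal{C}_H,
  \qquad
  U(X_H, \sigma) = X_H,
  \qquad
  U\bigl(f \colon (X_H, \sigma) \to (Y_H, \tau)\bigr)
    = \bar{f} \colon X_H \to Y_H.
\]
% This is the same pattern as the classical forgetful functor
% \mathbf{Grp} \to \mathbf{Set}, which sends a group to its underlying
% set and a homomorphism to its underlying function.
```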