Epistemic Motif of Abstract-Concrete Cycles & Domain Expansion

(crossposted from here)

I’ve noticed there are certain epistemic motifs (i.e. legible & consistent patterns of knowledge production) that come up across a fairly wide range of circumstances in mathematics and philosophy. Here’s one that I think is fairly powerful:

Abstract-Concrete Cycles and Domain Expansion

Given a particular object [structure/definition] that models some concept, we can modify it by either [abstracting out some of its features] or [instantiating a more concrete version of it] as we [modify the domain of discourse] that the object operates on, accompanied by [intuition juice from the real world that guides our search]. Repeat this over various abstraction levels, and you end up with a richer set of objects.

Examples of how this plays out

a) Space

Here’s an example from mathematics (inspired from here). You have some concrete notion of a thing you want to capture, say, space. So you operationalize it using some immediately obvious definition like $\mathbb{R}^3$, which, being extremely concrete, comes equipped with a bunch of implicit structure (a metric, angles, explicit coordinates, etc.), much of which you probably didn’t explicitly intend and some of which is superfluous to your aims.
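Concretely, $\mathbb{R}^3$ with its usual coordinates already carries all of this implicit structure (standard formulas, spelled out here for illustration):

$$d(x, y) = \sqrt{\sum_{i=1}^{3} (x_i - y_i)^2}, \qquad \langle x, y \rangle = \sum_{i=1}^{3} x_i y_i, \qquad \cos\theta = \frac{\langle x, y \rangle}{\lVert x \rVert \, \lVert y \rVert},$$

i.e. a distance, a magnitude-and-angle structure, and a distinguished coordinate system, none of which you explicitly asked for.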

Then you abstract away the structures one-by-one (two of the resulting definitions are sketched after this list), e.g.,

  • Metric spaces abstract out the notion of “distance” using metrics.

  • Inner product spaces abstract out the notion of “similarity,” which cashes out to abstracting the notions of “magnitude” and “angle.”

  • Topological spaces abstract out the notion of “locality” using open sets.
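To make the contrast concrete, here is a minimal sketch of two of these abstraction levels (standard definitions, nothing specific to this post). A metric space is just a set $X$ with a map $d : X \times X \to \mathbb{R}$ such that, for all $x, y, z \in X$:

$$d(x,y) \ge 0, \qquad d(x,y) = 0 \iff x = y, \qquad d(x,y) = d(y,x), \qquad d(x,z) \le d(x,y) + d(y,z).$$

A topological space keeps even less: only a collection $\tau$ of “open” subsets of $X$ that contains $\emptyset$ and $X$ and is closed under arbitrary unions and finite intersections. Angles, coordinates, and even numerical distances are gone at this level.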

And with this more abstract notion of space in hand, you can project it down to more concrete structures in domains different from the one originally under consideration, effectively using it as a generator, e.g.,

  • Function spaces: “Now that I have this much less restrictive notion of space, what happens if I extend its domain of discourse to, say, infinite-dimensional objects like functions?”
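As a standard illustration (my example, not one from the post): the metric-space axioms above transfer wholesale to the set of continuous functions on $[0,1]$ via the sup metric

$$d(f, g) = \sup_{x \in [0,1]} |f(x) - g(x)|,$$

so the same abstract notion of “space” now concretely describes points that are themselves infinite-dimensional objects.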

From here, the loop continues. With a concrete structure in hand that captures what you want, but now flavored by the different domain it operates in, you may gain insight into how certain structures that were not originally under your consideration might be further abstracted away.

  • Of course, this process is guided by our [intuition about real-world stuff/​aesthetic/​practical utility/​simplicity] rather than being a blind search in conceptspace.

After you continue this process of abstracting/concretizing your structure across many different levels of abstraction (each conducive to describing different sorts of domains of discourse), you end up with an extremely rich notion of space, some versions of which are far removed from your original $\mathbb{R}^3$.

b) Notion of “Optimization”

In philosophy, I think the case is much more obvious. Alex Flint’s ground of optimization, for example, compares different definitions of optimization primarily by means of domain-of-discourse expansion, i.e. case analysis of each definition in “weird” scenarios, seeing which notions generalize better and fit our intuitive sense of what the word “optimization” should mean.

c) Epistemic motif of “Theorem as Definition”

A theorem in one structure can become a definition in another. This is highly related to the earlier motif: we can consider some theorem as a consequence of the concrete [features/axioms] of the original structure, and view [the act of turning the theorem into a definition in a much more [relaxed/general] structure] as abstracting out [the particular concept that the theorem captured in the original structure]. e.g.,

  • Exponentiation: Start with the notion of repeated multiplication. Discover that the exponential function has a Taylor series expansion. Notice that the latter has a much more general domain of discourse, and use it to generalize the notion to complex numbers, operators, etc. (see the sketch after this list).

  • Fractal Dimensions: Start with the observation that the dimension of a space corresponds to scaling exponents, and use it to define a measure of “fractal-ness.”
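Written out (standard formulas, included here only to illustrate the motif): for exponentiation, the identity

$$e^{x} = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

is a theorem for real $x$, and taking the right-hand side as the definition lets $x$ be a complex number or even a square matrix/operator $A$, giving $e^{A} = \sum_{n \ge 0} A^n / n!$. For dimension, the fact that covering a solid $d$-dimensional cube with boxes of side $\epsilon$ takes roughly $N(\epsilon) \sim \epsilon^{-d}$ boxes becomes, once turned into a definition, the box-counting dimension

$$\dim_{\mathrm{box}}(S) = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log(1/\epsilon)},$$

which need not be an integer (the Sierpiński triangle has $\log 3 / \log 2 \approx 1.585$).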

It’s The Rising Sea!

Why does it work?

I can think of a couple of reasons why this process works (inspired from here), with “why” in the sense of why it is effective as a human process for epistemology.

Just by selection, the concepts we end up modeling (with scientists also building intuitions about what can be modeled in the first place) tend to be ones that can be captured by a small number of desiderata.

If the desiderata rule out all possibilities, or rule in too much, we can correct our intuitions/desiderata, e.g. by discovering assumptions that subtly restricted the form of our structures in ways we hadn’t noticed.

And by finding such “minimal conditions” that our structure must satisfy, the structure becomes more conducive to having new structure added back in, so that it can be made more concrete for application to a new domain of discourse. From there, we can list examples and find new theorems, enriching the intuition that feeds back into this loop.