Public Static: What is Abstraction?

Author’s Note: Most of the posts in this sequence are essentially a log of work-in-progress. This post is intended as a more presentable (“public”) and higher-confidence (“static”) write-up of some formalizations of abstraction. Much of the material has appeared in other posts; the first two sections in particular are drawn almost verbatim from the opening “What is Abstraction?” post.

Let’s start with a few examples (borrowed from here) to illustrate what we’re talking about:

  • We have a gas consisting of some huge number of particles. We throw away information about the particles themselves, instead keeping just a few summary statistics: average energy, number of particles, etc. We can then make highly precise predictions about things like e.g. pressure just based on the reduced information we’ve kept, without having to think about each individual particle. That reduced information is the “abstract layer”—the gas and its properties.

  • We have a bunch of transistors and wires on a chip. We arrange them to perform some logical operation, like maybe a NAND gate. Then, we throw away information about the underlying details, and just treat it as an abstract logical NAND gate. Using just the abstract layer, we can make predictions about what outputs will result from what inputs. Note that there’s some fuzziness: 0.01 V and 0.02 V are both treated as logical zero, and in rare cases there will be enough noise in the wires to get an incorrect output.

  • I tell my friend that I’m going to play tennis. I have ignored a huge amount of information about the details of the activity—where, when, what racket, what ball, with whom, all the distributions of every microscopic particle involved—yet my friend can still make some reliable predictions based on the abstract information I’ve provided.

  • When we abstract formulas like “1+1=2*1” and “2+2=2*2” into “n+n=2*n”, we’re obviously throwing out information about the value of n, while still making whatever predictions we can given the information we kept. This is what abstraction is all about in math and programming: throw out as much information as you can, while still maintaining the core “prediction”—i.e. the theorem or algorithm.

  • I have a street map of New York City. The map throws out lots of info about the physical streets: street width, potholes, power lines and water mains, building facades, signs and stoplights, etc. But for many questions about distance or reachability on the physical city streets, I can translate the question into a query on the map. My query on the map will return reliable predictions about the physical streets, even though the map has thrown out lots of info.

The general pattern: there’s some ground-level “concrete” model (or territory), and an abstract model (or map). The abstract model throws away or ignores information from the concrete model, but in such a way that we can still make reliable predictions about some aspects of the underlying system.

Notice that the predictions of the abstract models, in most of these examples, are not perfectly accurate. We’re not dealing with the sort of “abstraction” we see in e.g. programming or algebra, where everything is exact. There are going to be probabilities involved.

In the language of embedded world-models, we’re talking about multi-level models: models which contain both a notion of “table”, and of all the pieces from which the table is built, and of all the atoms from which the pieces are built. We want to be able to use predictions from one level at other levels (e.g. predict bulk material properties from microscopic structure and/or macroscopic measurements, or predict from material properties whether it’s safe to sit on the table), and we want to move between levels consistently.

Formalization: Starting Point

To repeat the intuitive idea: an abstract model throws away or ignores information from the concrete model, but in such a way that we can still make reliable predictions about some aspects of the underlying system.

So to formalize abstraction, we first need some way to specify which “aspects of the underlying system” we wish to predict, and what form the predictions take. The obvious starting point for predictions is probability distributions. Given that our predictions are probability distributions, the natural way to specify which aspects of the system we care about is via a set of events or logic statements for which we calculate probabilities. We’ll be agnostic about the exact types for now, and just call these “queries”.

That leads to a rough construction. We start with some low-level model M and a set of queries {Q}. From these, we construct a minimal high-level model M' by keeping exactly the information relevant to the queries, and throwing away all other information. By the minimal map theorems, we can represent M' directly by the full set of probabilities P[Q|M]; M' and P[Q|M] contain exactly the same information. Of course, in practical examples, the probabilities P[Q|M] will usually have some more compact representation, and M' will usually contain some extraneous information as well.
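To make this concrete, here is a minimal sketch in Python (the toy distribution, the query set, and all names are hypothetical illustrations, not part of the formalism): the high-level model M' is represented directly as the table of query probabilities P[Q|M].

```python
import itertools

# Toy low-level model M: a joint distribution over three binary variables,
# represented as a dict mapping each state to its probability.
M = {}
for state in itertools.product([0, 1], repeat=3):
    a, b, c = state
    # Hypothetical distribution: b and c are each correlated with a.
    M[state] = 0.5 * (0.7 if b == a else 0.3) * (0.6 if c == a else 0.4)

# Queries: the events whose probabilities the abstraction must preserve.
queries = {
    "at least two ones": lambda s: sum(s) >= 2,
    "first var is 1": lambda s: s[0] == 1,
}

# The minimal high-level model M' is just the table of probabilities P[Q|M]:
# exactly the query-relevant information, and nothing else.
M_high = {name: sum(p for s, p in M.items() if q(s)) for name, q in queries.items()}
print(M_high)  # answers every query without consulting M again
```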

To illustrate a bit, let’s identify the low-level model, class of queries, and high-level model for a few of the examples from earlier.

  • Ideal Gas:

    • Low-level model is the full set of molecules, their interaction forces, and a distribution representing our knowledge about their initial configuration.

    • Class of queries consists of combinations of macroscopic measurements, e.g. one query might be “pressure = 12 torr & volume = 1 m^3 & temperature = 110 K”.

    • For an ideal gas, the high-level model can be represented by e.g. temperature, number of particles (of each type if the gas is mixed), and container volume. Given these values and assuming a near-equilibrium initial configuration distribution, we can predict the other macroscopic measurables in the queries (e.g. pressure); see the sketch just after this list.

  • Tennis:

    • Low-level model is the full microscopic configuration of me and the physical world around me as I play tennis (or whatever else I do).

    • Class of queries is hard to sharply define at this point, but includes things like “John will answer his cell phone in the next hour”, “John will hold a racket and hit a fuzzy ball in the next hour”, “John will play Civ for the next hour”, etc—all the things whose probabilities change on hearing that I’m going to play tennis.

    • High-level model is just the sentence “I am going to play tennis”.

  • Street Map:

    • Low-level model is the physical city streets.

    • Class of queries includes things like “shortest path from Times Square to Central Park starts by following Broadway”, “distance between the Met and the Hudson is less than 1 mile”, etc—all the things we can deduce from a street map.

    • High-level model is the map. Note that the physical map also includes some extraneous information, e.g. the positions of all the individual atoms in the piece of paper/smartphone.
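Picking up the ideal gas example from the list above, here is a minimal sketch assuming the ideal gas law P = N·k·T/V, with arbitrary illustrative numbers: the high-level model keeps only temperature, particle count, and volume, and answers pressure queries from that summary alone.

```python
# Minimal sketch of the ideal-gas high-level model: keep only (T, N, V) and
# answer pressure queries via the ideal gas law P = N*k*T/V.
# All numbers are arbitrary, and a near-equilibrium gas is assumed.
K_BOLTZMANN = 1.380649e-23  # J/K

def pressure_pa(temperature_k, n_particles, volume_m3):
    """Predict pressure from the high-level summary (T, N, V) alone."""
    return n_particles * K_BOLTZMANN * temperature_k / volume_m3

# e.g. checking a query like "pressure = 12 torr & volume = 1 m^3 &
# temperature = 110 K" for a hypothetical particle count:
p = pressure_pa(temperature_k=110, n_particles=1.05e24, volume_m3=1.0)
print(p / 133.322, "torr")  # 1 torr = 133.322 Pa; prints roughly 12 torr
```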

Already with the second two examples there seems to be some “cheating” going on in the model definition: we just define the query class as all the events/logic statements whose probabilities change based on the information in the map. But if we can do that, then anything can be a “high-level map” of any “low-level territory”, with the queries taken to be the events/statements about the territory which the map actually has some information about—not a very useful definition!

Information About Things “Far Away”

In order for abstraction to actually be useful, we need some efficient way to know which queries the abstract model can accurately answer, without having to directly evaluate each query within the low-level model.

In practice, we usually seem to have a notion of which variables are “far apart”, in the sense that any interactions between the two are mediated by many in-between variables.

In this graphical model, interactions between the variables X and the variables Y are mediated by the noisy variables Z. Abstraction throws out information from X which is wiped out by noise in Z, keeping only the information relevant to Y.

The mediating variables are noisy, so they wipe out most of the “fine-grained” information present in the variables of interest. We can therefore ignore that fine-grained information when making predictions about things far away. We just keep around whatever high-level signal makes it past the noise of mediating variables, and throw out everything else, so long as we’re only asking questions about far-away variables.

An example: when I type “4+3” in a python shell, I think of that as adding two numbers, not as a bunch of continuous voltages driving electric fields and current flows in little patches of metal and doped silicon. Why? Because, if I’m thinking about what will show up on my monitor after I type “4+3” and hit enter, then the exact voltages and current flows on the CPU are not relevant. This remains true even if I’m thinking about the voltages driving individual pixels in my monitor—even at a fairly low level, the exact voltages in the arithmetic-logic unit on the CPU aren’t relevant to anything more than a few microns away—except for the high-level information contained in the “numbers” passed in and out. Information about exact voltages in specific wires is quickly wiped out by noise within the chip.

Another example: if I’m an astronomer predicting the trajectory of the sun, then I’m presumably going to treat other stars as point-masses. At such long distances, the exact mass distribution within the star doesn’t really matter—except for the high-level information contained in the total mass, momentum and center-of-mass location.
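A minimal numerical sketch of that point (the two-clump “star” and all numbers are hypothetical): as distance grows, the pull of an extended mass distribution converges to the pull of a point mass with the same total mass at the center of mass.

```python
# Compare the gravitational pull of an extended "star" (two clumps of mass)
# against a single point mass at its center of mass. Toy numbers throughout.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pull_extended(d):
    m = 1e30  # two equal clumps, offset +/- 1e9 m along the line of sight
    return G * m / (d - 1e9) ** 2 + G * m / (d + 1e9) ** 2

def pull_point(d):
    return G * 2e30 / d ** 2  # total mass, at the center of mass

for d in [1e10, 1e11, 1e12, 1e13]:
    rel_err = abs(pull_extended(d) - pull_point(d)) / pull_point(d)
    print(f"distance {d:.0e} m: relative error {rel_err:.1e}")
# The fine-grained mass distribution becomes irrelevant with distance;
# only the high-level summary (total mass, center of mass) survives.
```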

Formalizing this in the same language as the previous section:

  • We have some variables X and Y in the low-level model.

  • Interactions between X and Y are mediated by noisy variables Z.

  • Noise in Z wipes out most fine-grained information about X, so only the high-level summary f(X) is relevant to Y.

Mathematically: P[Y | f(X)] = P[Y | X] for any Y which is “not too close” to X, i.e. any Y which does not overlap with Z (or with X itself). Our high-level model replaces X with f(X), and our set of valid queries is the whole joint distribution of Y given f(X).
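Here is a minimal sketch of that condition by exact enumeration (the chain structure and all numbers are assumed purely for illustration): X is an analog voltage, f(X) is its digital rounding, Z is a noisy wire, and Y is a far-away reader. Because the noise in Z wipes out nearly everything about X except f(X), conditioning on f(X) gives almost exactly the same predictions as conditioning on X itself.

```python
# Toy chain X -> Z -> Y, computed by exact enumeration.
xs = [0.00, 0.02, 0.98, 1.00]   # equally likely analog voltages
f = lambda x: round(x)          # high-level summary: the digital bit

def p_z_given_x(z, x):
    # The wire mostly transmits the digital bit, plus a tiny analog leak;
    # noise masks almost all fine-grained information about x.
    p1 = 0.05 + 0.9 * f(x) + 0.001 * (x - f(x))
    return p1 if z == 1 else 1 - p1

def p_y_given_z(y, z):
    return 0.99 if y == z else 0.01  # far-away reader, slightly noisy

def p_y_given_x(y, x):
    return sum(p_y_given_z(y, z) * p_z_given_x(z, x) for z in (0, 1))

def p_y_given_fx(y, bit):
    group = [x for x in xs if f(x) == bit]
    return sum(p_y_given_x(y, x) for x in group) / len(group)

for x in xs:
    print(f"x={x:.2f}  P[Y=1|X=x]={p_y_given_x(1, x):.4f}"
          f"  P[Y=1|f(X)={f(x)}]={p_y_given_fx(1, f(x)):.4f}")
```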

Now that we have two definitions, it’s time to start the Venn diagram of definitions of abstraction.

So far, we have:

  • A high-level model throws out information from a low-level model in such a way that some set of queries can still be answered correctly: P[Q | M'] = P[Q | M].

  • A high-level model throws out information from some variable X in such a way that all information about “far away” variables Y is kept: P[Y | f(X)] = P[Y | X].

Systems View

The definition in the previous section just focuses on abstracting a single variable X. In practice, we often want to take a system-level view, abstracting a whole bunch of low-level variables (or sets of low-level variables) all at once. This doesn’t involve changing the previous definition, just applying it to many variables in parallel.

We have multiple non-overlapping sets of low-level variables X_i, each with a set of “nearby” variables Z_i. Abstraction will only retain information from each X_i relevant to Y’s which do not overlap the corresponding Z_i. In particular, this means queries P[X_j | X_i] will only be maintained by the abstraction if X_j is not “close to” X_i, i.e. if X_j does not overlap Z_i. In the notation below, these are called X_i, to remind us that they are the low-level variables.

Rather than just one variable of interest X, we have many low-level variables (or non-overlapping sets of variables) X_i and their high-level summaries f_i(X_i). For each of the X_i, we have some set of variables Z_i “nearby” X_i, which mediate its interactions with everything else. Our “far-away” variables Y are now any far-away X_j’s, so we want

P[X_J | f_I(X_I)] = P[X_J | X_I]

for any sets of indices I and J which are “far apart”—meaning that X_J does not overlap any X_I or Z_I (and vice versa).

(Notation: I will use lower-case indices like i, j for individual variables, and upper-case indices like I, J to represent sets of variables. I will also treat any single index interchangeably with the set containing just that index.)

For instance, if we’re thinking about wires and transistors on a CPU, we might look at separate chunks of circuitry. Voltages in each chunk of circuitry are X_i, and f_i(X_i) summarizes the binary voltage values. Z_i are voltages in any components physically close to chunk i on the chip. Anything physically far away on the chip will depend only on the binary voltage values in the components, not on the exact voltages.

The main upshot of all this is that we can rewrite the math in a cleaner way: as a (partial) factorization. The low-level components are conditionally independent given the high-level summaries, so:

P[X_I | f_I(X_I)] = ∏_{i ∈ I} P[X_i | f_i(X_i)]

This condition only needs to hold when I picks out indices such that X_i does not overlap Z_j for any i, j in I (i.e. we pick out a subset of the X_i’s such that no two are “close together”). Note that we can pick any set of indices I which satisfies this condition—so we really have a whole family of factorizations of marginal distributions in which no two variables are “close together”. See the appendix to this post for a proof of the formula.

In English: any set of low-level variables X_I which are all “far apart” are independent given their high-level summaries f_I(X_I). Intuitively, the picture looks like this:

The abstraction conditions let us swap low-level variables with their high-level summaries, as long as all swapped variables and any query variables are all “far apart”.

We pick some set of low-level variables X_I which are all far apart, and compute their summaries f_I(X_I). By construction, we have a model in which each of the high-level variables f_i(X_i) is a leaf in the graphical model, determined only by the corresponding low-level variables X_i. But thanks to the abstraction condition, we can independently swap any subset of the summaries with their corresponding low-level variables—assuming that all of them are “far apart”.

Returning to the digital circuit example: if we pick any subset of the wires and transistors on a chip, such that no two are too physically close together, then we expect that their exact voltages are roughly independent given the high-level summary of their digital values.
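Here is a minimal sketch of this kind of check by exact enumeration (the joint distribution is an assumed toy, built so that the two chunks interact only through their rounded summaries): within every cell of summary values, the conditional distribution factorizes.

```python
import itertools
from collections import defaultdict

# Two "chunks" X1, X2 take analog values; their only interaction runs
# through the digital summaries f(x) = round(x), so they should be
# conditionally independent given both summaries.
vals = [0.0, 0.1, 0.9, 1.0]
f = lambda x: round(x)

def weight(x1, x2):
    agree = 2.0 if f(x1) == f(x2) else 0.5  # interaction via summaries only
    return agree * (1 + x1) * (1 + 2 * x2)

Z = sum(weight(a, b) for a in vals for b in vals)
P = {(a, b): weight(a, b) / Z for a in vals for b in vals}

for bits in itertools.product([0, 1], repeat=2):
    cell = {s: p for s, p in P.items() if (f(s[0]), f(s[1])) == bits}
    total = sum(cell.values())
    cond = {s: p / total for s, p in cell.items()}
    m1, m2 = defaultdict(float), defaultdict(float)
    for (a, b), p in cond.items():
        m1[a] += p
        m2[b] += p
    # check P[x1, x2 | summaries] == P[x1 | summaries] * P[x2 | summaries]
    ok = all(abs(cond[(a, b)] - m1[a] * m2[b]) < 1e-9 for (a, b) in cond)
    print(f"summaries {bits}: factorizes = {ok}")
```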

We’ll add this to our Venn diagram as an equivalent formulation of the previous definition.

I have found this formulation to be the most useful starting point in most of my own thinking, and it will be the jumping-off point for our last two notions of abstraction in the next two sections.

Causality

So far we’ve only talked about “queries” on the joint distribution of variables. Another natural step is to introduce causal structure into the low-level model, and require interventional queries to hold on far apart variables.

There are some degrees of freedom in which interventional queries hold on far apart variables. One obvious answer is “all of them”:

P[X_J | do(X_I = x_I)] = P[X_J | do(X_I = x_I')] whenever f_I(x_I) = f_I(x_I')

… with the same conditions on I and J as before, plus the added condition that the indices within I and within J also be far apart from each other. In other words: the effect of a low-level intervention on far-away variables depends only on its high-level summary. This is the usual requirement in math/programming abstraction, but it’s too strong for many real-world applications. For instance, when thinking about fluid dynamics, we don’t expect our abstractions to hold when all the molecules in a particular cell of space are pushed into the corner of that cell. Instead, we could weaken the low-level intervention to sample from low-level states compatible with the high-level intervention:

P[X_J | do(f_I(X_I) = s)] = E_{x_I ~ P[X_I | f_I(X_I) = s]} [ P[X_J | do(X_I = x_I)] ]

We could even have low-level interventions sample from some entirely different distribution, to reflect e.g. a physical machine used to perform the interventions.
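Here is a minimal simulation sketch of the strongest version of this requirement (the structural model and all numbers are assumed): we intervene directly on the low-level variable X, and check that interventions sharing the same summary f(x) produce essentially the same far-away distribution.

```python
import random

random.seed(0)
f = lambda x: round(x)  # high-level summary: the digital bit

def sample_y_given_do_x(x, n=100_000):
    """Low-level intervention do(X = x); Y observed across a noisy mediator."""
    count = 0
    for _ in range(n):
        z = f(x) if random.random() < 0.95 else 1 - f(x)  # noisy wire
        y = z if random.random() < 0.99 else 1 - z        # far-away reader
        count += y
    return count / n

for x in [0.00, 0.02, 0.98, 1.00]:
    print(f"do(X={x:.2f}): P[Y=1] ~ {sample_y_given_do_x(x):.3f}  (f(x)={f(x)})")
# Interventions with the same summary give (near-)identical downstream
# effects, so do() queries can be answered at the level of f(X).
```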

Another post will talk more about this, but it turns out that we can say quite a bit about causal abstraction while remaining agnostic to the details of the low-level interventions. Any of the above interventional query requirements have qualitatively-similar implications, though obviously some are stronger than others.

In day-to-day life, causal abstraction is arguably more common than non-causal. In fully deterministic problems, validity of interventional queries is essentially the only constraint (though often in guises which do not explicitly mention causality, e.g. functional behavior or logic). For instance, suppose I want to write a python function to sort a list. The only constraint is the abstract input/output behavior, i.e. the behavior of the designated “output” under interventions on the designated “inputs”. The low-level details—i.e. the actual steps performed by the algorithm—are free to vary, so long as those high-level interventional constraints are satisfied.
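Here is that sorting example as a minimal property test (my_sort is one arbitrary, hypothetical low-level implementation): only input/output behavior is constrained, so any implementation passing the test could be swapped in.

```python
import random

def my_sort(xs):
    """One arbitrary low-level implementation (insertion sort)."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# The abstract constraint: for any input we set ("intervene on"), the
# designated output must match the specification. Internal steps are free.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert my_sort(xs) == sorted(xs)
print("my_sort satisfies the abstract input/output constraint")
```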

This generalizes to other design/engineering problems: the desired behavior of a system is usually some abstract, high-level behavior under interventions. Low-level details are free to vary so long as the high-level constraints are satisfied.

Exact Abstraction

Finally, one important special case. In math and programming, we typically use abstractions with sharper boundaries than most of those discussed here so far. Prototypical examples:

  • A function in programming: behavior of everything outside the function is independent of the function’s internal variables, given a high-level summary containing only the function’s inputs and outputs. Same for private variables/methods of a class.

  • Abstract algebra: many properties of mathematical objects hold independent of the internal details of the object, given certain high-level summary properties—e.g. the group axioms, or the ring axioms, or …

  • Interfaces for abstract data structures: the internal organization of the data structure is irrelevant to external users, given the abstract “interface”—a high-level summary of the object’s behavior under different inputs (a.k.a. different interventions).

In these cases, there are no noisy intermediate variables, and no notion of “far away” variables. There’s just a hard boundary: the internal details of high-level abstract objects do not interact with things of interest “outside” the object except via the high-level summaries.

We can easily cast this as a special case of our earlier notion of abstraction: the set Z of noisy intermediate variables is empty. The “high-level summary” f(X) of the low-level variables X contains all information relevant to any variables outside of X itself.
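A minimal code sketch of such a hard boundary (both implementations here are hypothetical): an external user who interacts only through the interface cannot distinguish two entirely different internal representations.

```python
class StackAsList:
    """Stack backed by a Python list (one choice of internals)."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class StackAsLinkedCells:
    """Same interface, completely different internals (linked cells)."""
    def __init__(self):
        self._head = None
    def push(self, x):
        self._head = (x, self._head)
    def pop(self):
        x, self._head = self._head
        return x

def drain(stack):
    # External user: sees only the push/pop interface, never the internals.
    for x in (1, 2, 3):
        stack.push(x)
    return [stack.pop() for _ in range(3)]

assert drain(StackAsList()) == drain(StackAsLinkedCells()) == [3, 2, 1]
```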

Of course, exact abstraction overlaps quite a bit with causal abstraction. Exact abstractions in math/programming are typically deterministic, so they’re mainly constrained by interventional predictions rather than distributional predictions.

Summary

We started with a very general notion of abstraction: we take some low-level model and abstract it into a high-level model by throwing away information in such a way that we can still accurately answer some queries. This is extremely general, but in order to actually be useful, we need some efficient way to know which queries are and are not supported by the abstraction.

That brought us to our next definition: abstraction keeps information relevant to “far away” variables. We imagine that interactions between the variable-to-be-abstracted X and things far away are mediated by some noisy “nearby” variables Z, which wipe out most of the information in X. So, we can support all queries on things far away by keeping only a relatively small summary f(X).

Applying this definition to a whole system, rather than just one variable, we find a clean formulation: all sets of far-apart low-level variables are independent given the corresponding high-level summaries.

Next, we extended this to causal abstraction by requiring that interventional queries also be supported.

Finally, we briefly mentioned the special case in which there are no noisy intermediate variables, so the abstraction boundary is sharp: there’s just the variables to be abstracted, and everything outside of them. This is the usual notion of abstraction in math and programming.

Appendix: System Formulation Proof

We start with two pieces. By construction, f_i(X_i) is calculated entirely from X_i, so

P[f_I(X_I) = s | X_I = x_I] = 1[s = f_I(x_I)]   (construction)

(where 1[·] denotes an indicator function)

… without any restriction on which subsets of the variables we look at. Then we also have the actual abstraction condition

P[X_J | f_I(X_I)] = P[X_J | X_I]   (abstraction)

… as long as X_J does not overlap X_I or Z_I.

We want to show that

P[X_I | f_I(X_I)] = ∏_{i ∈ I} P[X_i | f_i(X_i)]

… for any set of non-nearby variables X_I (i.e. any index set I in which all pairs of indices are far apart). In English: sets of far-apart low-level variables are independent given their high-level counterparts.

Let’s start with definitions of “far-apart” and “nearby”, so we don’t have to write them out every time:

  • Two sets of indices I and J are “far apart” if X_I and Z_I do not overlap X_J, and vice-versa. Individual indices can be treated as sets containing one element for purposes of this definition—so e.g. two indices or an index and a set of indices could be “far apart”.

  • Indices and/or sets of indices are “nearby” if they are not far apart.

As before, I will use capital letters for sets of indices and lower-case letters for individual indices, and I won’t distinguish between a single index and the set containing just that index.

With that out of the way, we’ll prove a lemma:

P[X_J | X_I, f_K(X_K)] = P[X_J | f_I(X_I), f_K(X_K)]

… for any X_J far apart from X_I, with X_K also far apart from X_I (though X_J and X_K need not be far apart from each other). This lets us swap high-level with low-level given variables as we wish, so long as they’re all far apart from each other and from the query variables. Proof:

P[X_J, f_K(X_K) = s | X_I] = Σ_{x_K : f_K(x_K) = s} P[X_J, X_K = x_K | X_I]   (by construction)

P[X_J, X_K | X_I] = P[X_J, X_K | f_I(X_I)]   (by abstraction)

P[X_J, f_K(X_K) = s | f_I(X_I)] = Σ_{x_K : f_K(x_K) = s} P[X_J, X_K = x_K | f_I(X_I)]   (by construction)

By taking the abstraction condition with query set (X_J, X_K) and then marginalizing out unused variables, this becomes

P[X_J, f_K(X_K) | X_I] = P[X_J, f_K(X_K) | f_I(X_I)]

That’s the first half of our lemma. Other half:

P[X_J | f_K(X_K), X_I] = P[X_J, f_K(X_K) | X_I] / P[f_K(X_K) | X_I]   (by Bayes)

= P[X_J, f_K(X_K) | f_I(X_I)] / P[f_K(X_K) | f_I(X_I)]   (by first half)

= P[X_J | f_K(X_K), f_I(X_I)]   (by Bayes)

(The denominator here is just the first half with X_J empty.)

That takes care of the lemma.

Armed with the lemma, we can finish the main proof by iterating through the variables inductively:

P[X_j, X_I | f_j(X_j), f_I(X_I)]
= P[X_j | X_I, f_j(X_j), f_I(X_I)] · P[X_I | f_j(X_j), f_I(X_I)]   (by Bayes)

= P[X_j | X_I, f_j(X_j)] · P[X_I | f_j(X_j), f_I(X_I)]   (by construction)

= P[X_j | f_I(X_I), f_j(X_j)] · P[X_I | f_j(X_j), f_I(X_I)]   (by lemma)

= (P[f_I(X_I) | X_j, f_j(X_j)] · P[X_j | f_j(X_j)] / P[f_I(X_I) | f_j(X_j)]) · (P[f_j(X_j) | X_I, f_I(X_I)] · P[X_I | f_I(X_I)] / P[f_j(X_j) | f_I(X_I)])   (by Bayes)

= P[X_j | f_j(X_j)] · P[X_I | f_I(X_I)]   (by lemma & cancellation)

For the cancellation step, note that P[f_I(X_I) | X_j, f_j(X_j)] = P[f_I(X_I) | f_j(X_j)] and P[f_j(X_j) | X_I, f_I(X_I)] = P[f_j(X_j) | f_I(X_I)], by construction plus the first half of the lemma, so both fractions reduce to 1.

Here j and all the indices in I are far apart from each other. Starting with an empty I and applying this formula to each variable X_j, one-by-one, completes the proof.