Progress and preservation in IDA


Overview

This post arose out of my attempts to understand IDA and ways it could fail. It might help you do the same and could provide useful vocabulary for discussing desiderata for IDA.

We want IDA to satisfy progress (decomposition should make answering questions easier) and preservation (semantics should be retained across transformations). We need progress in each individual decomposition and, furthermore, repeated decomposition must eventually simplify each question to the point that a human can answer it directly. Likewise, each individual decomposition and aggregation of questions and answers must introduce no more than a bounded amount of semantic drift, and so must the full series of decompositions and aggregations.

IDA

Iterated distillation and amplification (henceforth IDA) is a proposal for improving the capability of human-machine systems to superhuman levels in complex domains where even evaluating system outputs may be beyond unaugmented human capabilities. For a detailed explanation of the mechanics, I’ll refer you to the original paper just linked, section 0 of Machine Learning Projects for Iterated Distillation and Amplification, or one of the many other explanations floating around the Web.

We can view IDA as dynamic programming with function approximation[1] instead of a tabular cache. Just like the cache in dynamic programming, the machine learning component of IDA is a performance optimization. We can excise it and look at just the divide-and-conquer aspect of IDA in our analysis. Then this simplified IDA roughly consists of: (1) repeatedly decomposing tasks into simpler subtasks; (2) eventually completing sufficiently simple subtasks; and (3) aggregating outputs from subtasks into an output which completes the original, undecomposed task. We’ll examine this simplified model[2] in the rest of the post. (If you’d like a more concrete description of the divide-and-conquer component of IDA, there’s a runnable Haskell demo here.)
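To make the simplified model concrete, here’s a minimal sketch in the same spirit as that demo (though not taken from it). The toy question type, the humanAnswerable threshold, and the three operations are hypothetical stand-ins chosen for illustration.

```haskell
-- A toy instantiation of simplified IDA: questions are "sum this list
-- of integers" and a "human" can only answer lists of length <= 1.
type Question = [Int]
type Answer   = Int

humanAnswerable :: Question -> Bool
humanAnswerable q = length q <= 1

-- (2) Complete a sufficiently simple subtask directly.
answerDirectly :: Question -> Answer
answerDirectly []  = 0
answerDirectly [x] = x
answerDirectly _   = error "question too hard to answer directly"

-- (1) Decompose a task into two simpler subtasks.
decompose :: Question -> [Question]
decompose q = let (l, r) = splitAt (length q `div` 2) q in [l, r]

-- (3) Aggregate subtask outputs into an output for the original task.
aggregate :: [Answer] -> Answer
aggregate = sum

-- The divide-and-conquer core: answer directly when possible,
-- otherwise decompose, recurse, and aggregate.
solve :: Question -> Answer
solve q
  | humanAnswerable q = answerDirectly q
  | otherwise         = aggregate (map solve (decompose q))

main :: IO ()
main = print (solve [1 .. 100])  -- prints 5050
```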

Safety is progress plus preservation

For type systems, the slogan is “safety is progress plus preservation”. Because we’re using this only as a cute analogy and organizing framework, we’ll not get into the details. But for type systems:

  • Progress: “A well-typed term is [...] either [...] a value or it can take a step according to the evaluation rules.”

  • Preservation: “If a well-typed term takes a step of evaluation, then the resulting term is also well typed.”

(Both from (Pierce 2002).)

We also need progress and preservation in IDA. Roughly:

  • Progress: A question is easy enough to be answered directly or can be decomposed into easier subquestions.

  • Preservation: The answer from aggregating subquestion answers is just as good as answering the original question.

Let’s try to make this more precise.

There are several ways we might interpret “easier”. One that seems to have some intuitive appeal is that one question is easier than another if it can be answered with fewer computational resources[3].

Regardless, we’ll say that we satisfy answer progress if a question Q is decomposed into subquestions such that every subquestion in the decomposition is not harder than Q and at least one is easier. This is the most obvious thing that IDA is supposed to provide: a way to make hard problems tractable.

But just noting the existence of such a decomposition isn’t enough. We also need to be able to find and carry out such a decomposition more easily than answering the original question. We’ll call this property decomposition progress. Aggregation progress demands that we be able to find and carry out an aggregation of the subquestion answers that’s easier than answering the original question.

Each of these three properties is necessary, but they are not even jointly sufficient for progress[4]: it could be the case that each of decomposition, answering, and aggregation is easier than answering the original question but that all three together are not.
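As a sketch of how these checks might look, suppose we had some numeric difficulty measure (an assumption; footnote 3’s caveat applies) and could likewise measure the cost of finding and carrying out a decomposition or aggregation:

```haskell
-- Answer progress: every subquestion is no harder than the original
-- question, and at least one is strictly easier.
answerProgress :: (q -> Double) -> q -> [q] -> Bool
answerProgress difficulty q subs =
     all (\s -> difficulty s <= difficulty q) subs
  && any (\s -> difficulty s <  difficulty q) subs

-- Decomposition progress and aggregation progress both say the cost
-- of finding and carrying out the step is less than answering q.
stepProgress :: (q -> Double) -> Double -> q -> Bool
stepProgress difficulty stepCost q = stepCost < difficulty q
```

Footnote 4’s point shows up here too: under an additive cost model we’d instead want the decomposition, answering, and aggregation costs to sum to less than the difficulty of the original question.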

We can also view this graphically. In the figure below, representing a single step of decomposition and aggregation, we want the computation represented by the arrow from the original question Q to the corresponding answer A to be harder than any of the computations represented by the other arrows.

A graph showing the relationships between original and decomposed questions and answers.

Answer progress, decomposition progress, and aggregation progress mean that the top arrow from Q to A represents a more difficult computation than each of the bottom, left, and right arrows, respectively.

There are also several possible interpretations of “as good as”. To start with, let’s assume it means that one question-answer pair is just as good as another if they have exactly the same denotation.

We say that a decomposition satisfies decomposition preservation if the denotations of (Q, A) and (Q, aggregate(answer(decompose(Q)))) are identical, where (Q, A) is a question-answer pair, aggregate is an ideal aggregation, and answer is an ideal answering algorithm. We say that an aggregation satisfies aggregation preservation if the denotations of (Q, A) and (Q, aggregate(answer(decompose(Q)))) are identical, where decompose is an ideal decomposition and answer is an ideal answering algorithm.

Explained differently, decomposition preservation requires that the below diagram commute while assuming that answering and aggregation are ideal. Aggregation preservation requires that the diagram commute while assuming that answering and decomposition are ideal.

A graph showing the relationships between original and decomposed questions and answers.

Decomposition preservation means that the diagram commutes with an ideal bottom and right arrow. Aggregation preservation means that the diagram commutes with an ideal bottom and left arrow.
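Here’s a sketch of the decomposition-preservation check under exact denotational equality, assuming we somehow had oracles for the ideal answering and aggregation algorithms (a strong assumption; these are idealizations, not available functions):

```haskell
-- Does the diagram commute for this decomposition? The ideal bottom
-- (answering) and right (aggregation) arrows are taken as given, and
-- the left arrow (the decomposition under test) is checked against
-- answering the original question directly.
decompositionPreserves
  :: Eq a
  => (q -> a)    -- idealAnswer: ideal answering algorithm
  -> ([a] -> a)  -- idealAggregate: ideal aggregation
  -> (q -> [q])  -- decomposition under test
  -> q
  -> Bool
decompositionPreserves idealAnswer idealAggregate decompose q =
  idealAggregate (map idealAnswer (decompose q)) == idealAnswer q
```

The aggregation-preservation check is symmetric: hold the decomposition and answering arrows ideal and test the aggregation.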

Progress alone actually isn’t sufficient for our purposes: it could be the case that a series of decompositions produces easier and easier questions but never produces questions that are simple enough for a human to answer directly. We name the requirement that our decompositions eventually produce human-answerable subquestions termination.
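For a toy illustration of this failure mode (with made-up numbers and a hypothetical difficulty threshold of 1 for human answerability):

```haskell
-- Each decomposition step is strictly easier, so progress holds, but
-- difficulties only approach 1 from above and never cross it.
difficulties :: [Double]
difficulties = iterate (\d -> 1 + (d - 1) / 2) 100  -- 100, 50.5, 25.75, ...

humanAnswerable :: Double -> Bool
humanAnswerable d = d <= 1

-- takeWhile (not . humanAnswerable) difficulties never ends: no
-- question in the series is ever answerable, so termination fails.
```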

Now let’s relax our definition of “as good as” a bit since it’s quite demanding. Instead of requiring that the question-answer pairs have exactly the same denotation, we allow some wiggle room. We could do this in a variety of ways including: (1) suppose there is some metric space of meanings and require that the denotations are within ε of each other; (2) require that acting on either question-answer pair produces the same expected utility; (3) require that the utilities produced by acting on each question-answer pair are within ε of each other. For the sake of discussion let’s assume something like (1) or (3).

Hopefully, the mutatis mutandis for decomposition preservation and aggregation preservation with this new interpretation of “as good as” is clear enough. (Briefly, the result of decomposing, answering, and aggregating should be within ε of the original answer.)
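In code, the earlier check just swaps equality for a distance bound, assuming some hypothetical metric dist on denotations (interpretation (1) above):

```haskell
-- Relaxed decomposition preservation: the aggregated answer must land
-- within eps of the ideal answer under the assumed metric.
epsPreserves
  :: (a -> a -> Double)  -- dist: metric on meanings (assumed to exist)
  -> Double              -- eps: allowed semantic skew
  -> (q -> a)            -- idealAnswer
  -> ([a] -> a)          -- idealAggregate
  -> (q -> [q])          -- decomposition under test
  -> q
  -> Bool
epsPreserves dist eps idealAnswer idealAggregate decompose q =
  dist (idealAggregate (map idealAnswer (decompose q))) (idealAnswer q) <= eps
```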

Unfortunately, the new interpretation means that the single-step properties (i.e. those covering just one level of decomposition and aggregation) are no longer sufficient to guarantee multi-step preservation. It could be the case that each step introduces skew less than ε but that the cumulative skew between the original question and a fully decomposed set of human-answerable questions exceeds ε. We’ll call the requirement that the series of decompositions maintains skew less than ε multi-step decomposition preservation, and the requirement that the series of aggregations maintains skew less than ε multi-step aggregation preservation.

A graph showing the relationships between original and decomposed questions and answers.

Multi-step decomposition preservation means that the left-hand side of the diagram doesn’t break commutativity. Multi-step aggregation preservation means that the right-hand side doesn’t break commutativity.
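The accumulation problem is just arithmetic; here it is with illustrative numbers (not from the post), where skews add in the worst case:

```haskell
-- Twenty steps, each within a per-step skew of 0.09 < eps = 0.1, can
-- accumulate to 20 * 0.09 = 1.8, far beyond eps. Single-step
-- preservation alone doesn't give multi-step preservation.
main :: IO ()
main = do
  let eps = 0.1; perStep = 0.09; steps = 20
  print (perStep < eps)                        -- True: each step looks fine
  print (fromIntegral steps * perStep <= eps)  -- False: 1.8 > 0.1
```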

Summary

For every question, there must be a full decomposition into human-answerable questions satisfying termination, and each decomposed set of questions along the way must satisfy each of answer progress, decomposition progress, and aggregation progress. That full decomposition must satisfy multi-step decomposition preservation and the corresponding full aggregation must satisfy multi-step aggregation preservation. Each decomposition and aggregation along the way must satisfy decomposition preservation and aggregation preservation.

A graph showing the relationships between original and decomposed questions and answers.

The progress and preservation properties apply to single steps of decomposition and aggregation. The termination and multi-step preservation properties apply to repeated decomposition and aggregation.

References

Pierce, Benjamin C. 2002. Types and Programming Languages. MIT Press.


  1. Asking whether IDA problems have the optimal substructure and overlapping subproblems that dynamic programming requires also seems fruitful.

  2. This should be okay because function approximation only makes the problems of progress and preservation harder.

  3. Of course, “computational resources” is a leaky abstraction.

  4. If we settled on a precise notion of “easier”, we could specify what would be sufficient. For example, if difficulties simply add, the overall requirement would be that the sum of the difficulties of decomposition, answering, and aggregation is no more than the difficulty of answering the original question directly.