Distance Functions are Hard

[Epistemic status: Describes a failed research approach I had a while ago, and my only purpose here is to warn people off from that way of thinking. Every now and then I see someone working on an AIS subproblem say “if only we had a distance function for things in domain X”, and my intuition is that they are probably doing a wrong-way reduction. But I only mean this as a soft guideline, and I’m only somewhat confident in my current thinking on this.]

~~~

Terminology: We use the terms distance or distance function to denote any function d that intuitively tells us how “dissimilar” any two members of a set X are (regardless of whether d is a metric).
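
For contrast, a metric on X is a function d required to satisfy, for all x, y, z in X, the usual axioms:

```latex
% Metric axioms for d : X \times X \to \mathbb{R}
\begin{aligned}
  d(x,y) &\ge 0                 && \text{(non-negativity)} \\
  d(x,y) &= 0 \iff x = y        && \text{(identity of indiscernibles)} \\
  d(x,y) &= d(y,x)              && \text{(symmetry)} \\
  d(x,z) &\le d(x,y) + d(y,z)   && \text{(triangle inequality)}
\end{aligned}
```

None of these axioms are assumed in what follows; the distance functions we care about only need to track intuitive dissimilarity.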

Counterfactual Worlds

Consider the counterfactual “If Lincoln were not assassinated, he would not have been impeached”. If we want to say this has a truth value, we need to imagine what such a counterfactual world would have looked like: is it one where Lincoln (somehow) survived his wounds, where John Wilkes Booth (somehow) missed, or where the plot was (somehow) discovered the day before, etc.? Somehow, we must pick out the world that is in some sense “closest” to our actual world, but it seems very difficult to compare any two such worlds in a principled way.

To formalize Functional Decision Theory (FDT), we likely need to have a better understanding of counterfactuals, although even in restricted mathematical contexts, we don’t have a satisfactory understanding of why “If 0 = 1...” simply returns incoherence, yet “If the Modularity Theorem were false...” seemingly conjures up a possible world that we feel we can reason about.

(Also, in terms of corrigibility, we are often interested in formalizing the notion of “low-impact” agents, and the naive idea one often has is to define a distance metric on counterfactual world-states, as in p. 5 of Concrete Problems in AI Safety).
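
To make that naive idea concrete, here is a minimal sketch of an impact penalty (my own toy rendition, not the construction from the paper): the task reward is docked in proportion to the distance between the world the agent actually produced and the counterfactual world in which it did nothing. Every name here, especially `world_distance`, is a placeholder for exactly the thing we do not know how to define.

```python
# A minimal sketch of the naive "low-impact via distance penalty" idea.
# `world_distance` is precisely the distance function on world-states
# that we do not know how to define; the toy version below is a crude proxy.

def impact_penalized_reward(reward, actual_world, counterfactual_world,
                            world_distance, impact_weight=1.0):
    """Penalize the task reward by the agent's estimated impact, measured
    as the distance from the world where the agent had done nothing."""
    impact = world_distance(actual_world, counterfactual_world)
    return reward - impact_weight * impact


def toy_world_distance(w1, w2):
    """Stand-in distance: L1 difference between feature vectors describing
    the two worlds -- exactly the kind of proxy that fails to track
    'distance in concept-space'."""
    return sum(abs(a - b) for a, b in zip(w1, w2))


print(impact_penalized_reward(10.0, [1, 0, 3], [1, 0, 0], toy_world_distance))
# -> 7.0: the agent is docked 3 units of "impact" under this crude proxy.
```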

Algorithmic Similarity

In the FDT framework, we do not view ourselves as a solitary agent, but as a function (or algorithm) that can be copied, modified, and read, and we wish to maximize the utility achieved by our algorithm. Minor details of our implementation that don’t affect our behavior (such as whether we are written in Java or Python) should not be decision-relevant, and if some algorithm does the same thing as us “most” of the time, then we would probably want to (for example) cooperate with it in a Prisoner’s Dilemma. Defining what it means for two algorithms to be similar remains an outstanding open problem.
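
As a toy illustration of why purely syntactic notions of similarity are too brittle, consider a bot (a sketch of my own, not a construction from the FDT literature) that cooperates exactly when the opponent’s source code is character-for-character identical to its own. Two such bots that behave identically but differ in an irrelevant detail will defect against each other.

```python
# Run as a script: inspect.getsource needs access to the source file.
import inspect

def exact_copy_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is character-for-character our
    own. Far too strict a notion of "same algorithm": renaming a variable
    (let alone porting from Java to Python) breaks cooperation even though
    behavior is unchanged."""
    my_source = inspect.getsource(exact_copy_bot)
    return "C" if opponent_source == my_source else "D"


def exact_copy_bot_v2(opponent_source: str) -> str:
    """The same decision procedure with a different name and docstring."""
    my_source = inspect.getsource(exact_copy_bot_v2)
    return "C" if opponent_source == my_source else "D"


print(exact_copy_bot(inspect.getsource(exact_copy_bot)))      # "C": literal copy
print(exact_copy_bot(inspect.getsource(exact_copy_bot_v2)))   # "D": identical behavior,
print(exact_copy_bot_v2(inspect.getsource(exact_copy_bot)))   # "D": trivially different source
```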

At MSFP 2018, a small group (4-5) of us tried tackling this for a couple of hours. We had a few ideas that “felt” promising, but gradually realized that none of them made any sense, and ultimately gave up with the feeling that we hadn’t made any intellectual advances. I only say this to give outside-view evidence of intractability, but it’s difficult for me to concisely communicate why it’s hard (I could say “try it yourself for an hour and you’ll see”, but part of my point is that the hour is better spent elsewhere). For those who insist on inside-view evidence, here’s an outline of one of the ideas we had and why it turned out to be unworkable:

We attempted to partition algorithm-space into equivalence classes that represent “conceptual similarity”, which should be no harder than defining a distance function on the space. By the Curry-Howard correspondence, we can rephrase this as asking when two proofs are similar (this felt easier for us to think about, but that’s entirely subjective). Suppose we have some proof A of size n, and we want to find the proofs that “don’t use any fundamentally different ideas”. The obvious approach is to think of which proofs we can get to with minor edits. If we make some edit of size ϵ⋅n for some small ϵ and the result is still a valid proof, it should be more or less the same proof. If we take the closure under minor edits that preserve validity, it would seem superficially plausible that the resulting proofs would all be similar. However, suppose we discover a one-line proof B that’s totally different from A: then we can append it to A as a minor edit, then gradually delete A with minor edits, until we have a drastically different proof (among other complications).
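
Here is a toy rendition of that failure mode (my own sketch; proofs are stood in for by lists of lines, and validity is not actually checked): once a short unrelated proof B exists, a chain of edits, each small relative to the original size n, walks from A all the way to B.

```python
# Toy illustration of why "take the closure under minor validity-preserving
# edits" collapses: every edit below is small relative to len(A), yet the
# chain ends at a completely different proof.

def edit_chain(proof_a, proof_b, eps=0.2):
    """Walk from proof A to the unrelated short proof B, touching at most
    eps * len(proof_a) lines per edit."""
    max_edit = eps * len(proof_a)
    chain = [list(proof_a)]

    # Append B: touches len(B) lines, a minor edit whenever B is short.
    assert len(proof_b) <= max_edit
    current = list(proof_a) + list(proof_b)
    chain.append(list(current))

    # Delete A's lines one at a time; each deletion is a one-line edit, and
    # the intermediate proofs stay "valid" because B alone already suffices.
    for line in proof_a:
        current.remove(line)
        chain.append(list(current))
    return chain


A = [f"step {i} of a long, intricate argument" for i in range(10)]
B = ["one-line proof using a completely different idea"]

print(edit_chain(A, B)[-1])
# -> ['one-line proof using a completely different idea']
```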

Adversarial Examples

Given some data point x correctly classified by an ML model, a new point x′:=x+ϵ is an adversarial example if it is misclassified despite differing from x only by a tiny perturbation ϵ (e.g. relatively small RGB changes to a few pixels). For every state-of-the-art image classifier tested, if you give me:

  • Any image classified correctly by that model

  • Any target class you would like to have the model misclassify the image as

Then I can usually find some small perturbation of that image that the model classifies as the target class with high confidence (a single-step version of the standard construction is sketched below).
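
Here is a minimal sketch of that construction, assuming a PyTorch model and an image tensor scaled to [0, 1]; the function name and defaults are my own, and real attacks on state-of-the-art models typically iterate this step rather than taking just one.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm_step(model, image, target_class, eps=2 / 255):
    """One targeted fast-gradient-sign step: nudge every pixel by at most
    `eps` in the direction that raises the model's probability of
    `target_class`, keeping the perturbation small in L-infinity norm."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    # Loss we want to *decrease*: cross-entropy against the target class.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient to move toward the target class.
    adversarial = image - eps * image.grad.sign()
    return torch.clamp(adversarial, 0.0, 1.0).detach()
```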

In the classic example, GoogLeNet classifies a panda as a gibbon with 99% confidence. Moreover, adversarial examples have been found to generalize very well across different models, even ones with very different architectures. Last year, a paper came out taking this further: it obtained adversarial examples with the best cross-model generalization and gave these to humans who had only a few seconds to classify each image. Interestingly, the humans were “fooled” in the sense that their snap judgments (those formed by their pure visual system) differed from how they classified the images when given more time for reflection. In terms of robustness to these examples, it seems, our perceptual system by itself is not qualitatively better than today’s classifiers, but our lens can see its own flaws.

The paper was popularized in various places under a bolder headline, namely that there now existed full-blown adversarial examples for humans (reflection or not). This was showcased with a picture from a different part of the paper showing an image of a (somewhat dog-like) cat being given a tiny amount of noise, and subsequently looking like a dog to a human with any amount of visual processing and top-down feedback. This sparked controversy, with many pointing out that a small change (in RGB values) to some visual concept does not necessarily correspond to a small change in concept-space. The paper itself punted on this:

it is philosophically difficult to define the real object class for an image that is not a picture of a real object. In this work, we assume that an adversarial image is misclassified if the output label differs from the human-provided label of the clean image that was used as the starting point for the adversarial image. We make small adversarial perturbations and we assume that these small perturbations are insufficient to change the true class.

And in response to comments, co-author Ian Goodfellow acknowledged on Twitter:

While everyone else was scrambling to finish running experiments for ICML, my co-authors and I were having intense debates about philosophy and semantics and how to write the paper. Some of our open office colleagues were entertained by how surreal this sounded.

Making models robust against adversarial examples remains an outstanding and difficult problem with a considerable paper trail. The problem of merely verifying that a given model has no local adversarial examples (e.g. within a few RGB values of a given data point) has been the subject of some interesting formal verification work in the past couple of years. But to do even this verification work, one needs a formal specification of what an adversarial example is, which in turn requires a formal specification of what a “small change” between (e.g.) images is, one that somehow captures something about conceptual distance. It seems to me that even this smaller problem will be hard to solve in a philosophically satisfying way, because of the inherent subjectivity/fuzziness in defining “distance in concept-space” or anything that even comes close.
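
For concreteness, the local-robustness property that such verification work typically targets looks something like the following, where f returns the classifier’s class scores and the L∞ ball of radius ϵ plays the role of “small change”:

```latex
% Local robustness of classifier f at the point x:
\forall x'.\;\; \lVert x' - x \rVert_\infty \le \epsilon
  \;\Longrightarrow\; \operatorname*{arg\,max}_i f_i(x') = \operatorname*{arg\,max}_i f_i(x)
```

The specification itself is crisp; all the philosophical weight is carried by the choice of ϵ and of the L∞ norm, which is exactly where the question of distance in concept-space gets swept under the rug.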

Distance Functions are Hard: The Evidence

What we are asking for, in all these instances, is some distance function precise enough to be mathematizable in some form, but rich enough to capture the many fuzzy desiderata we have in mind. It seems natural to ask what distance functions of this form have been successfully developed before. The Encyclopedia of Distances comes out to over 700 pages, split roughly in half between those distances used in pure math (especially, as one would expect, topology, geometry, and functional analysis), and those used in applied math, computing disciplines, and the natural sciences.

Of the distance functions listed in the latter half, most were simply “the obvious thing one would do” given the preexisting mathematical structure around the topic in question (e.g. Levenshtein distance on strings). Others were less obvious, but usually because they used nontrivial mathematical machinery to answer specific mathematical questions, not to actually shed light on fuzzy philosophical questions one would have about the domain.
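
Levenshtein distance is a good illustration of “the obvious thing”: strings already come with insert/delete/substitute operations, and the distance is just the minimum number of such edits, computed by a textbook dynamic program. A minimal sketch:

```python
def levenshtein(s: str, t: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn s into t."""
    # prev[j] holds the distance between s[:i-1] and t[:j].
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                 # delete cs
                            curr[j - 1] + 1,             # insert ct
                            prev[j - 1] + (cs != ct)))   # substitute (or match)
        prev = curr
    return prev[-1]


print(levenshtein("kitten", "sitting"))  # -> 3
```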

In the social science section, where most topics had no preexisting mathematical formalism in the first place, virtually none of the distances particularly helped to remedy this fuzziness by themselves. Though I do not claim to have spent that much time flipping through this tome, never did I see a distance notion that struck me as a profound non-mathematical insight, or that even gestured at an “art of coming up with distance functions”.

Conclusions

I conclude, with medium confidence, that each of the questions posed in the first three sections will be particularly hard to answer in a satisfying way, and that if they are answered, it probably won’t be by thinking about distance functions directly.

As a general heuristic, I feel like if you’ve reduced a philosophical problem to “defining the appropriate distance function”, then it’s worth pausing to consider whether you’ve made a wrong-way reduction. Chances are, the distance function you want is inherently value-laden, and so the problem of defining it inherits the difficulty of the value alignment problem itself.

I also think this heuristic is especially salient if you’re trying to capture something like “conceptual similarity/distance”: if you could do this, then you’d have an objective map/taxonomy of (a large fraction of) concept-space.