Yes, perhaps I should’ve been clearer. Learning certain distance functions is a practical solution to some problems, so maybe the phrase “distance functions are hard” is too simplistic. What I meant to say is more like:
Fully-specified distance functions are hard, over and above the difficulty of formally specifying most things, and it’s often hard to notice this difficulty
This is mostly applicable to Agent Foundations-like research, where we are trying to give a formal model of (some aspect of) how agents work. Sometimes we can reduce our problem to defining the appropriate distance function, and it can feel like we’ve made progress, but we haven’t actually gotten anywhere (the first two examples in the post are like this).
The third example, where we are trying to formally verify an ML model against adversarial examples, is a bit different now that I think about it. Here we apparently need a transparent, formally specified distance function if we have any hope of absolutely proving the absence of adversarial examples. And in formal verification, the specification problem often is just philosophically hard like this. So I suppose this example is less insightful, except insofar as it lends extra intuitions for the other class of examples.
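To make that concrete, here is a minimal sketch (my own illustration, not from the post) of how the robustness specification is usually phrased for a toy classifier: the choice of norm defining the perturbation ball is exactly the distance function that has to be fixed up front, and sampling can only falsify the property, whereas a verifier would have to prove it over the entire ball.

```python
import numpy as np

def classify(x, W, b):
    """Toy linear classifier: argmax over class scores."""
    return int(np.argmax(W @ x + b))

def probe_robustness(x, W, b, eps, n_samples=1000, seed=0):
    """Empirically probe the specification
        forall x': ||x' - x||_inf <= eps  =>  classify(x') == classify(x).
    The L-infinity ball here is a *choice* of distance function baked
    into the spec; sampling can falsify the property but never prove it.
    """
    rng = np.random.default_rng(seed)
    y = classify(x, W, b)
    for _ in range(n_samples):
        # Draw a random point inside the L-infinity ball of radius eps.
        x_prime = x + rng.uniform(-eps, eps, size=x.shape)
        if classify(x_prime, W, b) != y:
            return False  # found a (sampled) adversarial example
    return True  # no counterexample among samples -- not a proof

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
print(probe_robustness(x, W, b, eps=0.4))  # → True
```

With `eps=0.4` no perturbation in the ball can flip the argmax for this input, so the probe finds no counterexample; swap in a different norm for the ball and you have silently changed what “adversarial example” means, which is the point.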
Yes, here: https://www.lesswrong.com/posts/QePFiEKZ4R2KnxMkW/posts-i-repent-of