Another interpretation of the no free lunch theorem, from @davidad, is that learning/optimization is trivially easy under worst-case conditions, but also impractical, so you need to add more constraints to get an interesting solution:
https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment#N3avtTM3ESH4KHmfN
Do you think this footnote conveys the point you were making?
As alignment researcher David Dalrymple points out, another “interpretation of the NFL theorems is that solving the relevant problems under worst-case assumptions is too easy, so easy it’s trivial: a brute-force search satisfies the criterion of worst-case optimality. So, that being settled, in order to make progress, we have to step up to average-case evaluation, which is harder.” The fact that solving problems for unnecessarily general environments is too easy crops up elsewhere, in particular in Solomonoff induction. There, the problem is to predict what happens next, assuming only that the environment is computable. The algorithm? Run through every possible computable environment, discard those inconsistent with the observations so far, and average the predictions of the rest, weighting simpler environments more heavily. No algorithm can do better at this task. For less general tasks, designing an optimal algorithm becomes much harder, though eventually specialization makes things easy again: solving tic-tac-toe is trivial. Between total generality and total specialization is where the most important, and most difficult, problems in AI lie.
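For concreteness, here is a toy sketch of that “run through every environment and average” recipe. It substitutes a tiny hand-picked class of deterministic environments for “every computable environment,” so the environment class, description lengths, and function names below are illustrative assumptions, not part of the footnote or of davidad’s comment:

```python
# Toy Bayesian mixture predictor in the spirit of Solomonoff induction:
# enumerate a class of deterministic "environments", weight each by
# 2 ** (-description_length), discard those inconsistent with the
# observed history, and average the survivors' next-bit predictions.
# Real Solomonoff induction ranges over ALL computable environments;
# this finite class is purely illustrative.

def mixture_predict(history, environments):
    """Return the mixture's probability that the next bit is 1.

    environments: list of (description_length, predict_fn) pairs, where
    predict_fn maps a bit history (list of 0/1) to a predicted next bit.
    """
    weights, predictions = [], []
    for length, env in environments:
        # Keep only environments that reproduce every observed bit.
        if all(env(history[:t]) == history[t] for t in range(len(history))):
            weights.append(2.0 ** -length)
            predictions.append(env(history))
    total = sum(weights)
    return sum(w for w, p in zip(weights, predictions) if p == 1) / total

# Hypothetical environment class: constant-0, constant-1, alternating bits.
envs = [
    (1, lambda h: 0),
    (1, lambda h: 1),
    (2, lambda h: len(h) % 2),
]
# Only the "alternating" environment survives the history, so the
# mixture assigns probability 0.0 to the next bit being 1.
print(mixture_predict([0, 1, 0, 1], envs))
```

The brute-force flavor is exactly the point: the “algorithm” is just exhaustive enumeration plus consistency checking, which is optimal in the maximally general setting but useless as a practical recipe.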
Yes, it does convey the point accurately, according to me.
Great! I’ve added it to the site.