I am actually currently working on developing these ideas further, and I expect to be able to put out some material on this relatively soon (modulo the fact that I have to finish my PhD thesis first).
I also think that, in practice, you would probably have to allow some uninterpretable components in order to maintain competitive performance, at least in some domains. One reason for this is of course that there simply might not be any interpretable computer program which solves the given task (*). Moreover, even if such a program does exist, it may well be infeasibly difficult to find (even with the help of powerful AI systems). However, some black-box components might be acceptable (depending on how the AI is used, etc.), and partial successes would still be useful even if the full version of the problem isn’t solved (at least under the assumption that interpretability is useful even when the full version of the interpretability problem isn’t solved).
I also think there is good reason to believe that quite a lot of the cognition that humans are capable of can be carried out by interpretable programs. For example, any problem where you can “explain your thought process” or “justify your answer” is probably (mostly) in this category. I also don’t think that operations of the form “do X, because on average, this works well” necessarily are problematic, provided that “X” itself can be understood. Humans give each other advice like this all the time. For example, consider a recommendation like “when solving a maze, it’s often a good idea to start from the end”. I would say that this is interpretable, even without a deeper justification for why it is a good thing to do. At the end of the day, all knowledge must (in some way) be grounded in statistical regularities; if you ask a sequence of “why”-questions, you must eventually hit a point where you are no longer able to answer. As long as the resulting model itself can be understood and reasoned about, I think we should consider this a success. This also means that problems which can be solved by a large ensemble of simple heuristics are arguably fine, provided that the heuristics themselves are intelligible.
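To make this slightly more concrete, here is a minimal toy sketch (in Python; the maze representation and the function name are made up purely for illustration) of the kind of interpretable heuristic I have in mind: finding a path through a grid maze by searching outwards from the end. Every step of the procedure can be stated and justified in plain language, even though the underlying recommendation is ultimately just “this tends to work well”.

```python
from collections import deque

def solve_maze_backwards(grid, start, goal):
    """Return a path from start to goal, found by searching from the goal.

    grid: list of strings, '#' = wall, anything else = open cell.
    start, goal: (row, col) tuples.
    """
    rows, cols = len(grid), len(grid[0])
    # parent[cell] = the neighbouring cell from which we reached `cell`
    # while expanding outwards from the goal (i.e. one step closer to it).
    parent = {goal: None}
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == start:
            # Follow the parent pointers; they lead from the start back
            # towards the goal, so the recovered path is already in order.
            path, cell = [], start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None  # no path from start to goal exists

maze = ["....#",
        ".##.#",
        ".#...",
        ".#.#.",
        "...#."]
print(solve_maze_backwards(maze, start=(0, 0), goal=(4, 4)))
```

The point is not that this particular program is interesting, but that each of its parts corresponds to a piece of advice that a human could give, understand, and reason about.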
(*) It is also not fully clear to me if it even makes sense to say that a task can’t be solved by an interpretable program. On an intuitive level, this seems to make sense. However, I’m not able to map this statement onto any kind of formal claim. Would it imply that there are things which are outside the reach of science? I consider it to at least be a live possibility that anything can be made interpretable.
I also don’t think that operations of the form “do X, because on average, this works well” necessarily are problematic, provided that “X” itself can be understood.
Yeah, I think I agree with this, and in general with what you say in this paragraph. Along the lines of your footnote, I’m still not quite sure what exactly “X can be understood” should require. It seems to matter, for example, that a human can understand how the given rule/heuristic (or something like it) could be useful. At least if we specifically think about AI risk, all we really need is that X is interpretable enough that we can tell it’s not doing anything problematic (?).