The best we could possibly hope for with transparency techniques is: for anything a neural net is doing, we can get the best possible human-understandable explanation of what it's doing, and of what we'd have to change in the neural net to make it do something different. But this doesn't help us if the neural net relies on concepts that are fundamentally impossible for humans to understand, because they're too complicated or alien. It seems likely to me that such concepts exist. And so systems will be much weaker if we demand interpretability.
That may be "the best we could hope for", but I'm more worried about "we can't understand the neural net (with the tools we have)" than "the neural net is doing things that rely on concepts that are fundamentally impossible for humans to understand". (Or: solving the task requires concepts that are really complicated for the network to learn, though perhaps easy for humans to understand, and so the neural network doesn't acquire them.)
And so systems will be much weaker if we demand interpretability.
Whether or not "empirical contingencies work out nicely", I think the concern about "fundamentally impossible to understand concepts" is something that won't show up in every domain. (I also think there exist things that people can understand, but understanding them takes so much work that people don't do it. There's an example from math: some obscure theorems aren't used much for exactly that reason.)