Tree search reasoning is naturally legible: the “argument” is simply a sequence of board states. In contrast, the neural network is mostly illegible
You can express tree search in terms of a huge tree of board states. You can express neural nets as a huge list of arithmetic. Both are far too huge for a human to read all of.
I don’t think the intuition “both are huge, so they’re roughly equally legible” is correct.
Tree search is decomposable into a specific sequence of board states, which is easily readable; in practice trees are pruned, and can be pruned to human-readable sizes.
This isn’t true for the neural net. If you decompose the information in the AlphaGo net into a huge list of arithmetic, then either the “arithmetic” includes the whole training process, in which case the list is much larger than the tree, or it is just the trained net, in which case it is less interpretable than the tree.
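A toy sketch of the asymmetry (a hypothetical mini-game, not AlphaGo): a depth-limited search returns its best line as an ordered sequence of states, which a human can read move by move, whereas even a tiny net is just a flat list of numbers with no comparable trace.

```python
def best_line(state, depth):
    """Depth-limited search over a toy game where a move adds 1 or 2
    to an integer state. Returns (value, line): the line is a
    human-readable sequence of board states, i.e. the 'argument'."""
    if depth == 0:
        return state, [state]  # toy evaluation: prefer larger state
    # max over (value, line) tuples compares by value first
    return max(
        (lambda v, l: (v, [state] + l))(*best_line(state + move, depth - 1))
        for move in (1, 2)
    )

value, line = best_line(0, 3)
print(line)  # a legible sequence of states: [0, 2, 4, 6]

# By contrast, a trained net's 'explanation' is just its weights:
weights = [0.13, -0.52, 0.07, 0.91]  # no structure a reader can follow
```

The `line` can also be pruned or truncated to a human-readable size; there is no analogous operation on `weights` that yields a readable argument.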