If you have access to the training data, then DNNs are basically theory simulatable, since you can just describe the training algorithm and the initialization scheme. The use of random initialization seems like an obstacle, but we use pseudo-random numbers and can just learn the algorithms for generating those as well.
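The reproducibility claim can be illustrated with a minimal sketch: a toy linear model trained by gradient descent in NumPy, where the "random" initialization comes from a seeded pseudo-random generator. The model, data, and hyperparameters here are all illustrative stand-ins, not anything from a real DNN pipeline.

```python
import numpy as np

def train(seed, X, y, steps=100, lr=0.1):
    # Seeded PRNG: the initialization is pseudo-random, so it is fully
    # determined by the seed plus the generator's algorithm.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])  # "random" init
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Toy data standing in for "the training data".
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

w1 = train(seed=0, X=X, y=y)
w2 = train(seed=0, X=X, y=y)  # re-run: same seed, same data, same algorithm

# On the same machine, the two runs are bit-identical.
assert np.array_equal(w1, w2)
```

Given the seed, the data, and a description of the training algorithm, the trained weights can in principle be recomputed exactly, which is the sense in which the procedure is "simulatable."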
Yes, we would say that DNNs are theory simulatable given the training data. But that’s just another way of saying that the training procedure is transparent. We really care about understanding the model, not just the training process.
Also see another comment here where I clarified that this definition is not intended to be precise. It might be better described as “practically simulatable.”