Second, deep learning works in practice using a reasonable amount of computational resources, whereas even the most efficient versions of Solomonoff induction, such as speed induction, run in exponential time or worse.
But doesn’t increasing the accuracy of DL outputs require exponentially more compute? DL only “works” to the extent that labs have been able to afford exponential compute scaling so far.
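The rebuttal's arithmetic can be made concrete. A minimal sketch, assuming a Chinchilla-style power law L(C) = a·C^(-α) relating loss to training compute (the exponent value here is hypothetical, not from the source): under such a law, cutting loss by any fixed factor multiplies the required compute by a fixed, very large factor.

```python
# Sketch under an assumed power-law scaling: L(C) = a * C**(-alpha).
# alpha is a hypothetical exponent; small values like this are often
# reported for loss-vs-compute curves, but the exact number is illustrative.
alpha = 0.05

def compute_multiplier(loss_ratio: float, alpha: float) -> float:
    """Factor by which training compute must grow to divide loss by `loss_ratio`.

    From L(C) = a * C**(-alpha): halving L requires C to grow by 2**(1/alpha).
    """
    return loss_ratio ** (1.0 / alpha)

# With alpha = 0.05, halving the loss needs 2**20 (about a million) times
# more compute -- steady accuracy gains demand exponential compute growth.
print(compute_multiplier(2.0, alpha))  # → 1048576.0
```

This is why "DL works" and "labs can afford exponential compute scaling" have so far been the same claim: each constant step of improvement costs a constant multiple of compute.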