For @Random Developer: I agree that literal GOFAI is unlikely to make a comeback, because of the fuzziness problems that arise when you only have finite compute (though some GOFAI techniques will probably be reinvented). But I do think a weaker version of GOFAI is still likely to be possible: one that drops the pure determinism and embraces probabilistic programming languages (perhaps InfraBayes is involved, perhaps causal models as well), yet retains interpretability high enough, compared to modern AIs, that retargeting the search remains viable.
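To make "probabilistic but still legible" concrete, here is a minimal toy sketch (my own illustrative example, not anything from the thread, and not a real PPL): a generative model written as ordinary code, where every latent variable is named, the causal story is readable line by line, and inference is just crude rejection sampling.

```python
import random

def weather_model():
    # Prior: is it cloudy today?
    cloudy = random.random() < 0.5
    # Rain depends causally on cloudiness.
    rain = random.random() < (0.8 if cloudy else 0.1)
    # The sprinkler is mostly used on clear days.
    sprinkler = random.random() < (0.1 if cloudy else 0.5)
    # Wet grass is caused by rain and/or the sprinkler.
    wet_grass = rain or sprinkler
    return {"cloudy": cloudy, "rain": rain, "sprinkler": sprinkler, "wet_grass": wet_grass}

def posterior(query, evidence, n=100_000):
    """Estimate P(query=True | evidence) by rejection sampling the model."""
    hits = matches = 0
    for _ in range(n):
        sample = weather_model()
        if all(sample[k] == v for k, v in evidence.items()):
            matches += 1
            hits += sample[query]
    return hits / matches if matches else float("nan")

# e.g. "given the grass is wet, how likely was rain?"
print(posterior("rain", {"wet_grass": True}))
```

The point is only that a model of this shape is inspectable and retargetable by construction, unlike a learned weight matrix.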
The key insight I was missing is that while the world's complexity is very high (so I agree with Random Developer on that point), it's also fairly easy to decompose that complexity into low-complexity parts for specific tasks. This means we don't need to cram all of the world's complexity into memory at once; we can chunk it instead.
That's the part that convinced me powerful AI with very interpretable models is possible at all. What made me update to thinking it's likely is that the bitter lesson is now pretty easy to explain without invoking any special property of the world or of AIs: research labor is roughly constant while compute grows exponentially, so as long as uninterpretable AI can be scaled at all, it will be what gets invested in. And I'm a big fan of boring hypotheses over exciting/deep ones (boring to human minds, anyway; other minds would find different hypotheses boring or exciting).
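A toy calculation of the economic argument (the numbers are mine and purely illustrative): if labor-built capability stays roughly flat while a compute-hungry approach doubles in effectiveness every year, the compute-driven approach wins in about a decade no matter how far behind it starts.

```python
# Illustrative assumptions: hand-engineered capability is flat at 100 units,
# compute-driven capability starts at 1 unit and doubles yearly.
labor_capability = 100.0
compute_capability = 1.0
compute_growth_per_year = 2.0

for year in range(50):
    if compute_capability >= labor_capability:
        print(f"compute-driven approach overtakes after ~{year} years")
        break
    compute_capability *= compute_growth_per_year
```

No special property of intelligence is needed for this dynamic; any exponential input racing a constant one produces the same crossover.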
This is, in a sense, a restatement of the well-known theorem that a hypothesis with an added conjunct is never more likely than the same hypothesis without it.
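Formally, for any hypotheses $A$ and $B$:

$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)$$

with equality only when $B$ is certain given $A$ (or $P(A) = 0$).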
So I mostly agree with this hypothesis.