There was a mathematician (Grothendieck, I think) who developed a method for solving hard problems. Instead of attacking a problem head-on (trying to crack the nut), he would build a framework around it, creating all kinds of useful mathematical abstractions and gradually dissolving the question (the nut) until the solution became evident.
This ties in with:
"And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known."
I would guess that Bayesian networks, or at least Bayesian thinking, were invented before this application to AI, and that afterwards it was just one inferential step to apply them to AI. What if Bayesianism hadn't been invented yet? Would it have made sense to bang your head against the problem, hoping to find a solution? In the same vein, I suspect that many of the remaining problems in AI are too many inferential steps away to be solved directly. In that case, the surrounding knowledge will need to be improved first.