I think the assumption of at least a relatively opaque AI is justified. Except for maybe k-NN, decision trees, and linear classifiers, everything else we currently have to work with is more opaque than Naïve Bayes.
For spam filtering, if we wanted to bump up the ROC AUC a few percent, the natural place to go might be a Support Vector Machine classifier. The training procedure is transparent in that it boils down to optimizing a quadratic function over a convex domain, something we can do efficiently and non-mysteriously. On the other hand, the model it produces is either a linear decision boundary in a potentially infinite-dimensional space or an unspeakably complicated decision surface in the original feature space.
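To make the contrast concrete, here's a minimal sketch using scikit-learn (the dataset and parameters are illustrative, not anything from the spam-filtering setting above): fitting an RBF-kernel SVM is a clean convex optimization, but the fitted model is just a set of support vectors and dual coefficients, nothing like the per-word likelihood ratios a Naïve Bayes filter exposes.

```python
# Hedged sketch: an RBF-kernel SVM on a synthetic binary task.
# Training solves a convex quadratic program (transparent process),
# but the learned decision surface is opaque in feature space.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# RBF kernel: a linear boundary in an implicit infinite-dimensional
# space, i.e. a very complicated surface in the original 20 dimensions.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

# The fitted "explanation" is support vectors plus dual coefficients --
# not something a human can read off the way they can Naive Bayes weights.
print(clf.support_vectors_.shape)
print(clf.score(X, y))
```

The point is that transparency of the *training algorithm* and transparency of the *resulting model* come apart here.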
Something like Latent Dirichlet Allocation is probably a better example of what a mid-level tool-A(not G)I looks like today.
Edit: Please explain the downvote? I’d like to know if I’m making a technical mistake somewhere, because this is material I really ought to be able to get right.