There seems to be some confusion of terms. I don’t remember suggesting anything like that; you seem to be projecting viewpoints that would have a similar-looking output onto this, instead of seeing the suggestion for what it is: a humble yet ambitious attempt to get objective information on what people like, and to provide it to them. Surely incomplete, superficial progress in the scientific study of this field is better than no progress at all?
I was responding to “the mathematization and rationalization of the criteria for what makes fiction good or bad” which doesn’t sound humble and does sound a bit like solving the art form. And I was responding to “over subjective prejudices” (my emphasis), which does sound like you think subjectivity, the core issue, can be trumped. I still read your GP comment the same way, but if you think I misinterpreted then I won’t argue, of course.
I do think there is room for an approach which strictly distinguishes between “art” and “craft” aspects of an art form, and applies reductive or analytical methods to the latter.
If you think art forms aren’t ultimately “solvable” in some way, you’re putting a rather hard limit on the achievements an AI could make. That would be an interesting replacement for the Turing Test: “artificial, mathematical beings suffer from creative sterility; they can’t make good art, and they can’t tell good art from bad.” Is that what you’re suggesting?
As for trumping subjectivity, it’s more that I’d like to build a “critic” or “recommender” that isn’t burdened by the effects of personal bias. Its bias would be the bias of the public at large, on average. It doesn’t “trump subjectivity”, but it mitigates its effect for the sake of recommending to people what they are most likely to like.
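To make that concrete, here is a trivial baseline along those lines: recommend whatever scores highest on average across everyone else. The data and names here are purely illustrative, not a real system.

```python
# A minimal "bias of the public at large, on average" recommender:
# rank items by their mean rating across all users, excluding items
# the target user has already rated. Toy data, for illustration only.

from collections import defaultdict

def recommend(ratings, user, k=2):
    """ratings: list of (user, item, score) tuples."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    seen = set()
    for u, item, score in ratings:
        totals[item] += score
        counts[item] += 1
        if u == user:
            seen.add(item)
    # Mean rating per item, skipping items the user already knows.
    means = {i: totals[i] / counts[i] for i in totals if i not in seen}
    return sorted(means, key=means.get, reverse=True)[:k]

ratings = [
    ("alice", "dune", 5), ("alice", "hamlet", 3),
    ("bob", "dune", 4), ("bob", "ulysses", 2),
    ("carol", "hamlet", 5), ("carol", "ulysses", 3),
]
print(recommend(ratings, "alice"))  # → ['ulysses']
```

Real recommenders go further (per-user similarity, matrix factorization), but even this averaging step is exactly the "public at large" bias: no single critic's taste dominates.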
Recommender systems are a respectable topic in machine learning. That is quite different from “try[ing] to craft a written work with elements known to appeal to people” (OP) or “the mathematization and rationalization of the criteria for what makes fiction good or bad” (GGP).
No, I’m not suggesting any particular limit on what an AI might do in creative domains. I do think a domain like writing or reviewing fiction is probably AI-hard, meaning that sub-AI approaches like statistical machine learning won’t be enough.
I certainly agree with that, but the way to eat an elephant is one mouthful at a time.