Humans include some of their experiences and ideas in their art. This creates “puzzles” that fans can solve. For example, you learn something about the artist’s life, then you look at their pictures and realize that some part of a picture represents something that happened to the artist. Like maybe some woman in the picture resembles the painter’s wife, or a daughter who died young. And once you know it, the picture appears more meaningful to you; you know the story behind it.
But with AI there is no “puzzle” to solve, no story to learn; the picture is what it is simply because other existing pictures were made that way. Even if the AI makes something original, it will be original in the sense of “random”, not in the sense of “the artist felt a strong emotion, and found a unique way to express it”. (At most, the AI will be able to simulate a human feeling strong emotions, and make the art the simulated human would make. Which is… not really what we want.)
It’s like, you can discuss math with an AI and it can be better than talking to a human. But if you ask the AI “what did you do during the winter holidays?”, there are just two options: either it lies and tells you something a human might say when asked the same thing, or it tells the truth: “as an LLM, I don’t really experience winter holidays”. Both options are disappointing, each in a different way.
(On second thought, there is an analogous problem with human artists: just because someone found a nice way to express an emotion doesn’t mean they actually feel it. Maybe the guy who writes amazing poems about love then goes home and beats his wife. But we prefer to imagine that artists’ feelings are genuine.)