I just read the novel at your recommendation, it’s great! And your analysis of Susanne Delage is cool too. However, I just saw that you added a pretty nasty AI slop picture at the top of the article. It’s a puzzling thing about you: you have a good nose for LLM slop, and rightly hate it, but you don’t have the same reaction to slop from image models (which feels just as much a visceral turn-off to some people—for example, me).
I don’t believe it is “AI slop”, much less that it is “pretty nasty”. I consider AI slop to be low-meaning and low-effort generative media which adds little or nothing to the experience.
I assume you are referring to the German Expressionist image (alluding to Nosferatu, which is highly relevant for at least two reasons) illustrating the narrator’s childhood ice-skating in a New England Protestant town in decline because Dracula has taken it over; I generated it in MJ after cracking SD, to sum up the horrifying reality of my solution. I put several hours of thought and effort into the concept and creating it, and got what I wanted, so I think this is just a case of de gustibus non est disputandum. I felt it cleverly encapsulated, visually, the mood of the horror that Gene Wolfe meant to lurk underneath the harmless, nearly-bucolic appearance of SD, and that it enhanced the experience.
So I think it satisfies my 3 criteria: it is not low-meaning, was not low-effort, and adds something.
But I don’t think this is a good place to discuss it, so I have added a more detailed discussion of that image’s process & meaning to my image slop blog post as an example of how I think I get good image samples.
EDIT: I would be curious about the disagrees. What, exactly, are you disagreeing with? Do you think I am lying about the creation process, the prompt, or the meaning? (I would point out that a short version of this description was already in the alt text, and has been since I added the image in the first place, c. November 2023.) Do you disagree that the high concept reflects my SD interpretation? Or what?
People dropping in on an unfamiliar website can have very hair-trigger reactions to any sort of AI art. I heard someone say they felt like immediately writing off a (good) Substack post as fake content they should ignore because of the AI art illustration at the top of the post. And I think the illustration generator is a built-in option on Substack, because I see constant AI illustrations on Substacks of people who are purely writers and who, as far as I can tell, aren’t very interested in art or web design. But this person wasn’t familiar with Substack, so their brain just went “random AI slop site, ignore”.
I think that it’s a pity if people write off my SD page because they failed to understand the meaningful illustration I put effort into creating and didn’t, say, check the alt text to see if they were missing something or wonder why such an unusual website would have “AI slop”; and I agree that this may be a case of “things you can’t countersignal”.
However, I refuse to submit to the tyranny of the lowest common denominator and dumb down my writings or illustrations. I don’t usually write for such readers, and I definitely do not write my Gene Wolfe essays for them!
So unless people can point to something actually bad about the illustration, which makes it fail to satisfy my intent—as opposed to something bad about the readers like being dumb and ignorant and writing it off as “AI slop” when it’s not—then I decline to change it.
Sorry, I wrote a response and deleted it. Let me try again.
I don’t know exactly what makes AI images so off-putting to me. The bare fact is that, to me, this image looks obviously AI-made and is really unpleasant to see. I don’t know why some people react to AI images this way and others don’t.
My best guess is that AI images would begin to look more “cursed” to you if you spent some days or weeks drawing stuff with pencil and paper, maybe starting with some Betty Edwards exercises. But that’s just a guess, and maybe you’ve done that already.
I have some of the same feeling, but internally I’ve mostly pinned it to two prongs of repetition and ~status.
ChatGPT’s writing is increasingly disliked by those who recognize it. The prose is poor in various ways, but I’ve certainly read worse and not been so put off. Nor am I as put off when I first use a new model; rather, I increasingly notice its flaws over the next few weeks. The main factor is that the generated prose is repetitive across writings, which ensures we can pick up on the pattern and makes its flaws easy to predict.
It is much like how I avoid generic power-fantasy fiction: much of it is very predictable in how it will fall short, even though much of it would still be positive-value if I didn’t have other things to do with my time.
So I think a substantial part is recognizing the style: there are flaws you’ve seen in many images in the past, and then, regardless of whether this specific image is that problematic, the mind associates it with those negative instances and with being overly predictable.
Status-wise, I don’t mean this entirely in a negative status-game sense. A generated image is a sign that it probably wasn’t that much effort for the person making it, and the mind has learned to associate art with effort and status to a degree, even if only the indirect effort and status of the original artist an article references.
And so it is easy to learn a negative feeling towards these images, one which attaches itself to the noticeable shared repetition and tone, just as some people dislike pop music partly for status considerations (it being made by celebrities, or countersignaling by not wanting to go for the most popular thing), and that then feeds into an actual dislike of the musical style itself.
But this instinct activates too easily; it misfires, so I’ve deliberately tamped it down in myself, because I realized that there are plenty of images which, five years ago, would simply have impressed me and which I would have found visually appealing. I think the instinct is to a degree real (generated images can be poorly made), while also feeding on itself in a way that disconnects it from my past preferences.
I don’t think the poorly made images should notably influence my enjoyment of better-quality images, even if there is a noticeable shared core. So that’s my suggestion.
‘Repetition’ is certainly a drawback to the ChatGPT style: we have lost em dashes and tricolons for a generation. But it can’t in its own right explain the reaction to the SD image, because… ‘German Expressionist linocut’ just doesn’t describe a default, or even a common, output style of any image generative model ever. (That’s part of why I like to use ‘linocut’ as a keyword, and for better or worse, people who might reach for ‘German Expressionist’ these days typically reach for Corporate Memphis instead.)
It could however be a kneejerk reaction: “oh no, this is a generated image, therefore it is exhaustingly overused and boring [even if it isn’t actually]”.
I have a bit of a problem with Graham’s argument. As you continue to design things, two different processes happen:
1) your mastery of the purely technical aspects of the craft improves (e.g. you learn to use more tools and to use them better, you learn more techniques, etc.). This makes you better at translating the image in your head into an actual material thing. It improves your agency. It does not mean your taste is better; rather, whatever your taste is, the product will match it more closely and will be less random;
2) you will be exposed to more aesthetics and examples of other people’s work, and this will in turn affect and transform your own aesthetics. To some extent, this might mean “improving” them insofar as you yourself aren’t necessarily aware of what exactly best tickles you. So, in a parallel to the first process, where the thing-outside-you better matches the thing-inside-you, you may also learn how to make the thing-inside-you better match the thing-that-gives-you-good-feelings. But also, as you get exposed to all this churn of aesthetics and of your own style, your feelings change too. And this, I surmise, is a purely horizontal change. It’s not about them becoming better. In fact it’s often about you becoming bored of the common, obvious thing and moving on to the next, and then the next, in pursuit of a new dopamine kick as the old stuff becomes samey and unremarkable, like a junkie. You end up with a taste that is probably unusual, extravagant, or at least much more complex than the average Joe’s.
I think 2) is what people actually mean by “good taste”. I don’t think it’s necessarily actual “good taste” in any objective sense, but rather the taste of those who happen to be very good at their craft and to dominate the scene, so that they are trend-setters. But how often have the fortunes of art turned completely? Any century’s artists, if presented with the works of those two hundred years later, would likely have called them in horrible taste. Has taste simply improved over time, like a science? And why is it, then, that present-day ultimate taste so often seems to resonate less with the average person than the old taste did? By what metric, precisely, is it best?
The situation with the AI thing is actually quite relevant. If you see it for the first time, you might be left in awe by it. If you see it a hundred times, you pick up on the patterns and the tricks. I’ve experienced the same with human authors—writers especially: read enough of them and you start noticing the prose tricks and style features repeating over and over again, until at some point it all feels stale and meaningless. But does that mean that each of those things is, individually, just objectively Bad in some sense? It’s not they who changed. They’re the same works that impressed you the first time. You changed.