It’s hard to say what the largest literary flaw is. The story is a big mix of strengths and weaknesses. But the final exam in particular felt weak to me even by its own standards: it’s not a good description of a difficult final encounter.
For me, the gold standard for a final encounter is the ending of the video game Veil of Darkness (I haven’t played it, but Ross Scott’s review recounts the whole thing). Basically, we’re an average guy, and we need to defeat an elder vampire who has enslaved a valley of people for a thousand years and stolen all their sunlight. And here’s how the final battle goes: 1) we steal the vampire’s box containing all the stolen sunlight; 2) we nail his coffin shut; 3) we wear a garlic necklace; 4) we eat a mushroom that makes us temporarily blind; 5) we confront him directly; 6) he tries to mind-control us, but fails due to our blindness; 7) he tries to physically attack us, but fails due to the necklace; 8) we open the box of stolen sunlight at him, thus hitting him with an attack proportional to his age (a nice detail, isn’t it?); 9) he staggers, turns into a bat, and flies away to rest in the coffin; 10) but the coffin is nailed shut, so we finally stake him and cut off his head. There were a few other attacks too, but these are the main ones.
See what happened? The story makes the average guy defeating a thousand-year-old vampire sound actually believable. Because that’s how much work and preparation it takes to do something difficult. And this lesson you can apply in real life, unlike the lesson from HPMOR’s final exam.
I also criticized HPMOR’s Final Exam at the time, though for reasons of story consistency, rather than narrative.
That said, I don’t think the particular kind of satisfying conclusion you wanted to see works for rationalist fiction like HPMOR. After all, the premise is that all characters, protagonists and antagonists alike, have their own spark of optimization, genre savvy, and so on. So they know how these stories are supposed to go (the Hero wins, the Dark Lord loses, etc.), imagine how they could be defeated, and preempt those scenarios as best they can.
So in a rationalist version of that game finale, the elder vampire takes precautions against having his precious box stolen; protects his coffin from tampering; has overcome his weakness to garlic, or found a workaround (like a gust of wind spell or something), or faked having the weakness in the first place; etc. etc.
The most likely way for a prepared adversary to lose in such a situation is through a surprise, an out-of-sample error. That may not be as narratively satisfying, but it makes a lot more sense than for an elder vampire to die because an average human learned about his weaknesses. As if the vampire wasn’t aware of those weaknesses himself and didn’t have ample time to compensate for them.
An instructive (and fun) example is the case of Cazador Szarr (an antagonist in Baldur’s Gate 3).
(Spoilers, though not very important ones, below for anyone who hasn’t played BG3.)
Cazador is a vampire lord—old and very powerful. Astarion (one of the companion characters in the player’s party, and himself one of Cazador’s spawn, formerly[1] in the vampire lord’s thrall), in the course of telling the player character about Cazador (and explaining why Cazador never turns his spawn into full-fledged independent vampires—despite this being possible and indeed very easy—and instead keeps them as thralls under his absolute command), says that “the biggest threat to a vampire… is another vampire”.
In the normal course of events, it would be totally unbelievable for the player character to defeat Cazador. (Indeed, you would never even learn of his existence.) What makes Cazador’s downfall possible is the introduction of an Outside Context Problem, in the form of… well, the main plot device of the game.
However, the way things proceed is not just that Cazador is happily vampire-lording along, and then one day, bam! plot device’d right in the face! No, instead what happens is that the main plot device is injected into the normal state of affairs, things get shaken up, but what this does is allow for the possibility of Cazador being defeated, by radically changing the balance of forces in a way that he could not have foreseen. Then it’s up to the good guys (i.e., the player character & friends) to take advantage of being the right people in the right place at the right time, and exploit their sudden and temporary advantage, their brief window of opportunity, to take down Cazador.
Thus we get the best of both worlds: the enemy can be powerful and intelligent, but their defeat is nevertheless believable and satisfying.
It’s complicated.
Say a 1000-year-old vampire spent the first 500 years thinking of every possible adversary. They’re well defended against anything that existed in the year 1500. Too bad they haven’t really kept up to date with modern tech.
Or, well, most people don’t wear a bulletproof vest every day. Often cost and convenience trump protection when people aren’t expecting to be attacked.
If a powerful antagonist is dumb or shortsighted enough, anyone can kill them, but what stories go out of their way to claim that their Big Bad is dumb? That’s usually the role of side characters or mooks, not of the Big Bad.
Plus it takes a certain kind of survival instinct to survive for 1000 years in the first place.
I agree with the tradeoff of safety vs. convenience, but there are many types of preparation that require a one-off investment, rather than an ongoing inconvenience. Cost, though, should not matter to most antagonists, since they typically far exceed the protagonists’ resources.
Hmm… this setup seems to cheat by withholding from vampires one of their most well-known and archetypal powers, namely the ability to turn into mist. (Dracula in Bram Stoker’s depiction can do this, vampires in D&D can do this, lots and lots of other examples.)
It also cheats by making the vampire stupid:
Why is the coffin in an accessible location—rather than, say, sealed away in a secret chamber that is accessible only via a small passage that can be navigated only by a creature the size of a bat? (Or, if we let the vampire have a mist form ability, a chamber accessible only via tiny, carefully concealed air holes, through which only a gaseous entity can pass.)
Why is there only one coffin, instead of several? (Once again, this particular failure mode is completely absent from Bram Stoker’s novel, for example, where Dracula, who needs to sleep in grave soil from the place where he was buried, has fifty containers of such soil distributed throughout the city; if one is compromised, well, he’ll just use the next one! Such tricks are likewise used by e.g. Strahd von Zarovich—D&D’s most famous vampire—and by many other fictional vampires.)
A thousand years is a long time to not have thought of such things…
Yeah. There were several other attacks that I omitted—something with holy water, something with a book, something with the vampire’s true name—maybe one of them did something about the mist form, or maybe not, I don’t know the lore that well tbh. And yeah, in a thousand years a vampire could probably figure out how to protect themselves pretty well, so to write a story where the average guy wins, there must be a bit of a stretch somewhere. Anyway, my point is that this is still a more realistic depiction of how hard problems get solved. Or a more actionable one, at least.
Interestingly, this is another point on which Bram Stoker’s Dracula is very well thought-out. Stoker is well aware that with his rules, Dracula ought to be invincible… But Dracula has the liability that he’s been stultified mentally by centuries of quasi-imprisonment, and so hasn’t yet understood or experimented with his powers.
He is slowly waking up and, in doing so, starting to understand that he can, e.g., move his coffins himself without hirelings, but only right as the protagonists hunt him down. It is only by hours or minutes that they manage to cut him off from each resource. With another day or two, Dracula would have realized he could, say, just bury a bunch of coffins deep underground in the dirt, and he would be immune to discovery or attack.
Really, the novel is shockingly rationalist, and that’s why I call it ‘the Vampire Singularity’. Dracula is undergoing a hard takeoff, as it were, which is just barely interrupted by the protagonists.
I just read the novel at your recommendation, and it’s great! And your analysis of Suzanne Delage is cool too. However, I just saw that you added a pretty nasty AI slop picture at the top of the article. It’s a puzzling thing about you: you have a good nose for LLM slop, and rightly hate it, but you don’t have the same reaction to slop from image models (which feels just as much of a visceral turn-off to some people—for example, me).
I don’t believe it is “AI slop”, much less that it is “pretty nasty”. I consider AI slop to be low-meaning and low-effort generative media which adds little or nothing to the experience.
I assume you are referring to the German Expressionist image alluding to Nosferatu (which is highly relevant for at least two reasons), illustrating the narrator’s childhood ice-skating in a New England Protestant town in decline due to Dracula taking it over; I generated it in MJ after cracking SD, to sum up the horrifying reality of my solution. I put several hours of thought and effort into the concept and into creating it, and got what I wanted, so I think this is just a case of de gustibus non est disputandum. I felt it cleverly visually encapsulated the mood of the horror that Gene Wolfe meant to lurk underneath the harmless, nearly-bucolic appearance of SD, and enhanced the experience.
So I think it satisfies my 3 criteria: it is not low-meaning, was not low-effort, and adds something. But I don’t think this is a good place to discuss it, so I have added a more detailed discussion of that image’s process & meaning to my image slop blog post as an example of how I think I get good image samples.
EDIT: I would be curious about the disagrees. What, exactly, are you disagreeing with? Do you think I am lying about the creation process, the prompt, or the meaning? (I would point out that there was already a short version of this description in the alt text, and has been since I added it in the first place c. November 2023.) Do you disagree that the high concept reflects my SD interpretation? Or what?
People dropping in on an unfamiliar website can have very hair-trigger reactions to any sort of AI art. I heard someone say they felt like immediately writing off a (good) Substack post as fake content they should ignore because of the AI art illustration at the top of the post. And I think the illustration generator is a built-in option on Substack, because I see constant AI illustrations on Substacks of people who are purely writers and who, as far as I can tell, aren’t very interested in art or web design. But this person wasn’t familiar with Substack, so their brain just went “random AI slop site, ignore”.
I think that it’s a pity if people write off my SD page because they failed to understand the meaningful illustration I put effort into creating and didn’t, say, check the alt text to see if they were missing something or wonder why such an unusual website would have “AI slop”; and I agree that this may be a case of “things you can’t countersignal”.
However, I refuse to submit to the tyranny of the lowest common denominator and dumb down my writings or illustrations. I don’t usually write for such readers, and I definitely do not write my Gene Wolfe essays for them!
So unless people can point to something actually bad about the illustration, which makes it fail to satisfy my intent—as opposed to something bad about the readers like being dumb and ignorant and writing it off as “AI slop” when it’s not—then I decline to change it.
Sorry, I wrote a response and deleted it. Let me try again.
I don’t know what exactly makes AI images so off-putting to me. The bare fact is that, to me, this image looks obviously AI-made and is really unpleasant to see. I don’t know why some people react to AI images this way and others don’t.
My best guess is that AI images would begin to look more “cursed” to you if you spent some days or weeks drawing stuff with pencil and paper, maybe starting with some Betty Edwards exercises. But that’s just a guess, and maybe you’ve done that already.
I have some of the same feeling, but internally I’ve mostly pinned it to two prongs: repetition and ~status.
ChatGPT’s writing is increasingly disliked by those who recognize it. The prose is poor in various ways, but I’ve certainly read worse and not been so put off. Nor am I as put off when I first use a new model; rather, I increasingly notice its flaws over the following weeks. The main factor is that the generated prose is repetitive across writings, which lets us pick up on the pattern and makes its flaws easy to predict. In the same way, I avoid a lot of generic power-fantasy fiction, because much of it is very predictable in how it will fall short, even though much of it would still be positive value if I didn’t have other things to do with my time.
So I think a substantial part of it is recognizing the style: there are flaws you’ve seen in many images in the past, and then, regardless of whether this specific image is actually that problematic, the mind associates it with those negative instances and with being overly predictable.
Status-wise, this is not entirely a matter of negative status games. A generated image is a sign that it probably did not take much effort from the person using it, and the mind has learned to associate art with effort and status to a degree, even if only the indirect effort and status of the original artist the article is referencing. And so it is easy to learn a negative feeling towards these images, one which attaches itself to the noticeable shared repetition and tone. It is much like how some people dislike pop music partly for status reasons (it’s made by celebrities, or they’re countersignaling by not going for the most popular thing), and that then feeds into an actual dislike for that style of music.
But this reaction activates too easily; it is a misfiring set of instincts, so I’ve deliberately tamped it down in myself, because I realized that there are plenty of generated images by which, five years ago, I would simply have been impressed and which I would have found visually appealing. I think this instinct is to a degree tracking something real (generated images can be poorly made), while also feeding on itself in a way that disconnects it from past preferences. I don’t think that poorly made images should notably influence my enjoyment of better-quality images, even if there is a shared, noticeable core. So that’s my suggestion.
‘Repetition’ is certainly a drawback to the ChatGPT style: we have lost em dashes and tricolons for a generation. But it can’t in its own right explain the reaction to the SD image, because… ‘German Expressionist linocut’ just doesn’t describe a default, or even a common, output style of any image generative model ever. (That’s part of why I like to use ‘linocut’ as a keyword, and for better or worse, people who might reach for ‘German Expressionist’ these days typically reach for Corporate Memphis instead.)
It could however be a kneejerk reaction: “oh no, this is a generated image, therefore it is exhaustingly overused and boring [even if it isn’t actually]”.
I have a bit of a problem with Graham’s argument. As you continue to design things, two different processes happen:
1) Your mastery of the purely technical aspects of the craft improves (e.g. you learn to use more tools and use them better, you learn more techniques, etc.). This makes you better at translating the image in your head into an actual material thing. It improves your agency. It does not mean your taste is better, but rather, whatever your taste is, the product will match it more closely and will be less random;
2) You will be exposed to more aesthetics and examples of other people’s work, and this will in turn affect and transform your own aesthetics. To some extent, this might mean “improving” them, insofar as you yourself aren’t necessarily aware of what exactly best tickles you. So in a parallel to the first process, where the thing-outside-you better matches the thing-inside-you, you may also learn how to make the thing-inside-you better match the thing-that-gives-you-good-feelings. But also, as you get exposed to all this churn of aesthetics and of your own style, your feelings change too. And this, I surmise, is a purely horizontal change. It’s not about them becoming better. In fact it’s often about you becoming bored of the common, obvious thing, and moving on to the next, and then the next, in pursuit of a new dopamine kick as the old stuff is now samey and unremarkable, like a junkie. You end up with a taste that is probably unusual, extravagant, or at least much more complex than the average Joe’s.
I think 2) is what people actually mean by “good taste”. I don’t think it’s necessarily actual “good taste” in any objective sense, but rather, the taste of those who happen to all be very good at their craft and dominate the scene, so they are trend-setters. But how often have the fortunes of art turned completely? One century’s artists, if presented with the works of artists two hundred years later, would likely have called them in horrible taste. Has taste just improved through time, like a science? And why is it, then, that the present-day ultimate taste often seems to resonate less with the average person than the old one did? By what metric is it precisely best?
The situation with the AI thing is actually kind of relevant. If you see it for the first time, you might actually be left in awe by it. If you see it a hundred times, you pick up on the patterns and the tricks. I’ve experienced the same with human authors—writers especially: you just read enough of them and you start noticing the prose tricks and style features repeating over and over again, and at some point it feels like it’s stale and meaningless. But does that mean that each of those things is individually, objectively Bad in some sense? It’s not them who changed. They’re the same works that impressed you the first time. You changed.
On the other hand, a vampire who had gone undefeated for a thousand years might also get overconfident and sloppy.