readers (who don’t know you) who see this particular AI image and trust it to be vibe-accurate even though they know most AI images are worthless? Why would they make that judgement?
I think a crucial point here is that we’re not just getting an arbitrary AI-generated image; we’re getting an AI-generated image that the author of the blog post has chosen to include and is claiming to be a vibes-accurate reproduction of a real photo. If you think the author might be trying to trick you, then you should mistrust the image just as you would mistrust his verbal description. But I don’t think the image is meant to be proof of anything; it’s just another way for the author to communicate with a receptive reader. “The vibe was roughly like this [embedded image]” is an alternative to (or augmentation of) a detailed verbal description of the vibe, and you should trust it roughly as much as you would trust the verbal description.
I largely agree with your point here. I’m arguing more that in the case of a ghiblified image (even more so than a regular AI image), the signals a reader gets are these:
1. the author says “here is an image to demonstrate the vibe”
2. the image is AI-generated, with obvious errors
For many people, #2 largely negates #1, because #2 also implies these additional signals to them:
the author made the least possible effort to show the vibe in an image, and
the author has a poor eye for art and/or bad taste.
Therefore, the author probably can’t even tell whether an image captures the vibe.