It is possible that jefftk put in a lot of effort to make sure the generated vibe is as accurate as it could reasonably be.
I didn’t. I picked out a photo that I was going to use to illustrate the piece; one host asked me not to use it because of privacy, and another suggested Ghiblifying it and made one quickly on their phone. We looked at it and thought it gave the right impression despite the many errors.
I didn’t think you did, and wasn’t trying to imply you did. I was only illustrating how it wouldn’t even matter if you had.
The vibe of the generated image is far closer to the real party than the image you linked.
Ok...? That’s fine, I guess, but irrelevant: my point is that until you stated this, deep in the comments, I could not have known it.
I am surprised and disappointed that you responded in that way, since I’ve tried to be clear that I am not talking about whether or not the image you posted for that party is representative of the party you attended. It makes no difference to anything I’m arguing whether it is or isn’t.
I am saying that no reader (who wasn’t at the event) can ever trust that any AI-generated image attached to a blog post is meaningfully depicting a real event.
I am not sure you’re seeing this from outside your own perspective. From your view, comparing it to the original, it’s good enough. But you’re not the audience of your blog, right? A reader has none of that information. They just have an AI slop image (and I’m not trying to use that to be rude, but it fits the bill for the widely accepted term), and so they either accept it credulously, as they accept most AI slop to be “true to the vibe,” whether it is or isn’t (which should be an obviously bad habit); or they throw it away, as they do with most AI slop, to prevent it from polluting their mental model of reality. In this model, all readers are worse off for it being there. Where would a third category fit in, of readers (who don’t know you) who see this particular AI image and trust it to be vibe-accurate even though they know most AI images are worthless? Why would they make that judgement?
EDIT: I have no idea why this comment was received so negatively either. I think everything in it is consistent with all my other comments, and I’m also trying to wrangle the conversation back on topic repeatedly. I think I’ve been much more consistent and clear about my arguments than the people responding to me, so this is all very confusing. It definitely feels like I’m being downvoted ideologically for having a negative opinion of AI image generation.
Where would a third category fit in, of readers (who don’t know you) who see this particular AI image and trust it to be vibe-accurate even though they know most AI images are worthless? Why would they make that judgement?
The fact that the author decided to include it in the blog post is telling enough that the image is representative of the real vibes. There isn’t just an “AI slop image”; there is also the author’s intent to use it as a quick glance into the real vibes, faster and more accurate than words alone would have been.
Sorry, I wrote my own reply (saying roughly the same thing) without having seen this. I’ve upvoted and strong agree voted, but the agreement score was in the negative before I did that. If the disagree vote came from curvise, then I’m curious as to why.[1]
It seems to me that moonlight’s comment gets to a key point here: you’re not being asked to trust the AI; you’re being asked to trust the author’s judgment. The author’s judgment might be poor, and the image might be misleading! But that applies just as well to the author’s verbal descriptions. If you trust the author enough that you would take his verbal description of the vibe seriously, why doesn’t his endorsement of the image as vibe-accurate also carry some weight?
Yes, I did cast a disagree vote: I don’t agree that “The fact that the author decided to include it in the blog post is telling enough that the image is representative of the real vibes” is true when it comes to an AI-generated image. My reasoning for that position is elaborated in a different reply in this thread.
readers (who don’t know you) who see this particular AI image and trust it to be vibe-accurate even though they know most AI images are worthless? Why would they make that judgement?
I think a crucial point here is that we’re not just getting an arbitrary AI-generated image; we’re getting an AI-generated image that the author of the blog post has chosen to include and is claiming to be a vibes-accurate reproduction of a real photo. If you think the author might be trying to trick you, then you should mistrust the image just as you would mistrust his verbal description. But I don’t think the image is meant to be proof of anything; it’s just another way for the author to communicate with a receptive reader. “The vibe was roughly like this [embedded image]” is an alternative to (or augmentation of) a detailed verbal description of the vibe, and you should trust it roughly as much as you would trust the verbal description.
I largely agree with your point here. I’m arguing more that in the case of a Ghiblified image (even more so than a regular AI image), the signals a reader gets are these:
1. the author says “here is an image to demonstrate the vibe”
2. the image is AI generated, with obvious errors
For many people, #2 largely negates #1, because #2 also implies these additional signals to them:
- the author made the least possible effort to show the vibe in an image, and
- the author has a poor eye for art and/or bad taste.
Therefore, the author probably doesn’t know how to even tell whether an image captures the vibe or not.
Hell, I forgot about the easiest and most common (not by coincidence!) strategy: put emoji over all the faces and then post the actual photo.
EDIT: who is disagreeing with this comment? You may find it not worthwhile, in which case downvote, but what about it is actually arguing for something incorrect?
If I did that, people in the photos would often still be recognizable: the photo retains completely accurate posture, body shape, skin color, clothing, and height. I’ve often recognized people in this kind of image.
(I haven’t voted on your comment, but I suspect this is why it’s disagree voted)
That does make sense WRT disagreement. I wasn’t intending to fully hide identities even from people who know the subjects, but if that’s also a goal, it wouldn’t do that.
[1] No passive aggression intended here; I respect the use of a disagree vote instead of a karma downvote.