DumbFckFinder (DFF)

AI image generation is now good enough to be ~indistinguishable from real photos, and anyone can do it for $0.00. The epistemic consequences of this haven’t sunk in for everyone. Recently there has been a rise in actual instances of AI-generated but real-looking US politics content going viral.

“So that guy sprayed something on Ilhan Omar at her speech, yeah? That was staged. Fake as a Somali-Minnesotan daycare, bro. You could analyze the various suspicious facts: Ilhan and Spray Man’s brief eye contact prior to the attack (obviously a signal), pretending she would physically confront Spray Man instead of being scared (especially being a woman), and then continuing her speech instead of going to a hospital or even washing the mysterious liquid off (because she knew it wasn’t poison, surely). You could think about all that. OR, you could just look at this pic from Spray Man’s social media:”

This was roughly the discourse witnessed in a certain corner of X regarding the incident: the image of Ilhan smiling with Spray Man accrued millions of views across several posts before the whole thing was swept away by the ever-shortening news cycle.

But, the crucial damning image…

Hold that thought.

Meet DumbFckFinder.

Many of the account’s posts are silly, obviously-parody images and videos. Note the “DFF” watermark. These seem like innocent fun—no harm done. But...

And the Spray Incident wasn’t the only time the account’s content has successfully “found dumbfcks”.

Some X posts with these images are later amended with a Community Note (while other posts with the same image aren’t). The Note sends a notification to users who liked or retweeted the post. But realistically, it sometimes comes well after the fact, and clicking through a verbose, dense Note to figure out which post it’s even talking about is beyond many users’ pain-in-the-ass threshold. Not to mention, people who merely scrolled past the post don’t get a notification at all. It is far from a perfect system. And elsewhere, on many repost sites, there is no Community Note system whatsoever: the post is either up, or it’s taken down. Even if a commenter points out the fake, most users won’t see it. All that to say, tons of people who saw these photos never got the memo and think they’re real to this day.

It’s not always for lack of trying to find out if they’re real.

@grok is this real

Fortunately, these “@grok is this real” posts do attract snobbish LLM-whisperers objecting condescendingly in the comments, so the message that LLMs don’t work like that gets relentlessly, publicly repeated to content creators making this category of mistake.

Speaking of people making honest mistakes from misunderstanding AI, I would bet the person who made this image was not intentionally trolling like DFF:

This upscaled image of a blurry frame from a video of the shooting of Alex Pretti went viral, despite the upscaler’s mistake on the kneeling agent’s head, not to mention his battlebot right foot and the other agent’s crossbow.

Even after noticing that the image is fake (in hindsight, obviously fake), looking at the detail of the flushed, downturned face of a dying man is deeply sad.

It is oh so easy to believe what you see when it confirms your biases and suspicions. Even though you know you shouldn’t believe things too easily, the mood behind politics-as-leisure tends not to be one of explicitly calculating conditional probabilities and so on. There are so many posts to be angry-then-sad-then-laugh at. It takes conscious, active effort to resist the urge to go next—and instead thoughtfully digest the information, practicing epistemic hygiene.
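To make the neglected arithmetic concrete (with made-up illustrative numbers, not anything measured): suppose your prior that a juicy political photo is real is 50%, real photos always look real, and fakes now look real 90% of the time. Then “it looks real” barely moves you:

$$
P(\text{real} \mid \text{looks real}) = \frac{1.0 \times 0.5}{1.0 \times 0.5 + 0.9 \times 0.5} \approx 0.53
$$

The more photorealistic the fakes get, the closer that posterior drifts back to your prior, and the image itself stops being evidence of anything.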

I know this trap is easy to fall into because what inspired me to write this post was ~”falling for” one of the images above, which I saw in passing (I would have bet $1 it was real, but not $10). I only learned that the image was fake days after the fact, when it came up in a conversation with, ironically, an LLM. (That’s right; you’re reading the writing of a certified “dumbfck”.)

I am always delighted to have my misconceptions about science demonstrably proven wrong, but politics is a more personal, intimate genre of belief; saying something dead wrong and being corrected felt genuinely embarrassing (especially when the account name called me a “dumbfck”). “On closer inspection it does look kind of AI-y after all,” I coped sourly (I can tell from some of the pixels and from seeing quite a few slops in my time). It was a painful reminder of how difficult it is to allow zero non-truths to sneak into your world model.

But, while I was writing this post, I came across the image below, and before I could even start consciously debating its veracity, my brain aggressively defaulted to labeling it as “AI until proven otherwise”. (And it’s not because the man pictured is supposed to be dead, but that’s a different conversation.) It was a learned instinct—good old-fashioned “Fool me once, shame on you. Fool me twice, shame on me.”

There is a learning curve to living in a world where you always have to do more than “see it” to believe it. But I think we are beginning to move up that curve. Some of the people viewing posts like this as we speak will soon do what I did: be corrected, and undergo a genuine, natural bout of epistemic self-reflection, a checkup on how consciously they abide by some up-to-date policy on the spectrum of “Security Mindset and Ordinary Paranoia”. Most people wouldn’t put it in those words, but they are capable of doing something along those lines: appropriately increasing how skeptical they are.

One big impediment to this is the pattern of: <user engages with the content once, briefly, and then is never confronted with the fact that it was AI-generated>. It is up to websites to implement systems like Community Notes to educate the masses about this disinformation hazard. (Putting more obvious watermarks on images from ChatGPT, Grok Imagine, etc. would curb the issue, but the genie is out of the bottle with open-source models. Still, if that bearded-Epstein image was generated with Gemini, could Google be involved in detecting its use for disinformation?)
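As a sketch of what one piece of detection could look like from the outside (a crude heuristic of my own, not anything Google or X actually runs): provenance standards like C2PA “Content Credentials” embed a manifest in the image file itself, and the mere presence of that manifest can be checked locally. Invisible watermarks like Google’s SynthID, by contrast, can only be verified through the provider’s own tools, which is part of why the provider would need to be in the loop.

```python
# Minimal sketch, NOT a real verifier: checks whether an image file appears to
# carry a C2PA "Content Credentials" manifest. C2PA stores its manifest in
# JUMBF boxes, so scanning the raw bytes for those markers is a rough presence
# check. It does no cryptographic validation of the manifest.
import sys
from pathlib import Path


def has_c2pa_manifest(path: str) -> bool:
    """Crudely detect C2PA/JUMBF markers in a file's raw bytes.

    Absence proves nothing: a screenshot or re-encode strips the metadata.
    Presence only means provenance data exists, not that the image is fake.
    """
    data = Path(path).read_bytes()
    return b"c2pa" in data or b"jumb" in data


if __name__ == "__main__":
    for image in sys.argv[1:]:
        verdict = "C2PA metadata present" if has_c2pa_manifest(image) else "no C2PA metadata found"
        print(f"{image}: {verdict}")
```

The obvious weakness is also the point: one screenshot or re-encode and the manifest is gone, so metadata alone can’t catch disinformation after it starts spreading. That is exactly where provider-side watermark detection (à la SynthID) would have to come in.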

NYC’s Mayor Mamdani had this to say about viral AI images of himself with his “father” Jeffrey Epstein:

“...There’s the old adage about how quickly a lie can spread with comparison to the truth… We also have to work to ensure that we have a city, we have a state, we have a country that actually has a regulatory system when it comes to AI, because frankly what it looks like today is a system that is ill-equipped for the speed and the reach of the technologies in front of us.”

I too see the current situation as a genuine problem. Enough people know how to use AI image generation that citizens in every district could deploy it as propaganda in their own local and state elections. And as a fifth-generation-warfare weapon, it lets state actors use AI agents to research, brainstorm, create, and post, interfering in elections more prolifically than ever. While there is legitimate First Amendment danger in anointing an arbiter of truth to censor facts or claims (see: Hunter Biden’s laptop), the mass swaying of votes by deliberately deceptive, objectively and provably fake images should be prevented if possible.
