I predict we are shortly going to see platforms using generative AI + A/B testing to make “hyperslop”.
Imagine a music service, or a TikTok-like platform with AI-generated shortform videos. The generator gets hooked up to an optimiser which tweaks its input parameters. These could be legible, such as “colour saturation”, “cuteness”, or “content variability”, or entirely opaque weights somewhere. If a tweak is statistically established to increase engagement, it is applied and another A/B test begins.
You could even have specialised optimisers run on various subgroups: “female American teens 16–18” gets its own sub-optimiser, as does every subculture and every little attractor basin you can identify. This could go all the way down to tweaks for each individual user, if content is cheap enough to be personalised.
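To make the mechanism concrete, here is a minimal toy sketch of that loop: a generator parameter (“colour saturation”, borrowed from the example above) gets perturbed, a simulated A/B test compares engagement between control and treatment, and the tweak is kept only if it reaches statistical significance. Everything here is invented for illustration — the engagement model, the parameter, and the numbers are all assumptions, not any real platform’s pipeline.

```python
import math
import random
from statistics import mean, stdev

# Toy "true" engagement response, unknown to the optimiser.
# In this invented world, engagement peaks at saturation = 0.8.
def simulated_engagement(saturation: float, rng: random.Random) -> float:
    base = 1.0 - (saturation - 0.8) ** 2
    return base + rng.gauss(0, 0.1)  # per-user noise

def ab_step(param: float, delta: float, n: int, rng: random.Random) -> float:
    """One A/B test: control at `param`, treatment at `param + delta`.
    Adopt the tweak only if treatment engagement is significantly
    higher (one-sided two-sample z-test, alpha = 0.05)."""
    control = [simulated_engagement(param, rng) for _ in range(n)]
    treatment = [simulated_engagement(param + delta, rng) for _ in range(n)]
    se = math.sqrt(stdev(control) ** 2 / n + stdev(treatment) ** 2 / n)
    z = (mean(treatment) - mean(control)) / se
    return param + delta if z > 1.645 else param

rng = random.Random(0)
saturation = 0.3
for _ in range(200):
    delta = rng.choice([-0.02, 0.02])           # propose a small tweak
    saturation = ab_step(saturation, delta, n=500, rng=rng)

# The parameter climbs toward the engagement optimum until the
# remaining improvements are too small to reach significance.
print(round(saturation, 2))
```

Note that nothing in the loop knows *why* a tweak works — it only needs the engagement delta to clear significance, which is exactly what makes the per-subgroup version cheap to scale: you just run one such loop per audience segment.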
All the prerequisites for this already exist. We’ve already had a taste of it from YouTube thumbnails, which have been A/B gradient-descended for years on the minds of a billion viewers, mostly children, to plaster those inhuman staring open-mouthed faces everywhere. It’s just a matter of time before bulk AI generation gets cheap enough to speed this up thousands of times and apply it to the content itself.
Can someone explain why this comment is so unpopular? Is the reasoning/evidence/character of Michael Tracey flawed? If so, I’d like to know!
I’ve looked into it as well, and his thesis—that the Epstein story is vastly exaggerated—seems entirely reasonable. E.g. the systemic “blackmail” thing that the OP here just takes for granted has pretty much nothing supporting it. Certainly Tracey’s view seems enormously more directionally accurate than the “satanist cannibal cabal” stuff that gets promoted left and right.