It’s not exactly the same, of course, but Yudkowsky has been predicting for a very long time that ASIs would be able to effectively hack people’s minds.
This idea predates Yudkowsky by quite a bit, actually!
For the idea of a folie à deux between a human and an AI, there’s always Alfred Bester’s classic “Fondly Fahrenheit” (1954, content note: murder), which opens with one of the best lines in science fiction:
He doesn’t know which of us I am these days, but they know one truth.
For the more general type of AI-powered persuasion, Vernor Vinge and Charles Stross wrote early stories in which a superintelligence “rewrote” human minds. Here’s Vinge in A Fire Upon the Deep (1992!). A character explains why smart people don’t mess with superintelligence:
“So they set up a base in the Transcend at this lost archive—if that’s what it was. They began implementing the schemes they found. You can be sure they spent most of their time watching it for signs of deception. No doubt the recipe was a series of more or less intelligible steps with a clear takeoff point. The early stages would involve computers and programs more effective than anything in the Beyond—but apparently well-behaved.”
“… Yeah. Even in the Slowness, a big program can be full of surprises.”
Ravna nodded. “And some of these would be near or beyond human complexity. Of course, the Straumers would know this and try to isolate their creations. But given a malignant and clever design … it should be no surprise if the devices leaked onto the lab’s local net and distorted the information there. From then on, the Straumers wouldn’t have a chance. The most cautious staffers would be framed as incompetent. Phantom threats would be detected, emergency responses demanded. More sophisticated devices would be built, and with fewer safeguards. Conceivably, the humans were killed or rewritten before the Perversion even achieved transsapience.”
Then we have Charles Stross, in “Antibodies” (2000). Here, police officers are cognitively subverted by a nascent superintelligence (which has just proven that all NP problems are in P, and picked up the expected superpowers):
Houndstooth Man looked at me: orange light from his HUD stained his right eyeball with a basilisk glare and I knew in my gut that these guys weren’t cops anymore, they were cancer cells about to metastasize.
The mechanism here is an optimized visual attack designed to efficiently subvert the brain:
here we were trapped in the basement of a police station owned by zombies working for a newborn AI, which was playing cheesy psychedelic videos to us in an attempt to perform a buffer-overflow attack on our limbic systems; the end of this world was a matter of hours away and—
These days, I regularly feel like I’ve encountered those AI-compromised “zombies.”
Vernor Vinge revisits the idea of superhuman persuasion in Rainbows End (2006):
YGBM. That was a bit of science-fiction jargon from the turn of the century: You-Gotta-Believe-Me. That is, mind control. Weak, social forms of YGBM drove all human history. For more than a hundred years, the goal of irresistible persuasion had been a topic of academic study. For thirty years it had been a credible technological goal. And for ten, some version of it had been feasible in well-controlled laboratory settings.
Here, the fear is that some actor, whether a terrorist group or a rogue AI, has developed superhuman persuasive technology.
It’s worth noting that these ideas substantially predate Yudkowsky’s warnings against superintelligence. In particular, the superintelligence in A Fire Upon the Deep (1992) remains, almost literally, the threat model behind If Anyone Builds It, Everyone Dies. This isn’t to invalidate Yudkowsky’s warnings: I think Vinge was right that anyone foolish enough to build superhuman minds risks losing control rapidly and having a very bad day, for much the same reason that adults frequently outsmart toddlers.
But some of us have been worried about this stuff for almost a quarter of a century now. Around 2007 or so, I expected things to start getting scary around 2025, mostly by extrapolating Moore’s Law. By 2017, I breathed a sigh of relief: we’d made progress in AI, yes, but we didn’t seem to be on track for working machine intelligence any time soon. Since then, we’ve made up the lost ground at breakneck speed.
Yudkowsky worked hard to warn people. But the potential threat of superintelligence was taken seriously by people before him.