The Strangest Thing An AI Could Tell You

Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosia is one of my favorite examples of this: people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they’re asked why they can’t move their arms.

A truly wonderful form of brain damage—it disables your ability to notice or accept the brain damage. If you’re told outright that your arm is paralyzed, you’ll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized:

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? The patient “turned her head and searched in a bemused way over her left shoulder”.

I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity—for example, the Capgras delusion, in which people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.

And it really makes you wonder...

...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyzed? Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it—as ridiculous as “It’s my daughter’s arm”—only there’s no sane doctor watching to pursue the argument any further. (Would we all come up with the same excuse?)

If the “absolute denial macro” is that simple, and invoked that easily...

Now, suppose you built an AI. You wrote the source code yourself, and so far as you can tell by inspecting the AI’s thought processes, it has no equivalent of the “absolute denial macro”—there’s no point damage that could inflict on it the equivalent of anosognosia. It has redundant differently-architected systems, defending in depth against cognitive errors. If one system makes a mistake, two others will catch it. The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics. Inspecting the AI’s thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth. And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
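As a quick aside, the calibration record described above is internally consistent: a perfectly calibrated predictor who says "99% sure" should be wrong about 1 time in 100, so roughly two errors in a couple of hundred predictions is exactly what good calibration implies. A minimal sketch in Python (assuming, for concreteness, exactly 200 predictions, a number the text only gives approximately):

```python
# Sanity-check the calibration claim: a well-calibrated 99%-confident
# predictor errs ~1% of the time, so ~2 errors in ~200 predictions
# is the expected record, not a suspicious one.
from math import comb

n = 200        # "a couple of hundred occasions" (assumed to be exactly 200)
p_err = 0.01   # 99% confidence -> 1% error rate if perfectly calibrated

expected_errors = n * p_err  # expected number of errors

# Binomial probability that a well-calibrated predictor makes
# at most 2 errors in n trials -- i.e. how unsurprising the record is.
prob_at_most_2 = sum(
    comb(n, k) * p_err**k * (1 - p_err)**(n - k) for k in range(3)
)

print(expected_errors)          # 2.0
print(round(prob_at_most_2, 2)) # ~0.68
```

The point is only that the observed record (two errors) sits right at the expectation, so nothing in the AI's track record gives you grounds to distrust its stated confidence.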

Arguably, you now have far better reason to trust what the AI says to you than to trust your own thoughts.

And now the AI tells you that it’s 99.9% sure—having seen it with its own cameras, and confirmed from a hundred other sources—even though (it thinks) the human brain is built to invoke the absolute denial macro on it—that...

...what?

What’s the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?

(Some of my own answers appear in the comments.)