Extreme Security

What makes “password” a bad password?

You might say that it’s because everybody else is already using it, and maybe you’d be correct that everybody in the world deciding to no longer use the password “password” could eventually make it acceptable again.

But consider this: if everyone who was using that password for something changed it tomorrow, and credibly announced as much, it would still be an English dictionary word. Any cracker who lazily included the top N words of an English dictionary in a cracking list would still be liable to break it, not because people actually use every English dictionary word, but simply because trying the top N English words is a commonly deployed tactic.

You could go one step further and say “password” is also a bad password because it’s in that broader set of English dictionary words, but really, you should just drop the abstraction altogether. “Password” is a bad password because hackers are likely to try it. Hackers are likely to try it because they’re trying to break your security, yes, but also because of a bunch of seemingly irrelevant details like what online password-cracking guides tell people to try, and the standard features of the hashcat and John the Ripper programs. If, due to some idiosyncratic psychological quirk, hackers were guaranteed never to think of “password” as a possible password and never managed to insert “password” into their cracking lists, it would be the Best Password. A series of 17 null bytes (which are used to signal the end of C strings) remains to this day an excellent choice, when the website accepts it, and simple derivations of that strategy will probably go on being an excellent choice despite my mentioning them in this LW post.
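To make that concrete, here is a minimal sketch of the kind of wordlist attack those tools automate. The filename wordlist.txt and the unsalted SHA-256 target are hypothetical stand-ins of mine; real crackers run the same loop against real hash formats, just billions of times faster:

```python
import hashlib

# Hypothetical target: a leaked, unsalted SHA-256 hash of someone's password.
target_hash = hashlib.sha256(b"password").hexdigest()

# "wordlist.txt" stands in for whatever top-N wordlist a cracker carries around.
with open("wordlist.txt", encoding="utf-8", errors="ignore") as wordlist:
    for line in wordlist:
        candidate = line.strip()
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            print(f"cracked: {candidate!r}")
            break
```

If “password” sits anywhere in that file, it falls on the first pass; if it somehow never appears in anyone’s list, the loop never finds it, no matter how weak the word looks.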

Generating a password with a high amount of “entropy” is just a way of ensuring that password crackers are very unlikely to break it without your having to violate Shannon’s maxim. Nobody wants to have to violate Shannon’s maxim, because it means they can’t talk about their cool tricks publicly, and their systems’ design itself becomes a confidential secret in need of protection.
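For a back-of-the-envelope illustration (the alphabet and length below are arbitrary choices of mine, not a recommendation): a password drawn uniformly at random from an alphabet of A symbols, L symbols long, has L · log2(A) bits of entropy, and that figure doesn’t shrink even if the attacker knows exactly how you generated it.

```python
import math
import secrets
import string

# Uniform random choice from an alphabet of size A, repeated L times,
# gives L * log2(A) bits of entropy -- regardless of what the attacker
# knows about the generation scheme.
alphabet = string.ascii_letters + string.digits   # A = 62
length = 16                                       # L = 16
print(f"{length * math.log2(len(alphabet)):.1f} bits")  # ~95.3 bits

password = "".join(secrets.choice(alphabet) for _ in range(length))
```

That’s the whole trick: the secrecy lives in the random draw, not in the method, so the method can be published freely.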

Unfortunately, “not violating Shannon’s maxim” is not always a luxury you have. There are legitimate and recognized circumstances where modeling the particular opponents you’re liable to face becomes critical for adequate performance. High-frequency traders who sometimes choose to ignore exploitable bugs for an extra millisecond saved in latency. Corporations with “assume breach” network stances like the kinds CrowdStrike enables. Racketeers whose power comes from the number of people they can trust and who compete with other racketeers. Secret service organizations. There are nano-industries that I can’t even name because it’d be an outfo-hazard to explain how they’re vulnerable. As the black hole is to physicists, these industrial problems are to security engineers. Where conditions of strong performance requirements, pricey defenses, and predictable attacker behavior coexist, you gotta adopt a different, more complete model, even if it means you can’t make a DEFCON presentation.

My clever-but-not-actually-useful name for these risk mitigation strategies is Extreme Security, because it’s security in extremis—not because these systems are particularly secure, or even because the problems are particularly hard. They’re what happens when you’re forced to use the minimum security necessary instead of finding a “merely” sufficient configuration for a set of resources.

Whether it’s an extreme security situation that forced your hand, or simple laziness, there’s nothing about modeling your opponent that makes your security solution “wrong” or “incomplete”. The purpose of security is to (avoid getting) cut (by) the enemy. For most of human history, before computers, we just thought of threat modeling as a natural part of what security was—you don’t spend more money on walls if Igor is a dumbass and you could be spending it on internal political battles instead. The opportunity cost of playing defense or the asymmetry of most “security problems” meant you had to take all the infoz into account to defeat bandits. It wasn’t until the U.S. built an armed forces the size of every other military, until computer programmers’ adversaries became “every hacker in the world that might target their systems”, and until police and intelligence agencies began to put cameras armed with facial & license plate recognition on every street corner in New York & London, that we started to think of security situations you can’t talk about in a journal as “insufficient”. But of course, if you leave your door unlocked and no one robs you, by chance or something else, you’ve won.


Now, I see alignment conversations like this everywhere:

Alice: Suppose we had an interpretability mechanism that showed us what the AI was optimizing...

Bob: Ah, but what if the AI has modified itself in anticipation of such oversight, and will, unknowingly to itself, change its optimization once surveillance is lifted or once we’re no longer able to react?

Charlie: Suppose we have two AIs, each playing a zero sum debate game...

Bob: Ah, but what if they spontaneously learn Updateless Decision Theory, read each other’s code, and start making compacts with each other that harm humans?

Denice: Suppose we extensively catalog SOTA AI capabilities and use only lower-capability helper AIs to solve the harder alignment problems…

Bob: Ah, but what if we get an AI that hides its full power prior to us deploying it for safety research, or using it for some important social function, and then performs a treacherous turn?

When someone asks “what’s the best movie”, the pedantically correct answer is to back up, explain that the question is underspecified, and then ask if they mean “what movie maximizes satisfaction for me” or “what movie produces the highest average satisfaction for a particular group of people”. And when someone asks “what’s the best possible security configuration”, you could interpret that as asking “what’s the security configuration that works against all adversaries”, but really they probably mean “what security configuration is most likely to work against the particular foe I’m liable to face”, and oftentimes that means you should ask them about their threat model.

Instead of asking clarifying questions, these conversations (and for the record I stand a good chance of involuntarily strawmanning the people who would identify as Bob) often feel to me like they take for granted that we’re fighting the mind shaped exactly the way it needs to be to kill us. Instead of treating AGI like an engineering problem, the people I see talking about alignment treat it like a security problem, where the true adversary is not a particularly defined superintelligence resulting from a particularly defined ML training process, but an imaginary Evil Supervillain designing this thing specifically to pass our checks and balances.

Alignment is still an extremely hard problem, but we shouldn’t pretend it’s harder than it is by acting like we don’t get to choose our dragon. A strong claim that an oversight mechanism is going to fail, as opposed to merely being extraordinarily risky, because the AGI will be configured a certain way, is a claim that needs substantiation. If the way we get this AGI is through some sort of complicated, layered reinforcement learning paradigm, where a wide variety of possible agents with different psychologies will be given the opportunity to influence their own or each others’ training processes, then maybe it really is highly likely that we’ll land on whichever one has the foresight and inclination to pull some complicated hijinks. But if the way we get AGI is something like the way we scale up GPT-style transformers, presumably the answer to why we didn’t get an oracle that was hiding its power level is “that’s not how gradient descent works”.

I realize there is good reason to be skeptical of such arguments: they’re fragile. I don’t want to use this kind of logic, and you don’t want to use this kind of logic, when trying to prevent something from ending the world. It relies on assumptions about when we will get AGI and how, and nobody today can confidently know the answers to those questions. We have strong theoretical reasons to believe most of the incidental ways you can scam computer hackers wouldn’t work on an unmuddled superintelligent AI capable of modifying its own source code and re-deriving arithmetic from the Peano axioms. And we should do ourselves a favor and try not to risk the world on these intuitions.

Yet these schemes I mentioned are currently pretty much the most sound proposals we have, and they’re fragile mostly when we unwittingly find ourselves in possession of particular minds. Determining whether or not we are likely to actually be overseeing those minds is a problem worth exploring! And conditioning such criticisms on plausible assumptions about the set of AIs that might actually come out of DeepMind is both an important part of understanding when they fail, and when they might succeed anyways!