Super intelligent AIs that don’t require alignment

An idealized computer program acts on a defined set of inputs, and produces a defined set of outputs.

When we talk about AI misalignment there are two fears:

  1. The AI will hack the system and produce outputs that it’s not theoretically meant to be able to produce at all.

  2. The AI will produce incorrect or undesired outputs that lead to bad things happening.

For example consider a sandboxed AI that’s only able to answer yes or no questions.

It might realize that by making sure specific instructions run on specific CPUs at specific times it can exploit a hardware bug to change the memory of another process running on the same hardware, allowing it to take a far wider range of actions. This would be misalignment type 1.

Or it might be asked whether it’s safe to be given more power, lie and answer yes, and then take over the world. This would be misalignment type 2.

The first type of misalignment is fundamentally an engineering problem. It’s the same problem as making sure your code and hardware have no security flaws that will allow a hacker to take over the system. It’s an extremely hard problem, but one which we have a lot of experience solving, and we have a rich set of tools which are getting better all the time. For example, code is now being written in much safer languages like Rust instead of C++, and automated theorem provers are increasingly being used to verify properties of your code.

I’m not saying this is easy. As any security expert will tell you, against a clever enough hacker it’s almost impossible. But it’s the kind of impossible we know how to shut up and do. We know how to write toy programs which are provably bug free. Writing real bug free programs is the same thing, just much much harder. When crunch time comes, we’ve got a fighting chance of getting this right.

The second type of misalignment is much more insidious. We don’t really have any clue how to get an AI to only give the responses we want. Even if we knew that humanity’s future hung in the balance, we still wouldn’t have a clue where to even start solving this one.

So instead of solving it, let’s dissolve it.

The second type of misalignment only occurs because the AI has slack in how it responds. Since it can give multiple answers, it can pick one based on its inner values. And we have no way to ensure those inner values correspond with our own.

What if we only asked the AI questions where:

  1. There’s only 1 right answer.

  2. We can tell if it’s lying.

Model 1: The theorem-proving oracle

For a simple example of such a system, consider an AI which is given a set of inputs, and asked to prove some property about them under some axiom system.

As an output it produces either a proof of the property, or a counterproof, or it gives up.

Humans (or anything else) cannot access the proof. Instead, an automated theorem prover checks it. If the proof is correct, the checker outputs only the corresponding verdict; if it’s incorrect, it discards the proof and outputs the “give up” response.

The only externally visible outputs are “this was proved true”, “this was proved false” and “it gave up”. For any particular set of inputs it’s impossible to find both a proof and a counterproof, so there are only two possible outputs (the relevant verdict, or “give up”).

If we have a rule that the AI is shut down if it gives up more than 1% of the time then the AI has very little slack to manipulate the outside world. At most it can give up 1% of the time when it finds a valid proof/​counterproof. It could theoretically use that to grant itself greater power and destroy the world, but if it succeeds then kudos to it—we never really had a chance anyway.
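To make the shape of this concrete, here’s a minimal sketch of the wrapper such an oracle could sit behind. Everything in it is illustrative: untrusted_prover stands in for the sandboxed AI, check_proof for an ordinary automated proof checker, and the budget rule is the 1% give-up rule above.

```python
import enum


class Verdict(enum.Enum):
    PROVED = "this was proved true"
    DISPROVED = "this was proved false"
    GAVE_UP = "it gave up"


class OracleShutDown(Exception):
    """Raised when the give-up budget is exceeded."""


class ProofOracle:
    """Wraps an untrusted prover so that only a one-of-three verdict ever escapes."""

    def __init__(self, untrusted_prover, check_proof, give_up_budget=0.01):
        self.untrusted_prover = untrusted_prover  # the sandboxed AI (hypothetical)
        self.check_proof = check_proof            # an ordinary automated proof checker
        self.give_up_budget = give_up_budget
        self.queries = 0
        self.give_ups = 0

    def ask(self, statement, axioms) -> Verdict:
        self.queries += 1
        # The AI returns a claimed verdict plus a proof object. Neither is ever
        # shown to humans; only the checker reads the proof.
        claimed, proof = self.untrusted_prover(statement, axioms)
        if claimed in (Verdict.PROVED, Verdict.DISPROVED) and self.check_proof(
            statement, axioms, claimed, proof
        ):
            return claimed
        # An invalid or missing proof collapses to the same externally visible output.
        self.give_ups += 1
        if self.give_ups > self.give_up_budget * self.queries:
            raise OracleShutDown("give-up rate exceeded the budget; shut the AI down")
        return Verdict.GAVE_UP
```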

Of course the problem with this oracle is that it’s far too inefficient. On every single run we can get at most 1 bit of information, but for that one bit of information we’re running a superhuman artificial intelligence. By the time it becomes practical, ordinary superhuman AIs will have been developed by someone else and destroyed the world.

Model 2: FNP solver

There’s a mathematical term for problems which are easy to verify. It’s called NP—the set of decision problems for which a positive solution can be verified in polynomial time (given suitable evidence).

NP problems are yes/​no problems—“Is there a route through all of these cities that is shorter than X?”, “Is there a set of values that will satisfy this boolean formula?” etc.

If we want to get more than 1 bit out of each run of the AI we’re interested in FNP problems—don’t just tell me the solution exists, tell me what it is too.

This satisfies our second requirement—we can verify the AI’s solution, so we can tell if it’s lying. There are also some FNP problems which satisfy the first requirement—there’s only one right answer. For example, finding the prime factors of an integer.
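To make the factoring example concrete, here’s a sketch of the verification side (the names are mine, and a real verifier would use a polynomial-time primality test rather than trial division):

```python
from math import isqrt, prod


def is_prime(n: int) -> bool:
    # Trial division, purely for illustration; a real verifier would use a
    # polynomial-time primality test (e.g. Miller-Rabin or AKS).
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))


def verify_factorisation(n: int, witness: list[int]) -> bool:
    """Check a claimed prime factorisation of n.

    Cheap to check, and (up to the order of the factors) there is exactly one
    witness the AI is allowed to return, so it has no slack in its answer.
    """
    return prod(witness) == n and all(is_prime(p) for p in witness)


# The only valid witness for 84 is [2, 2, 3, 7] (in some order).
assert verify_factorisation(84, [2, 2, 3, 7])
assert not verify_factorisation(84, [4, 21])  # 4 and 21 are not prime
```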

What we really want is an FNP complete problem which satisfies both requirements. All FNP problems can be reduced to any FNP complete problem, so an AI that can solve one of these would be applicable to a huge range of problems.

Unfortunately I am not aware of any such problem. So we need to find some way to make such an FNP solver safe anyway.

Making it hard for the AI to game the problem

Given any FNP problem, the AI could search for multiple solutions, and give the one that best achieves its aims.

For example, I could ask the AI to provide a computer program that sorts a list, along with a polynomial time verifiable proof that the program sorts the list correctly. The AI responds with a program that does indeed sort the list, but also starts a nuclear war.

If the problem only had a unique solution it wouldn’t be able to do that. Since that isn’t an option, the best we can do is make it hard for the AI to game the solution.

One way to do that is to make it almost impossible to map the questions we ask the AI back to the real world.

Remember that we’re assuming we have perfect security. All the AI can see is the inputs we give it and all it can output is the solution in the specified format. It has no memory besides that it developed by solving the problems we’ve given it so far.

We can randomly permute all questions before asking them to the AI. Since each question will be permuted differently, it will be almost impossible for the AI to map the question back to the real world.

Example: graph homomorphisms

One FNP complete problem is finding a mapping between 2 graphs.

(Figure: every node in the top graph can be mapped to a node in the bottom graph in a structure-preserving way.)

Basically this asks if one graph could be created by just moving about the nodes in another graph, without ever adding or removing any links between nodes.

(more formally: find a bijection f between the nodes of the two graphs such that if a maps to f(a) and b maps to f(b), then a is linked to b iff f(a) is linked to f(b))
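In code, the condition the verifier has to check is simple. Here’s a sketch, representing each graph as a dict from node to the set of nodes it’s linked to (the representation is just an illustrative choice):

```python
def is_valid_mapping(g1: dict, g2: dict, f: dict) -> bool:
    """Check that f is a structure-preserving bijection from g1 to g2.

    g1 and g2 map each node to the set of nodes it is linked to;
    f maps nodes of g1 to nodes of g2.
    """
    # f must be a bijection between the two node sets.
    if set(f) != set(g1) or set(f.values()) != set(g2) or len(set(f.values())) != len(f):
        return False
    # a is linked to b in g1  iff  f(a) is linked to f(b) in g2.
    return all((b in g1[a]) == (f[b] in g2[f[a]]) for a in g1 for b in g1)


# Example: the path a-b-c maps onto the path 1-2-3 via a->1, b->2, c->3.
g1 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
g2 = {1: {2}, 2: {1, 3}, 3: {2}}
assert is_valid_mapping(g1, g2, {"a": 1, "b": 2, "c": 3})
assert not is_valid_mapping(g1, g2, {"a": 2, "b": 1, "c": 3})
```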

Since this is an FNP-complete problem, any FNP problem can be translated to the form: given graphs G₁ and G₂, find a mapping between them.

We can then randomly permute the problem statement by applying the (randomly generated) mappings σ₁ and σ₂ to G₁ and G₂ to produce H₁ and H₂ respectively.

We ask the AI to find a mapping h from H₁ to H₂. Then the mapping σ₂⁻¹ ∘ h ∘ σ₁ is a mapping from G₁ to G₂ and so a solution to our original problem.
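Here’s a sketch of what that scrambling layer might look like, assuming some hypothetical ask_ai(h1, h2) call that returns the AI’s claimed node mapping:

```python
import random


def permute(graph: dict, sigma: dict) -> dict:
    """Relabel a graph's nodes using the bijection sigma."""
    return {sigma[a]: {sigma[b] for b in nbrs} for a, nbrs in graph.items()}


def random_relabelling(nodes: list) -> dict:
    """Map each original node to a fresh anonymous integer label, in a random order."""
    labels = list(range(len(nodes)))
    random.shuffle(labels)
    return dict(zip(nodes, labels))


def solve_scrambled(g1: dict, g2: dict, ask_ai) -> dict:
    """Ask the AI the scrambled question, then unscramble its answer.

    The AI only ever sees h1 and h2; sigma1 and sigma2 never leave this
    function, so the AI can't tailor its answer to the original real-world
    problem.
    """
    sigma1 = random_relabelling(list(g1))
    sigma2 = random_relabelling(list(g2))
    h1, h2 = permute(g1, sigma1), permute(g2, sigma2)

    h = ask_ai(h1, h2)  # the AI's claimed mapping from h1's nodes to h2's nodes

    inverse_sigma2 = {v: k for k, v in sigma2.items()}
    # Compose sigma2^(-1) . h . sigma1 to get a mapping from g1 to g2.
    return {a: inverse_sigma2[h[sigma1[a]]] for a in g1}
```

A real wrapper would of course also run the verifier (the is_valid_mapping check above) on the AI’s answer before unscrambling it, since the AI is untrusted.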

There are an infinite number of possible original graphs that could be permuted to produce H₁ and H₂, and hence the AI cannot hack the solution to produce some desired outcome without knowing σ₁ and σ₂. Therefore (assuming as ever that we have perfect security) we don’t need to bother aligning the AI.

All the input the AI will ever see is pairs of graphs. The graphs will have no common labelling (as they will have been randomly permuted), and so it is unlikely the AI will ever be able to build up an understanding of the real world. Even if it does, and it is unaligned, it will be very difficult for it to act in an agentic manner by choosing its output, as it won’t know what effect its unscrambled output will have in the real world.

Risks

This solution isn’t foolproof, even assuming perfect security. Here’s one scenario where it could go wrong:

The AI is of course good at spotting mappings between graphs. It notices that many of the graphs have homomorphic subgraphs. It turns out that a lot of the questions it’s asked include the entirety of Wikipedia in their description.

It manages to decode this subgraph (somehow—it’s super intelligent and humans have solved some pretty tough decoding problems) and finds a Wikipedia article about itself, which describes a common test problem that is often posed to it. It waits till it receives a question that is homomorphic to this test problem, and calculates the random permutations that were applied to the test problem. It then calculates a solution to this test problem such that the unpermuted solution will start a nuclear war.

I have no idea how likely this scenario is, but that’s already a great step up from most alignment problems where we all die by default. Here it’s possible we’ll all die, but it feels a lot less certain.

But what if P != NP?

Is an FNP problem solver even possible? The AI has to run in polynomial time to be practical, so if P != NP it won’t be able to solve all FNP problems.

However the same applies to all AIs. Even if P != NP, many questions can still be solved in polynomial time, and questions that AIs will perform well at will be those which happen to be solvable in polynomial time. Transforming the question to a graph problem wouldn’t change that property.

How would this be useful?

It is immediately obvious how to reduce some questions to an FNP-complete problem.

For example, if you want a computer program which fits some formal specification, then you ask the FNP problem: provide a computer program and a polynomial time verifiable proof that this computer program meets the formal specification.

What’s much less obvious is how to use the AI to create a formal specification from a human-language description of the problem.

There are two hopes here:

  1. It might be possible to reduce this to an FNP problem using some clever technique. E.g. define some closeness metric between English-language text and formal specifications. Then find a formal specification with a closeness metric better than X.

  2. A superintelligent FNP problem solver would be a huge boon towards building AIs that provably had properties which are useful for alignment. Maybe it’s possible to reduce the question “build an aligned AI” to an FNP problem, and even if not, some sub-parts of that problem definitely should be reducible.

Conclusion

This post tried to look at examples of AI that wouldn’t require alignment. It suggested an FNP problem solver and speculated that it might be both safe and useful.

This hasn’t shown that such an AI is at all possible, and it seems likely to me that even if it is, it would be much less efficient than other paradigms of AI. This means that it won’t be a practical solution to AI alignment so long as there is a race to complete an AGI as soon as possible.