Superintelligence 13: Capability control methods
This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI’s reading guide.
Welcome. This week we discuss the thirteenth section in the reading guide: capability control methods. This corresponds to the start of chapter nine.
This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Two agency problems” and “Capability control methods” from Chapter 9
If the default outcome is doom, how can we avoid it? (p127)
We can divide this ‘control problem’ into two parts:
The first principal-agent problem: the well-known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called ‘the principal–agent problem’)
The second principal-agent problem: the emerging problem of a developer wanting their AI to fulfill their wishes
How can we solve the second problem? We can’t rely on behavioral observation (as seen in week 11). Two other options are ‘capability control methods’ and ‘motivation selection methods’. We see the former this week, and the latter next week.
Capability control methods: avoiding bad outcomes through limiting what an AI can do. (p129)
Some capability control methods:
Boxing: minimize interaction between the AI and the outside world. Note that the AI must interact with the world to be useful, and that it is hard to eliminate small interactions. (p129)
Incentive methods: set up the AI’s environment such that it is in the AI’s interest to cooperate. e.g. a social environment with punishment or social repercussions often achieves this for contemporary agents. One could also design a reward system, perhaps with cryptographic rewards (so that the AI could not wirehead) or heavily discounted rewards (so that long-term plans are not worth the short-term risk of detection). (p131)
Anthropic capture: an AI thinks it might be in a simulation, and so tries to behave in ways that would be rewarded by the simulators (box 8; p134)
Stunting: limit the AI’s capabilities. This may be hard to do to a degree that avoids danger and is still useful. An option here is to limit the AI’s information. However, a strong AI may infer much even from apparently limited access to information. (p135)
Tripwires: test the system without its knowledge, and shut it down if it crosses some boundary. This might be combined with ‘honey pots’ designed to attract undesirable AIs into taking an action that would reveal them. Tripwires could test behavior, ability, or content. (p137)
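The heavily discounted rewards idea from the incentive methods above can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only for illustration): compare a modest reward for cooperating now against a much larger reward from a long-horizon defection plan, where the future reward is discounted per step and each step carries some chance that the plan is detected and the reward forfeited.

```python
# Toy illustration (hypothetical numbers): why steep temporal discounting,
# combined with a per-step risk of detection, can make long-term defection
# plans unattractive relative to cooperating for a small immediate reward.

def discounted_value(reward: float, delay: int, gamma: float, p_detect: float) -> float:
    """Expected present value of a reward `delay` steps away, discounted by
    `gamma` per step, when each step carries probability `p_detect` that the
    plan is detected and the reward is forfeited."""
    survive = (1.0 - p_detect) ** delay   # chance the plan goes undetected
    return reward * (gamma ** delay) * survive

# Cooperating: a modest reward of 1, received immediately.
cooperate = discounted_value(1.0, delay=0, gamma=0.9, p_detect=0.05)

# Defecting: a reward 1000 times larger, but 100 steps away, with steep
# discounting (gamma = 0.9) and a 5% per-step chance of detection.
defect = discounted_value(1000.0, delay=100, gamma=0.9, p_detect=0.05)

print(cooperate, defect)
```

With gamma = 0.9 over 100 steps, the discount factor alone is roughly 2.7e-5, so even a 1000x larger reward ends up with a present value far below the immediate reward of 1. The sketch of course assumes the designers can actually enforce both the discount rate and the detection risk, which is exactly the hard part.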
Brian Clegg reviews the book mostly favorably, but isn’t convinced that controlling an AI by simply turning it off should be so hard:
I also think a couple of the fundamentals aren’t covered well enough, but pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought put into the basics of ways you can pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.
Kevin Kelly also apparently doubts that AI will substantially impede efforts to modify it:
...We’ll reprogram the AIs if we are not satisfied with their performance...
...This is an engineering problem. So far as I can tell, AIs have not yet made a decision that its human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market, does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world is watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AI will decide...
This may be related to his view that AI is unlikely to modify itself (from further down the same page):
3. Reprogramming themselves, on their own, is the least likely of many scenarios.
The great fear pumped up by some, though, is that as AI gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open source mode. Or it could decide that it wanted to merge with human will power. Why not? In the only example we have of an introspective self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.
1. What do you do with a bad AI once it is under your control?
Note that capability control doesn’t necessarily solve much: boxing, stunting and tripwires seem to just stall a superintelligence rather than provide means to safely use one to its full capacity. This leaves the controlled AI liable to be overtaken by some other unconstrained AI as soon as someone else isn’t so careful. In this way, capability control methods seem much like slowing down AI research: helpful in the short term while we find better solutions, but not in themselves a solution to the problem.
However, this might be too pessimistic. An AI whose capabilities are under control might either be almost as useful as an uncontrolled AI that shares your goals (if interacted with in the right way), or at least be helpful in getting to a more stable situation.
Paul Christiano outlines a scheme for safely using an unfriendly AI to solve some kinds of problems. We have both blogged on general methods for getting useful work from adversarial agents, which is related.
2. Cryptographic boxing
Paul Christiano describes a way to stop an AI interacting with the environment using a cryptographic box.
3. Philosophical Disquisitions
Danaher again summarizes the chapter well. Read it if you want a different description of any of the ideas, or to refresh your memory. He also provides a table of the methods presented in this chapter.
4. Some relevant fiction
That Alien Message by Eliezer Yudkowsky
5. Control through social integration
Robin Hanson argues that it matters more that a population of AIs is integrated into our social institutions, and that they keep the peace among themselves through the same institutions by which we keep the peace among ourselves, than whether they have the right values. He thinks this is why you trust your neighbors: not because you are confident that they have the same values as you, but because you share peace-keeping institutions. He has several followup posts.
6. More miscellaneous writings on these topics
LessWrong wiki on AI boxing. Armstrong et al on controlling and using an oracle AI. Roman Yampolskiy on ‘leakproofing’ the singularity. I have not necessarily read these.
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
Choose any control method and work out the details better. For instance:
Could one construct a cryptographic box for an untrusted autonomous system?
Investigate steep temporal discounting as an incentive control method for an untrusted AGI.
Are there other capability control methods we could add to the list?
Devise uses for a malicious but constrained AI.
How much pressure is there likely to be to develop AI which is not controlled?
If existing AI methods had unexpected progress and were heading for human-level soon, what precautions should we take now?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about ‘motivation selection methods’. To prepare, read “Motivation selection methods” and “Synopsis” from Chapter 9. The discussion will go live at 6pm Pacific time next Monday 15th December. Sign up to be notified here.