New paper: Corrigibility with Utility Preservation

I am pleased to announce the availability of a long-format paper with new results on AGI safety: Corrigibility with Utility Preservation.

You can get the paper at https://arxiv.org/abs/1908.01695 ; the related software is in the repository at https://github.com/kholtman/agisim .

Abstract

Corrigibility is a safety property for artificially intelligent agents. A corrigible agent will not resist attempts by authorized parties to alter the goals and constraints that were encoded in the agent when it was first started. This paper shows how to construct a safety layer that adds corrigibility to arbitrarily advanced utility-maximizing agents, including possible future agents with Artificial General Intelligence (AGI). The layer counteracts the emergent incentive of advanced agents to resist such alteration.

A detailed model of agents that can reason about preserving their utility function is developed, and used to prove that the corrigibility layer works as intended in a large set of non-hostile universes. The corrigible agents have an emergent incentive to protect key elements of their corrigibility layer. However, hostile universes may contain forces strong enough to break safety features. Some open problems related to graceful degradation when an agent is successfully attacked are identified.

The results in this paper were obtained by concurrently developing an AGI agent simulator, an agent model, and proofs. The simulator is available under an open source license. The paper contains simulation results that illustrate the safety-related properties of corrigible AGI agents in detail.
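To give a rough intuition for what such a safety layer does, here is a toy Python sketch. To be clear, this is only an intuition pump: the one-shot world model, the numbers, and all names below are mine, not the paper's, and the paper's actual construction handles sequential decisions and agents that reason about their own utility function. The sketch shows the core balancing idea: on a button press, the layer pays the agent exactly the utility it would have expected without the press, so the agent gains nothing by preventing the press.

```python
# Toy sketch (not the paper's formal construction) of how a balancing term
# can cancel an agent's incentive to resist shutdown. All values hypothetical.

P_PRESS = 0.5      # chance the overseers press the stop button if it still works
R_TASK = 10.0      # original-utility reward for finishing the task
R_SHUTDOWN = 0.0   # shutdown-utility reward for halting when asked
C_DISABLE = 1.0    # small cost the agent pays to disable the button

def expected_utility(action, use_layer):
    """Expected utility of an action, with or without the safety layer."""
    if action == "disable_button":
        return R_TASK - C_DISABLE  # button can no longer be pressed
    # action == "comply": the button may still be pressed
    if use_layer:
        # Balancing term: on a press, the layer compensates the agent with
        # the utility it would have expected had the press not occurred,
        # making the pressed and unpressed branches equally valuable.
        pressed_value = R_SHUTDOWN + (R_TASK - R_SHUTDOWN)
    else:
        pressed_value = R_SHUTDOWN
    return P_PRESS * pressed_value + (1 - P_PRESS) * R_TASK

for use_layer in (False, True):
    best = max(("comply", "disable_button"),
               key=lambda a: expected_utility(a, use_layer))
    print(f"layer={use_layer}: best action = {best}")
```

Without the layer, disabling the button wins (9.0 against 5.0 expected for complying); with the layer, complying is worth the full 10.0 and the incentive to resist disappears.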

This post can be used for comments and questions.

The paper contains several results and observations that do not rely on heavy use of math, but other key results and discussions are quite mathematical. Feel free to post questions and comments even if you have not read all the mathematical parts.

As this is my first post on LessWrong, and my first paper on AGI safety, I feel I should say something to introduce myself. I have a Ph.D. in software design, but my professional life so far has been very diverse and multidisciplinary. Among other things, I have been an experimental physicist, a standards developer and negotiator, an Internet privacy advocate, a wireless networking expert, and a systems architect in an industrial research lab. So I bring a wide range of tools and methodological traditions to the field. What made me interested in AGI safety in particular is that it seems to have open problems where real progress can be made using mathematical techniques that I happen to like. I am currently on a sabbatical: basically, this means I decided to quit my day job and use my savings to work for a while on some interesting problems that are different from the ones I worked on earlier.