AGI Chaining

Last edit: 22 Oct 2012 10:09 UTC by Kaj_Sotala

Chaining God is Stuart Armstrong’s term for his proposed method of maintaining control over a superhuman AGI. It involves a chain of AGIs, each more advanced than the last. The idea is that even though humans might not be able to understand the most sophisticated AGI well enough to trust it, they can understand and trust the first AGI in the chain, which will in turn verify the trustworthiness of the next AGI, and so on up the chain.
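The chain structure can be illustrated with a toy sketch. Everything below is hypothetical and not from Armstrong's proposal: the `Agent` class, the numeric "capability" levels, and the rule that a supervisor can only vouch for a candidate within a bounded capability gap are all simplifying assumptions chosen to make the intuition concrete.

```python
# Toy sketch of the chaining idea (illustrative only; all names and the
# capability-gap criterion are assumptions, not part of Armstrong's proposal).
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    capability: int  # abstract "sophistication" level


def verify(supervisor: Agent, candidate: Agent, max_gap: int = 1) -> bool:
    """A supervisor can only vouch for a candidate it can still understand,
    modelled here as a bounded capability gap."""
    return candidate.capability - supervisor.capability <= max_gap


def build_chain(levels: int) -> list[Agent]:
    """Grow the chain one comprehensible step at a time: humans (level 0)
    trust agent 1, agent 1 vouches for agent 2, and so on."""
    chain = [Agent("human", 0)]
    for i in range(1, levels + 1):
        candidate = Agent(f"AGI-{i}", i)
        if not verify(chain[-1], candidate):
            break  # refuse any step too large for the previous link to check
        chain.append(candidate)
    return chain


chain = build_chain(3)
print([a.name for a in chain])  # ['human', 'AGI-1', 'AGI-2', 'AGI-3']
```

The key design point the sketch captures is that trust is only ever established locally, between adjacent links, so humans never need to directly evaluate the most capable agent.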

Armstrong mentions a number of considerations:

This is a very conservative approach to AGI design, and it carries a large opportunity cost. Armstrong believes the chain approach would be unlikely to produce anywhere near the best possible future, since the AGI chain would only learn from present human values. Each layer of AGI could only improve so far before its creator could no longer understand it. With supervision happening at every level, an AGI would take longer to develop, and whenever the process had to start over, the seed AI would always have to be humanly comprehensible. He believes an AGI chain is a simple way to create Friendly Artificial Intelligence, but enumerates a number of ways the concept might never work.
