Differential Intellectual Progress

Differential intellectual progress was defined by Luke Muehlhauser and Anna Salamon as “prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress”. They discuss differential intellectual progress in relation to Artificial General Intelligence (AGI) development (which will also be the focus of this article):

As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.

Muehlhauser and Salamon also note that differential technological development can be seen as a special case of this concept.

Risk-increasing Progress

Technological advances, when not accompanied by corresponding development of safety mechanisms, increase the capacity for both friendly and unfriendly AGI development. At present, most AGI research is concerned with increasing capability rather than safety, so most progress increases the risk of a widespread negative outcome.

Such advances could also help in the creation of Friendly AI. However, Friendliness requires the development of both AGI and Friendliness theory, while an Unfriendly Artificial Intelligence might be created by AGI efforts alone. Thus, developments that bring AGI closer or make it more powerful increase risk unless they are combined with work on Friendliness.

Risk-reducing Progress

There are several research areas, such as Friendliness theory, which, if developed further, would provide the means to produce AGIs that are friendly to humanity. These areas should be prioritized to prevent possible disasters.
