AI alignment open problem


A tag for pages that describe at least one major open problem identified within the theory of value-aligned advanced agents: powerful artificial minds such that the effect of running them is good / nice / normatively positive (‘high value’).

To qualify as an ‘open problem’ for this tag, the problem should be relatively crisply stated, unsolved, and considered important.
