This is a good point. I think there's one reason to give special attention to the intelligence explosion concept, though: it's part of the proposed solution as well as one of the possible problems.
The two main ideas here are:
Recursive self-improvement is possible and powerful
Human values are fragile; “most” recursive self-improvers will very much not do what we want
These ideas seem to be central to the utility-maximizing FAI concept.