Phil,
There are really two things I’m considering. One, whether the general idea of AI throttling is meaningful, and what the technical specifics could be (a crude example: give it only X compute power, yielding an intelligence level Y). Two, whether a reliably built human-level AI could be of great use, not in itself, but as a tool for investigation, since we could finally “look inside” at concrete realizations of mental concepts, which is not possible with our own minds. For example, if we could teach a human-level AI morality (presumably possible, since we ourselves learn it), we would have a concrete realization of that morality as computation, one that could be inspected outright and even debugged. Could this not be of great value for insights into FAI?
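(To make the crude compute-for-intelligence example slightly less hand-wavy: a toy sketch of "throttling" is an optimizer whose power is bounded by an explicit step budget. Everything here is hypothetical and purely illustrative, a cartoon of the idea, not a claim about how real AGI capability would be controlled.)

```python
def budgeted_hill_climb(f, x0, budget, step=0.25):
    """Greedy 1-D maximization of f under a fixed search budget.
    More budget -> more optimization power, i.e. the 'throttle'."""
    x = x0
    spent = 0
    while spent < budget:
        candidates = [x + step, x - step]
        best = max(candidates, key=f)
        spent += len(candidates)  # charge the budget for each probe
        if f(best) > f(x):
            x = best
        else:
            step /= 2  # no neighbor improves: refine the search
    return x

# Toy objective with its optimum at x = 3
f = lambda x: -(x - 3) ** 2

weak = budgeted_hill_climb(f, 0.0, budget=4)      # tightly throttled
strong = budgeted_hill_climb(f, 0.0, budget=200)  # looser throttle
# The looser budget reaches a (weakly) better optimum: f(strong) >= f(weak).
```

Since the search is deterministic, the larger budget simply continues the smaller budget's trajectory, so raising the cap can only improve the result; the open question in the comment is whether anything like this X-compute-to-Y-capability relationship holds for actual intelligence metrics.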
Eliezer,
Have you considered in detail the idea of AGI throttling? That is: given a metric of intelligence, and assuming a correlation between existential risk and that intelligence, AGI throttling would be the explicit control of the AGI’s intelligence level (or optimization power, if you like), which indirectly also bounds existential risk.
In other words, what methods, if any, exist for bounding an AGI’s intelligence level? Is it possible to build an AGI and explicitly set it at human level?