So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
Not necessarily. It may well be programmed with limitations that prevent it from creating the solutions it desires. Examples include:
It is programmed not to recursively self-improve beyond certain parameters.
It is programmed to be law-abiding, or otherwise restricted in its actions such that it cannot behave in a consequentialist manner.
In such circumstances it would want certain things to happen but not want to be the one doing them. Eliezer may well be useful then. He could, for example, build another AI using theory the first one supplies. (Or have someone whacked.)