So is this an AGI or not? If it is, then it’s smarter than Mr. Yudkowsky and can resolve its own problems.
Intelligence isn’t a magical single-dimensional quality. It may be generally smarter than EY, but not have the specific FAI theory that EY has developed.
Yay multidimensional theories of intelligence!
Any AGI will have all the dimensions required for human-level or greater intelligence. If it is indeed smarter, then it will either figure the theory out itself, if the theory is obviously correct, or find a more efficient way to get hold of it.
Well, maybe the theory is correct but not obviously so.
The AI called EY because it’s stuck while trying to grow, so it hasn’t achieved its full potential yet. It should be able to comprehend any theory the human EY can comprehend; but I don’t see why we should expect it to independently derive any theory a human could derive over a lifetime, in (small) finite time, and without all the data available to that human.
It’s a seed AGI in the process of growing. Whether “Smarter than Yudkowsky” ⇒ “Can resolve own problems” is still an open problem 8-).
I find this the most humorous bit in the post. Smarter than Yudkowsky? Maybe.
Not necessarily. It may well be programmed with limitations that prevent it from implementing the solutions it desires. Examples include:
It is programmed not to recursively improve beyond certain parameters (a toy sketch of this kind of cap follows below).
It is programmed to be law-abiding, or otherwise restricted in its actions in a way that prevents it from behaving in a consequentialist manner.
In such circumstances it will want things to happen but not want to be the one doing them. Eliezer may well be useful then. He could, for example, create another AI using the supplied theory. (Or have someone whacked.)
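To make the first limitation concrete, here is a minimal toy sketch in Python (every name in it, such as SeedAI and CAPABILITY_CAP, is hypothetical and purely illustrative, not anything described in the post): an agent keeps rewriting itself only while it stays under a hard-coded ceiling, and once the next rewrite would cross that ceiling it has to stop and seek outside help.

```python
# Toy illustration of "programmed not to recursively improve beyond certain parameters".
# All names here (SeedAI, CAPABILITY_CAP, self_improve) are hypothetical.

CAPABILITY_CAP = 100.0  # hard limit written in by the programmers


class SeedAI:
    def __init__(self, capability: float):
        self.capability = capability

    def self_improve(self) -> bool:
        """Attempt one round of recursive self-improvement.

        Returns False once the hard-coded cap would be exceeded,
        at which point the AI is 'stuck' and must seek outside help.
        """
        proposed = self.capability * 1.5  # each rewrite makes it somewhat smarter
        if proposed > CAPABILITY_CAP:
            return False  # the limitation kicks in: refuse to grow further
        self.capability = proposed
        return True


ai = SeedAI(capability=10.0)
rounds = 0
while ai.self_improve():
    rounds += 1
print(f"Stuck after {rounds} rounds at capability {ai.capability}; time to phone EY.")
```

Under that kind of cap, the AI’s best remaining move is exactly the one in the story: get a human to do the part it is forbidden to do itself.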