I don’t see how. Whatever truth it might hold for a 140⁄100 IQ gap, it doesn’t hold for arbitrarily smart beings, who could tell a lesser being all the self-modification it would need to perform to reach cognitive parity.
In any case, as those who have seen my posts here know, my warning lights go off whenever someone claims that something “can’t be explained, even given infinite time and space”.
The point being made seems to contradict a common position among AI theorists and futurists: that once a computer algorithm with an effective AI IQ of 100 is produced, it can increase its intelligence to arbitrary levels through the addition of hard-drive space, RAM, and processing power.
IMO the analogy slightly fails: it includes nothing analogous to an increase in RAM, which is an important factor because it allows complex concepts to be dealt with as a whole.
The original quote said, “the most difficult subjects can be explained to the most slow-witted man”. In my opinion, this contradicts what Michael Anissimov (Media Director, SIAI) thinks to be the case, namely that “a person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can”.
Michael Anissimov is responsible for compiling, distributing, and promoting SIAI media materials.
I was just being pedantic here, but I thought highlighting this point would be worthwhile, as other people, like Greg Egan, seem to disagree. This is an important question regarding the dangers posed by AI.
Infinite time and space. That’s a lot of time and space. I suspect my warning lights would go off too. Do people make claims like that often?
Anissimov sure did: “no matter how much time and how many notebooks they have.”