I am reminded of an AI koan from AGI ’06, where the discussion turned (as it often does) to defining “intelligence”. A medium-prominent AI researcher suggested that an agent’s “intelligence” could be measured in the agent’s processing cycles per second, bits of memory, and bits of sensory bandwidth.
Surely (I said), an agent is less intelligent if it uses more memory, processing power, and sensory bandwidth to accomplish the same task?
With hindsight, I think we can see that Yudkowsky missed the point here: the AI researcher was describing a vector of intelligence in line with the Scaling Hypothesis (2021). Yudkowsky instead conflates this with the efficiency of intelligence by adding the "to accomplish the same task" clause, which is a wholly different thing.