That’s true, but not specific to AI. Where do you place a chimp on the scale? Somewhere below humans? Okay, now consider an athletic man who is really good at brachiation, and a random chimp. It’s not just that the chimp has more strength or agility in its arms. I’ll bet the chimp will be decidedly better at cognitive tasks like determining a good path through the trees, which branches to avoid, and so on. The chimp would also likely surpass most humans at recognizing certain kinds of plants, fruits, etc.
And what about autistic savants like the man who inspired Rain Man, or the twins John and Michael described by Oliver Sacks in his book ‘The Man Who Mistook His Wife for a Hat’? Where are we supposed to place them on the curve?
I’ve also read that bees can count up to 4 or 5, which surpasses some mammals, as well as human children before the age of 2 or 3. Where do the bees go on the curve?
Intelligence is likely not something that can be plotted on a single simple curve. This could actually be advantageous for AI safety: foom might be avoided if misaligned AIs have similarly uneven cognitive capabilities and occasionally make significant errors in judgment.