A lot of your argument seems to be comparing an artifact of human technology with an evolved system. “is there a way to destroy the moon, given only the ability to post 10k characters to lesswrong.com?”
To make the discussion clearer, let’s pick a particular evolved system and technology, say an aeroplane wing and an insect wing. Suppose that the aeroplane wing wins on some criteria, like speed, the insect wing wins on efficiency, and it all balances out overall.
To say, therefore, that intelligence isn’t that great is a mixing of levels. There are two intelligences in the game, humans and evolution. Both have produced a great variety of highly optimized artifacts. Both are of roughly comparable power. By comparing two aeroplanes, you can also compare the skill of the designers, but it is meaningless to try to compare an aeroplane to an aeroplane designer. The insect is the plane, not the designer.
Some of your comparisons make even less sense, like ability to survive in extreme environments. Comparing a fish and an untooled human in ability to survive in the ocean is a straight contest of fish evolution vs human evolution. If the human drowns before they have a chance to think anything, the power of the human brain is not shown in the slightest.
Also, comparing human intelligence between humans is like comparing the running speed of cheetahs: all your results will be similar. So one human beating another tells you little about intelligence.
So what would a real comparison of intelligence with something else look like? I think the question “Is intelligence good?” is not that meaningful.
What we can do is ask “is there a way to X, given only Y?” For instance, “is there a way to make a fire, given only the ability to contract muscles of a human body in a forest?” or “is there a way to destroy the moon, given only the ability to post 10k characters to lesswrong.com?” These are totally formalizable questions and could in principle be answered by simulating an exponential number of universes.
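To make that formalization concrete, here is a minimal sketch. Everything in it is invented for illustration: the `simulate` function stands in for a universe, and the "goal" is an arbitrary toy condition. The point is just that "is there a way to X, given only Y?" becomes "does any sequence of primitive actions pass the goal test?", answerable in principle by simulating every sequence, at exponential cost.

```python
from itertools import product

def simulate(actions):
    # Invented toy "universe": the goal counts as achieved iff the action
    # sequence contains the exact subsequence 1, 0, 1. Purely illustrative.
    return "101" in "".join(map(str, actions))

def exists_way(goal_test, horizon, n_actions=2):
    # Brute force over all n_actions**horizon action sequences -- the
    # "simulate an exponential number of universes" version of the question.
    return any(goal_test(seq) for seq in product(range(n_actions), repeat=horizon))

print(exists_way(simulate, horizon=3))  # True: the sequence (1, 0, 1) achieves the goal
print(exists_way(simulate, horizon=2))  # False: no length-2 sequence contains 1, 0, 1
```

The exponential blow-up in `exists_way` is exactly why this is an "in principle" answer rather than a practical one.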
We can also ask questions about which algorithms will actually find a way to achieve a goal. We know that there exists a pattern of electrical inputs that wins the game Pong, but we want to know if some gradient-descent-based algorithm will find one.
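The gap between "a solution exists" and "gradient descent finds it" can be shown with a tiny numeric sketch. The reward landscape here is made up for the example: it has two peaks, and plain gradient ascent started on the wrong side converges to the worse peak even though a better one exists.

```python
def reward(x):
    # Invented two-peak landscape: a local optimum near x = -0.93
    # and a strictly better global optimum near x = +1.06.
    return -(x**2 - 1)**2 + 0.5 * x

def grad(f, x, eps=1e-6):
    # Central-difference numerical gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def ascend(x, steps=2000, lr=0.01):
    # Plain gradient ascent from a given starting point.
    for _ in range(steps):
        x += lr * grad(reward, x)
    return x

x_left = ascend(-2.0)   # climbs to the local peak near -0.93
x_right = ascend(+2.0)  # climbs to the global peak near +1.06
assert reward(x_right) > reward(x_left)
# The better solution exists either way, but gradient ascent
# started on the left never finds it.
```

This is the one-dimensional version of the Pong question: existence of a winning input pattern is a separate fact from whether a particular local search reaches it.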
We can then say there are a wide variety of tasks and goals that humans can fulfill given our primitive action of muscle contraction. Given that chimps have a similar musculature, but less intelligence and can’t do most of these tasks, and many of the routes to fulfillment of the goals go through layers of indirection, then it seems that an intelligence comparable to humans with some other output channel would be similarly good at achieving goals.
So what would a real comparison of intelligence with something else look like? I think the question “Is intelligence good?” is not that meaningful.
What we can do is ask “is there a way to X, given only Y?” For instance, “is there a way to make a fire, given only the ability to contract muscles of a human body in a forest?” or “is there a way to destroy the moon, given only the ability to post 10k characters to lesswrong.com?” These are totally formalizable questions and could in principle be answered by simulating an exponential number of universes.
I agree with the first statement but not the latter.
Unless we can ask “Is something good?”, then why would we consider that subject to be important?
Most things that we hold to be of value, we do so because they are almost universally considered good (or because they are used to guard against something that’s universally considered bad).
We can certainly ask “Can <manipulation Y of class ABC of T-cell> be <good>?” and we could get a pretty universal “Yes, because that will help us cure this specific type of tumor, and this specific type of tumor, when viewed through the subjective lens of any given animal, is bad”.
We can then say there are a wide variety of tasks and goals that humans can fulfill given our primitive action of muscle contraction. Given that chimps have a similar musculature, but less intelligence and can’t do most of these tasks, and many of the routes to fulfillment of the goals go through layers of indirection, then it seems that an intelligence comparable to humans with some other output channel would be similarly good at achieving goals.
Again, here I think your analogy suffers from the problem I was trying to tackle: you are taking a human-centric view and assuming that chimps are inferior in the range of actions they can take.
Chimps can do feats of acrobatics that seem fun and impressive, with seemingly little risk and effort involved. I would love to be able to do that. Would I love it more than being able to, say, not die from cancer because I have chemotherapy? Or more than being able to drive a car? I don’t know… I can certainly see a valid viewpoint that being able to spend my life swinging through the trees of the Congo would be “better” than having cars and chemotherapy and the other 1001 wonders that our brains help produce.
Some of your comparisons make even less sense, like ability to survive in extreme environments. Comparing a fish and an untooled human in ability to survive in the ocean is a straight contest of fish evolution vs human evolution. If the human drowns before they have a chance to think anything, the power of the human brain is not shown in the slightest.
I was not, though; I was comparing humans plus the tools that they build using their intelligence to other forms of life.