Define intelligence however you like; there are a few capabilities missing from this list that will significantly shape autonomous systems, including how beneficial, threatening, or capable they become:
Sensory systems:
Both machines, and future collective intelligences of people and machines, are benefiting from an exponential explosion in the ability to sense. Everything is improving at a vast pace: telescopes, microscopes, detection across every wavelength, sound, chemistry, DNA/RNA sequencing, mass spectrometry, surveillance in thousands of locations, the internet of things; the list goes on and on.
If we want to understand the future of intelligence, we have to incorporate the explosion of sensory input into our thinking.
I am not ready to claim that spirituality is irrelevant to the discussion. The difference between factual knowledge, calculation speed, and “wisdom” does seem relevant, as others have pointed out earlier in the thread.
However, we’ll have to re-frame questions about spiritual issues in some way to bring them into the discussion...
Suppose one aspires to become a Bodhisattva: a being who is capable of entering into an unfettered celestial existence, but who instead stays behind to help other people and sentient beings find their way. (If we are a bit less ambitious, perhaps we might, depending on our tradition, aspire to become a tzadik, a marja, or saintly but not a supernatural savior.)
Among other things, Bodhisattvas have freed themselves from ego and from forms of physical desire. They are spiritually superior and ready to move outside the cycle of reincarnation as lower beings. Yet when they take human form, they still eat, as an instrumental goal toward accomplishing their end.
The Bodhisattva’s kind of spiritual superiority does seem to differ from what Bostrom and the rest of us call “Superintelligence.” It may, however, relate in some ways...
Just a thought experiment...
For example, a peculiar version of a Bodhisattva might be said to have a utility function, in fact a marvelous, selfless one. Moreover, it will succeed at bringing more people out of the darkness if it engages in recursive self-improvement of its capabilities.
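As a purely hypothetical caricature of that thought experiment (every name and number below is invented for illustration, not drawn from any real system), one can sketch an agent whose utility counts only benefits to others, coupled to a capability that compounds through self-improvement:

```python
# Toy sketch of the thought experiment: a selfless utility function
# plus recursive self-improvement. All quantities are arbitrary.

def utility(beings_helped: int) -> int:
    # A purely selfless utility: value accrues only to others,
    # never to the agent itself.
    return beings_helped

def run(steps: int) -> int:
    capability = 1.0
    beings_helped = 0
    for _ in range(steps):
        beings_helped += int(capability)  # help in proportion to capability
        capability *= 1.5                 # self-improvement compounds
    return utility(beings_helped)
```

Even in this cartoon, the point holds: because capability compounds, more rounds of self-improvement bring disproportionately more beings “out of the darkness,” so the selfless goal itself rewards recursive self-improvement.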
One aspires to become a Buddha or a Bodhisattva, and some claim that they, others they know, or historical figures have reached this condition. For the most part, however, the goal is aspirational in nature and works to improve people’s behavior toward one another, and toward other sentient beings, in this life.
Based on what we now know about neurophysiology, the human brain will have a lot of trouble reaching any form of true enlightenment on its own. Emotion and desire resurface even in the most virtuous or contemplative among us. It seems we will never be truly saintly for the rest of our lives, though perhaps we can manage some fairly virtuous days interspersed among our failures.
I am not ready to make a proposal, but if enlightenment or spiritual purification is our goal, we might have to resort to some kind of augmentation to make it to the next step in that direction. Perhaps a lot of meditation or ethical contemplation is one approach; perhaps we can think of others...
An augmentation of intelligence alone, however, will not necessarily bring about this outcome.
The ability of autonomous systems to manipulate objects in the real world may relate to other forms of intelligence, but understanding future threats and benefits requires that we break this set of skills out and consider it separately.
We already live in an age of autonomous vehicles. To understand the future, we have to forecast how this revolution will play out alongside other advances in intelligence.
The forward progress of robotics differs from the intelligences discussed in this chapter. Robotics may benefit from AI planning and object recognition, but robots are combined hardware/software solutions.
Superintelligence will seem a lot more “super” if fundamental problems of object recognition, indoor navigation, and object manipulation are solved first.
Are there forms of superintelligence Bostrom missed?
The analysis improves if we separate out and take a closer look at the differences between memory and reasoning.