For the fastest economic growth it is not necessary to achieve human-level intelligence. It may even be a hindrance. Highly complex social behaviour for finding a reproduction partner is not necessary for economic success. A totally unbalanced AI character with highly superhuman skills in creativity, programming, engineering and cheating humans could beat a more balanced AI character and self-improve faster. Today's semantic big-data search is already orders of magnitude faster than human research in a library using a paper catalogue. We have to note highly superhuman performance in answering questions and low sub-human performance in asking questions. Strong AI is so complex that projects on normal business time frames go for the low-hanging fruit. If the outcome of such a project can be called an AI, it is with highest probability extremely imbalanced in its performance and character.
I strongly support your idea to establish a collaborative work platform. Nick Bostrom's book brings so many not-yet-debated aspects into public discussion that we should support him with input and feedback for the next edition. He threw his hat into the ring, and our debate will push sales of his book. I suspect he would prefer to receive comments and suggestions for better explanations in a structured manner.
Our human cognition is mainly based on pattern recognition (compare Ray Kurzweil, "How to Create a Mind"). Information stored in the structures of our cranial neural network sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge numbers of patterns can be stored while most pattern recognizers stay in sleep mode, consuming very little energy. Quantum computing, with decoherence times on the order of seconds, is totally unsuitable for the synergistic task of pattern analysis plus long-term pattern memory with millions of patterns. IBM's newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² die and only 70 mW power consumption in operation, is far better suited to push technological development towards AI.
As long as a chatbot does not understand what it is chatting about, it is not worth real debate. The "pass" is more an indication of how easily we get cheated. When we think while speaking we easily start waffling. This is normal human behaviour, as are silly jumps in topic. Jumping between topics was this chatbot's trick to hide its non-understanding.
Intelligometry
Opinions about the future and expert elicitation
Predictions of our best experts, statistically evaluated, are nonetheless biased. Thank you, Katja, for contributing additional results and compiling charts. But enlarging the number of people asked will not result in better predictive quality. It would be funny to see the results of a poll on the HLMI time forecast within our reading group. But this would only tell us who we are and nothing about the future of AGI. Everybody in our reading group is at least a bit biased by having read chapters of Nick Bostrom's book. Groupthink and biased perception are the biggest obstacles when predicting the future. Expert elicitation is not a scientific methodology. It is collective educated guessing.
Trend extrapolation
Luke Muehlhauser commented on Ray Kurzweil's success in predicting when a chess program would first defeat the human World Champion:
Those who forecasted this event with naive trend extrapolation (e.g. Kurzweil 1990) got almost precisely the correct answer (1997).
Luke Muehlhauser opened my eyes by admitting:
Hence, it may be worth searching for a measure for which (a) progress is predictable enough to extrapolate, and for which (b) a given level of performance on that measure robustly implies the arrival of Strong AI. But to my knowledge, this has not yet been done, and it’s not clear that trend extrapolation can tell us much about AI timelines until such an argument is made, and made well.
For Weak AI problems trend extrapolation works. In image-processing research it is common to accept computing times of minutes for a single frame of a real-time video sequence: hardware and software will advance and can be scaled, so within five years such a new algorithm will become real-time capable. Weak AI capability is easily measurable. The scaling efficiency of many Weak AI problems (e.g. if search trees are involved) is dominantly linear and therefore predictable.
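As a back-of-the-envelope sketch of such an extrapolation (the one-minute frame time, 30 fps target and 18-month doubling period are my illustrative assumptions, not measured figures):

```python
import math

def years_until_realtime(seconds_per_frame, fps=30.0, doubling_years=1.5):
    """Years until a per-frame computation reaches real time, assuming
    effective performance doubles every `doubling_years` (a Moore's-law
    style assumption) and nothing else changes."""
    speedup_needed = seconds_per_frame * fps  # target: one frame per 1/fps seconds
    return math.log2(speedup_needed) * doubling_years

# An algorithm that needs one minute per frame today:
print(f"{years_until_realtime(60):.1f} years")  # ~16 years from hardware alone
```

Note that hardware growth alone overshoots the five-year figure; the shorter estimate implicitly also counts algorithmic improvements and parallel scaling, which is exactly why near-linear scaling efficiency matters for the prediction.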
For Strong AI, let's make trend prediction work! Let's call our tool Intelligometry. I coined this term today and I hope it will move us towards scientific methodology and predictability.
Intelligometry: the theory of multidimensional metrics to measure skills and intelligence. The field of intelligometry involves the development and standardization of tests to achieve objective comparability between HI and AI systems.
Unfortunately the foundation of intelligence metrics is scarce. The anthropocentric IQ measure, with a mean of 100 and a standard deviation of 15 (by definition), is the only widely accepted intelligence metric for humans. Short IQ tests cover only a ±2 sigma range, giving results from 70 to 130. Extensive tests cover values up to 160.
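To illustrate why tests rarely report beyond 160, a small sketch using the Gaussian model behind the IQ definition (the rarity figures follow directly from mean 100 and SD 15):

```python
from scipy.stats import norm

MEAN, SD = 100, 15  # the IQ scale is defined this way

for iq in (130, 145, 160):
    sigma = (iq - MEAN) / SD
    rarity = 1.0 / norm.sf(iq, loc=MEAN, scale=SD)  # roughly 1 in N people
    print(f"IQ {iq} ({sigma:.0f} sigma): about 1 in {rarity:,.0f}")
# IQ 130: ~1 in 44; IQ 145: ~1 in 741; IQ 160: ~1 in 31,600
```

Above 160 there are simply too few people to norm a test against, which is one reason the scale is so ill-suited for machines.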
Howard Gardner's theory of multiple intelligences could be a starting point for test designs. He identifies 9 intelligence modalities:
Musical–rhythmic and harmonic
Visual–spatial
Verbal–linguistic
Logical–mathematical
Bodily–kinesthetic
Interpersonal
Intrapersonal
Naturalistic
Existential
Although there is some criticism and only marginal empirical support, education received a stimulus from this theory. It could be that humans have highly intercorrelated intelligence modalities, so the benefit of this differentiation is limited. Applied to AI systems with various architectures, however, we can expect to find significant differences.
Huge differences in AI capabilities compared to humans and other AIs make a linear scale impractical. Artificial intelligence measures should be defined on a logarithmic scale. Two examples: to multiply two 8-digit numbers a human might need 100 s. A 10 MFlops smartphone processor would perform 1E9 times as many multiplications in that time. RIKEN's K computer (4th on the Top500) with 10 PFlops is 1E18 times faster than a human. On the contrary: a firefighter can run through complex unknown rooms maybe 100 times faster than a RoboCup Rescue challenge robot. The robot is 1E-2 times "faster".
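A minimal sketch of such a logarithmic skill score, using the examples above (the scoring convention itself, log10 of the AI/human rate ratio, is my assumption):

```python
import math

def log_skill(ai_rate, human_rate):
    """Log10 of the AI/human performance ratio on the same task:
    0 = human level, +9 = a billion times faster, -2 = a hundred times slower."""
    return math.log10(ai_rate / human_rate)

# Human: one 8-digit multiplication per 100 s = 0.01 ops/s
print(log_skill(1e7, 0.01))   # smartphone, ~10 MFlops:  +9.0
print(log_skill(1e16, 0.01))  # K computer, ~10 PFlops: +18.0
print(log_skill(0.01, 1.0))   # rescue robot vs. firefighter: -2.0
```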
We should inspire other researchers to challenge humans with exactly the same tasks they challenge their machines with. They should generate solid data for statistical analysis. Humans of both sexes and all age classes should be tested. Joint AI and psychology research will bring synergistic effects.
It is challenging to design tests that can discriminate the advancement of an AI at very low capability levels, e.g. from 1E-6 to 1E-5. If the test consists of complex questions, the AI might answer 10% correctly by guessing. On a test of a million questions, the advance from 100,001 correct answers to 100,010 would mean that the AI's true understanding improved by a factor of 10, yet this tiny difference probably remains undetected in the noise of guessing.
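A quick calculation of this detection problem (the million-question test and the binomial guessing model are my assumptions, chosen to match the figures above):

```python
import math

N, P = 1_000_000, 0.10  # one million questions, 10% correct by guessing
noise_sigma = math.sqrt(N * P * (1 - P))  # std dev of the guessing noise
signal = 10 - 1                           # jump from 1 to 10 truly understood items
print(f"signal {signal} vs. noise sigma {noise_sigma:.0f}")  # 9 vs. ~300
# Detecting a mean shift of ~9 against noise of sigma ~300 needs on the order
# of (300/9)^2 ~ 1100 independent test runs for even a one-sigma separation,
# far beyond a single administration of the test.
```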
Intelligometry could supply the methodology and data we need for proper predictions. AI research should establish a standardized way of documentation. These standards should be part of all AI curricula. Publicly funded AI-related research projects should use standardized tests and documentation schemes. If we manage to move from educated guessing to trend extrapolation on solid data within the next ten years (3 PhD generations), we will have managed a lot. This would be, for the first time, a reliable basis for predictions. These predictions would be the solid ground to guide our governments and research institutes regarding global action plans towards a sustainable future for us humans.
The ability to self-improve grows over time. Currently, computer chips require extremely expensive masking steps in fabs. Maker tools for nanoassembled chips and parts will not be available anytime soon. AIs have to rely on human infrastructure support.
If an early-bird project reaches HLMI by 2022 there is hardly any infrastructure for radically transforming the world. Only projects that are currently running on a high budget have a chance to meet this date. The Human Brain Project, with highest probability: no. The Google Brain project or Baidu brain project: maybe yes. The majority of projects are stealth ones: for sure the NSA and other intelligence agencies are working on AIs targeting the decisive advantage. Financial firms would benefit very fast from a decisive advantage. Three out of four friends of mine, working for different companies in the financial sector, told me about ongoing AI projects in their companies. If a stealth project succeeds in 2022 we probably will not take notice. The AI will use its intelligence to hide its success and misinform about it. In 2022 a breakout AI would not gain enough momentum to prevent being shut down. Only very few supercomputers in 2022 will have enough computational power for this AI. If we want, we can switch it off.

The much higher risk arises if a further AI winter comes. Technology, infrastructure, excessive computing capacity, nanoassembly makers: everything is prepared, but nobody has found the holy grail of intelligent software. All of a sudden a self-improving AI could improve its initially inefficient software and jump above all thinkable measures into superintelligence. Billions of computers capable of running this AI will by then be available. To infiltrate this mighty infrastructure will be easy. Millions of nanoassembly makers could be turned into replicating factories. Switching off billions of computers that are by then deeply interwoven with our daily life might be nearly impossible.
Follow the trail of money...
Nick Bostrom decided to draw an abstract picture. The reader is left on his or her own to find the players in the background. We have to look for their motives. Nobody is interested in HLMI except universities. Companies want superhuman intelligence as fast as possible for the smallest budget. Any nice-to-have capability that makes the AI more human-like causes delays and costs money.
Transparency and regulation are urgently needed. We should discuss it later.
If a child does not receive love, is not allowed to play, gets only instructions and is beaten, you will get a few years later a traumatized, paranoid human being, unable to love, nihilistic and dangerous. A socialization like this could be the outcome of a "successful" self-improving AI project. If humanity tries to develop an antagonist AI it could end in a final world war. The nihilistic, paranoid AI might find a lose-lose strategy favorable and destroy our world.
That we have not received any sign of extraterrestrial intelligence tells us that obviously no other intelligent civilization has managed to survive a million years. Why they collapsed is pure speculation, but evil AI could speed things up.
It would collapse within the apocalypse. It might trigger aggressive actions knowing that it will be eradicated itself. It wants to see the other side lose. Dying is not connected with fear for it. If it can prevent the galaxy from being colonised by a good AI, it prefers a perfect apocalypse.
Debating the aftermath of an apocalypse gets too speculative for me. I wanted to point out that current projects do not have the intention to create a balanced, good AI character. Projects are looking for fast success, and an evil, paranoid AI might result in the far end.
By AI winter I meant AGI winter. If current AGI projects (Ng/Baidu, Kurzweil/Google, the Human Brain Project and others) fail to deliver concepts for deep learning and fundamental understanding, financial support for AGI could be cut and funneled into less risky weak AI projects. Technology progresses, and weak AI capabilities rise to high superintelligence in their domains. But like superheated water without a nucleus for boiling, nothing happens. The heat rises above the boiling point. One grain of salt dropped into the superheated water creates the first bubble, triggering more bubbles and a steam explosion.
AGI winter and intelligence explosion
If we let an AGI winter happen, many diverse weak AIs might be developed, as depicted in the spiderweb chart. Being superintelligent in their domain, these AIs have nearly no other skills. In this situation only a tiny nucleation stimulus is needed to start the intelligence explosion with a highly superintelligent AI. This stimulus could come from a small project that has no capabilities to engineer safeguarding measures.
AI has been so successful recently that enough financial support is available. We have to invest a significant amount into AGI and means for controlling and safeguarding AGI development. If we allow an AGI winter to happen we risk an uncontrollable intelligence explosion.
AI experts base their predictions on many aspects. Maybe it would be possible to compile a questionnaire to document the biases behind and reasons for their predictions.
You are right. With IBM we can follow how, from the chess-playing Deep Blue (weak AI) via the Jeopardy!-mastering Watson (stronger weak AI), they have now pushed hardware development together with HRL and developed the newest SyNAPSE chip. IBM pushes their weak AIs to get stronger and stronger and now leaves the path of von Neumann computers to get even stronger AIs. I expect that IBM will follow the Watson/von Neumann path and the new neurocomputational path in parallel for several more years.
I am expecting superintelligence to happen before HLMI. A superintelligence with a decisive advantage does not need all human skills.
Genome sequencing improved over-exponentially in the years of highest investment. Competition, the availability of several different technological approaches and global research trends funding nanotechnology enabled this steep improvement. We should not call this a jump, because we might need that word if developments reach timescales of weeks.
Current high funding for basic research (the Human Brain Project, the Human Connectome Project and others), high competition and the availability of many technological paths make an over-exponential development likely.
Human Genome Project researchers spent most of their time on improving technology. After achieving orders of magnitude of speed-up, they managed to sequence the largest proportion in the final year.
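A toy calculation of this effect, assuming throughput doubles every year (a simplifying assumption of mine, not the project's actual improvement rate):

```python
# Yearly throughputs 1, 2, 4, ..., 2^(n-1): the final year alone contributes
# 2^(n-1) / (2^n - 1), i.e. slightly more than half of all work ever done.
years = 13  # roughly the span of the Human Genome Project (1990-2003)
rates = [2 ** k for k in range(years)]
print(rates[-1] / sum(rates))  # ~0.50: the last year covers about half
```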
I expect similar over-exponential AGI improvements once we understand our brain. A WBE does not need to simulate a human brain. To steal the technology it is sufficient to simulate a brain with a small neocortex. A human brain with a larger neocortex is just a quantitative extension.
A team of human engineers with the assistance of specialized engineering software and internet access to a nearly unlimited amount of information can be viewed as a superintelligent entity.
By the time a Seed AI reaches higher development speed in unsupervised self-improvement than human engineering, it must already have highly superintelligent engineering skills.
An HLMI is like a lone developer, not capable of managing the development complexity.
Organizations can become far more intelligent than they are today. A team of humans plus better and better weak AI has no upper limit to its intelligence. Such a hybrid superintelligent organization could be the way to keep AI development under control.
Survival was and is the challenge of evolution. Higher intelligence gives more options to cope with deadly dangers.
To measure intelligence we should challenge AI entities with standardized tests. Developing these tests will become a new field of research. IQ tests are not suitable because of their anthropocentrism. Tests should analyze how well and how fast real-world problems are solved.
I scored an IQ of 60 at school. I was overthinking, reasoning around too many corners. I had the same experience with a Microsoft "computer driving license" test. I totally failed because I answered based on my knowledge of IT forensic possibilities. E.g. question: if you delete a file in the Windows trash bin, is the file recoverable? If you want to pass this test you have to give the wrong answer: no.
These examples show that we need cascaded test hierarchies (see the sketch after this list):
classification test
test with adapted complexity level
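A minimal sketch of such a cascade (all items, bands and thresholds are hypothetical placeholders):

```python
def cascaded_test(answer):
    """Stage 1: a coarse classification test places the subject in a band.
    Stage 2: a test with complexity adapted to that band measures within it."""
    screening = [("2+2", "4"), ("17*23", "391"), ("2**10", "1024")]
    band = ("low", "mid", "high")[min(sum(answer(q) == a for q, a in screening), 2)]

    adapted = {
        "low":  [("3+5", "8"), ("9-4", "5")],
        "mid":  [("12*12", "144"), ("1001//7", "143")],
        "high": [("2**16", "65536"), ("13*17*19", "4199")],
    }[band]
    return band, sum(answer(q) == a for q, a in adapted) / len(adapted)

# A toy 'subject' that can evaluate arithmetic expressions:
print(cascaded_test(lambda q: str(eval(q))))  # -> ('high', 1.0)
```

The point of the cascade is exactly the one from the IQ anecdote above: a single fixed-difficulty test either saturates or penalizes a subject whose level it was not designed for.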
Reading Steve Wozniak's biography iWoz, I came to support your view that parents nowadays focus more on education in the earliest years. Steve learned about electronic components even before he was four years old. His father explained many things about electronics to him before he was old enough for school. He learned to read at the age of three. This needs parents who assist. Steve Wozniak praised his father for always explaining at a level he could understand, only one step at a time.
His exceptionally high intelligence (he cited a tested IQ > 200) is surely not only inherited but also a consequence of loving care, teaching and challenges from his parents and peers.
This summary of already superhuman game-playing AIs had impressed me for two weeks, but only until yesterday. In Vardi (2012), John McCarthy is quoted as having said: "As soon as it works, no one calls it AI anymore." (p. 13)
There is more truth in it than McCarthy expected: a tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is no AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes and found heuristics to prune the uncomputably large search tree. With brute force and megawatts of computational evaluation power they managed to fill a database with millions of more or less favorable game situations. In direct competition between game-playing algorithm and human being, these pre-computed situations help to find shortcuts in the tree search and achieve superhuman performance in the end.
Is this entity an AI or an algorithm?
1. Game concept development: human.
2. Game rule definition and negotiation: human.
3. Game rule abstraction and translation into computable form: human-designed algorithm.
4. Evaluation of game situations: human-designed algorithm, computer-aided optimization.
5. Search tree heuristics: human-designed algorithm, computer-aided optimization.
6. Database of favorable situations and moves: brute-force tree search on a massively parallel supercomputer.
7. Detection of favorable situations: human-designed algorithm for pattern matching, computer-aided optimization.
8. Active playing: fully automatic use of the algorithms and information from points 3-7. No human being involved.
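To make the human-designed character of points 4 and 5 concrete, here is a bare-bones sketch of the classic negamax tree search with alpha-beta pruning (the function signatures are mine; real engines add the databases and pattern matchers of points 6 and 7):

```python
def negamax(state, depth, alpha, beta, evaluate, moves, play):
    """Game-tree search where every ingredient is human-designed:
    `evaluate` scores a position for the side to move (point 4),
    alpha-beta cutoffs prune the search tree (point 5);
    the machine merely executes the design at scale."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = float("-inf")
    for m in legal:
        best = max(best, -negamax(play(state, m), depth - 1,
                                  -beta, -alpha, evaluate, moves, play))
        alpha = max(alpha, best)
        if alpha >= beta:  # the opponent would never allow this line: prune it
            break
    return best
```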
Unsupervised learning, search optimization and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against such an entity will probably attribute intelligence to it. "Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer's moves" (p. 12, Newborn [2011]).
But weak AI is not our focus. Our focus is strong AI, HLAI and superintelligence. It is good to know that human-engineered weak AI algorithms can achieve superhuman performance. But not a single game-playing weak AI has achieved a human level of intelligence. The following story will show why:
Watch two children, Alice and Bob, playing in the street. They have found white and black pebbles and a piece of chalk. Bob has a faint idea of checkers (other names: "draughts", German: "Dame") from having seen his elder brother play it. He explains to Alice: "Let's draw a grid of chalk lines on the road and place our pebbles into the fields. I will show you." In a joint effort they draw several straight lines, resulting in a 7x9 grid. Then Bob starts to place his black pebbles into his starting rows as he remembers it. Alice follows suit, but she does not have enough white pebbles to fill her starting rows. They discuss their options and search for more white pebbles. After two minutes of unsuccessful searching Bob says: "Let's remove one column, and I will take two of my black pebbles away." Then Bob explains to Alice how to move her pebbles on the now smaller 7x8 game grid. They start playing and enjoy their time. Bob wins most of the games. He changes the rules to give Alice a starting advantage. Alice does not mind losing frequently. They laugh a lot. She loves Bob and is happy for every minute spent next to him.
This is a real game. It is a full-body experience with all senses. These young children manipulate their material world, create and modify abstract rules, develop winning strategies, communicate and have fun together.
The German Wikipedia entry for "Dame_(Spiel)" lists 3 × 4 × 4 × (3 + many more) × 2 = 288+ orthogonal rule variants. Playing Doppelkopf (a popular 4-player card game in Germany) with people you have never played with takes at least five minutes of rule discussion at the beginning. This demonstrates that developing and negotiating rules is a central part of human game play.
Now suppose you told 10-year-old Bob: "Alice has to come home with me for lunch. Look, this is Roboana (a strong-AI robot), play with her instead." You guide your girl-like robot to Bob.
Roboana: "Hi, I'm Roboana. I saw you playing with Alice. It looked like a lot of fun. What is the game about?"
You, a member of the Roboana development team, leave the scene for lunch. Will your maybe-HLAI robot manage the situation with Bob? Will Roboana modify the rules to balance the game if her strategy is too superior, before Bob gets annoyed and walks away? Will Bob enjoy his time with Roboana?
Bob is presumably 10 years old and qualifies only for sub-human intelligence. Within the next 20 years I do not expect any artificial entity to reach this level of general intelligence. Knowing that algorithms can master the core game play is only the smallest part of the problem. Therefore I prefer calling weak AI what it is: an algorithm.
In our further reading we should try not to forget that the aspects of creativity, engineering, programming and social interaction are in most cases more complex than the core problem. Some rules are imprinted into us human beings: what a face looks like, what a fearful face looks like, how a fearful mother smells, how to smile to please, how to scream to alert the mother, how to spit out bitter-tasting food to protect against intoxication. Playing with the environment is imprinted into our brains as well. We enjoy manipulating things and observing the outcome with our fullest curiosity. A game is a regulated kind of play. For AI development it is worth widening the focus from games to playing.