Shane Legg gives a 10% probability of that here:
http://www.churchofvirus.org/bbs/attachments/agi-prediction.png
My estimate here is a bit bigger—maybe around 15%:
http://alife.co.uk/essays/how_long_before_superintelligence/graphics/pdf_no_xp.png
You seem to be about ten times more confident than us. Is that down to greater knowledge—or overconfidence?
You seem to be about ten times less confident than me. Is that down to greater knowledge—or underconfidence?
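The “ten times” comparison above presumably refers to the ratio of the stated probabilities. Since the lower estimate is never stated numerically in the thread, the 1.5% figure below is a purely hypothetical stand-in used to show the arithmetic:

```python
# Comparing stated probabilities of machine intelligence within ten years.
legg_estimate = 0.10    # Shane Legg's stated 10% (from the linked chart)
tyler_estimate = 0.15   # the "bit bigger" estimate of around 15%
implied_low = 0.015     # HYPOTHETICAL: what being "ten times less confident" would imply

ratio = tyler_estimate / implied_low
print(ratio)  # 10.0
```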
I’m not very confident, primarily because we are talking ten years out, and the future fairly rapidly turns into a fog of possibilities that makes it difficult to predict.
Which brings us back to why you seem so confident. What facts or observations provide the most compelling evidence that intelligent machines are at least ten years off? Indeed, how do you know that the NSA doesn’t have such a machine chained up in its basement right now?
It hasn’t worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on—those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.
I don’t think any of these people are stupid or crazy (which is why I don’t mention Mentifex in the same breath as them), and I wouldn’t try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don’t believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-force mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).
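As a toy illustration of category (2), the brute-force probabilistic tradition, here is a minimal Bayes-rule update. The prior and likelihoods are made-up numbers chosen only to show the mechanics, not values from any actual AGI system:

```python
# Minimal Bayesian update: P(H | E) = P(E | H) * P(H) / P(E).
# All numbers below are hypothetical illustrations.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return the posterior P(H | E) given a prior P(H) and the
    likelihoods P(E | H) and P(E | not-H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Hypothetical numbers: prior P(H) = 0.5, P(E | H) = 0.8, P(E | not-H) = 0.2.
posterior = bayes_update(0.5, 0.8, 0.2)
print(round(posterior, 3))  # 0.8
```

Systems in this tradition chain many such updates over very large hypothesis spaces; the objection in the surrounding text is not that the mathematics is wrong, but that sixty years of it has not produced general intelligence.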
My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.
Besides, the current state of the world is not suggestive of the presence of AIs in it.
ETA: But this is becoming a digression from the purpose of the thread.
Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success in the next ten years.
However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest—or that there will be at some time over the next ten years. For example, Voss’s estimate (from a year ago) was “8 years”—see: http://www.vimeo.com/3461663
We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.
Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden—on national security grounds. After all, if China’s agent found out for sure that America had an agent too, who knows what might happen?
I would guess that the NSA is more interested in quantum computing than in AI.
They are the National Security Agency. Which of those areas presents the biggest potential threat to national security? With a machine intelligence, you could build all the quantum computers you would ever need.
This is my sense as well. I also think there is a substantial limit on what we’re likely to learn about the brain, given that obvious ethical constraints keep us from studying brain function at large scope, with neuron-level resolution, in real time. Does anyone know of any technologies on the horizon that could change this in the next ten years?
http://lesswrong.com/lw/vx/failure_by_analogy/
From a quote in that post:

“One of [the Middle Ages’] characteristics was that ‘reasoning by analogy’ was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together.”
There’s no reason to spread such myths about medieval history.
The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. Being in a reference class of times and places with such characteristics, it would be a miracle if any significant progress happened during Early Middle Ages.
High and Late Middle Ages on the other hand had plenty of technological and intellectual progress.
I’m much more surprised that the dense, urbanized, and highly literate Roman Empire was so stagnant.
China also springs to mind. I listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time my explanation was authoritarianism.
All that is fine.
But 1) I’m not sure anyone has a good grasp of what properties we’re trying to duplicate. I’m sure some people think they do, and it is possible someone has stumbled onto the answer, but I’m not sure there is enough evidence to justify any claims of this sort. How exactly would someone figure out what general intelligence is without ever seeing it in action? The interior experience of being intelligent? Socialization with other intelligences? An analogy to computers?
2) Let’s say we do have, or can come up with, a clear conception of what the AGI project is trying to accomplish without better neuroscience. It isn’t then obvious to me that the way to create intelligence will be easy to derive without more neuroscience. Sure, just from a conception of what flight is, it is possible to come up with solutions to the problem of heavier-than-air flight. But for the most part humans are not that smart. Despite the ridiculous attempts at flight with flapping wings, I suspect having birds to study (to weigh, measure, and see in action) sped up the process significantly. The same goes for creating intelligence.
(Prediction: .9 probability you have considered both these objections and rejected them for good reason. And .6 you’ve published something that rebuts at least one of the above. :-)
The NSA does have some scary machines chained in their “basement,” yet I doubt any of them approach AGI. All of them (that I am aware of, so that would be two) are geared toward some pretty straightforward real-time data mining, and I am told that the other important gizmos do pretty much the same thing (except with crypto).
I doubt that the NSA (or other spooky agencies) has anything that significantly outstrips many of the big names in enterprise computing. After all, the government buys its supercomputers from the same vendors everyone else does. It’s just the code that would differ.
So: you have a hotline to the NSA, and they tell you about all their secret technology?!? This is one of the most secretive organisations ever! If you genuinely think you know what they are doing, that is probably because they have you totally hoodwinked.
Hardly a hotline… A long, long time ago, when I was very young, I wound up working with the NSA for about six months. I was supposed to have finished school and gone to work for them full time… But, I flaked when I discovered that I could get laid pretty easily (women seemed much more important than an education at the time).
I still keep in touch, and I have found that an awful lot of their work is not hard to find out about. They may have me hoodwinked, as my job was hoodwinking others. However, I don’t usually spend my time with any of my former co-workers talking about stuff that they shouldn’t be talking about. Most of it is about stuff that is out in the open, yet that most people don’t care about, or don’t know about (usually because it’s dead boring to most people).
And, I am not aware that I have stumbled onto any secret technology. Just two machines that I found to be freakishly smart. One of them did stuff that Google can probably now do (image recognition), and I am pretty sure that the other used something very similar to Mathematica. I was really impressed by them, but then I also did not know that things like Mathematica existed at the time. When I saw them, I was told by my handler that they were “Nothing compared to the monsters in the garage.”
Edit: Anyone may feel free to think that I am a nut-job if they wish. At this point, I have little to no proof of anything at all about my life due to the loss of everything I ever owned when my wife ran off. So, you may take my comments with a grain of salt until I am better known.