Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
This claim can be broken into two separate parts:
1. Will we have human-level AI?
2. Once we have human-level AI, will it develop to become superhuman AI?
For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation somewhere around 2030-2050, at least assuming that the field gets enough funding and that Moore’s law keeps up. Even if there isn’t much actual interest in whole brain emulation itself, improving scanning tools are likely to revolutionize neuroscience. Indeed, respected neuroscientists are already talking about reverse-engineering the brain as being within reach. If we succeed in reverse-engineering the brain, then AI is a natural result.
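To make the flavor of this kind of extrapolation concrete, here’s a toy Moore’s-law calculation (my own sketch with placeholder numbers, not Sandberg & Bostrom’s actual model, which is far more detailed):

```python
import math

def year_compute_available(required_flops, current_flops,
                           current_year=2010, doubling_time_years=1.5):
    """Year when required_flops becomes available, assuming compute
    doubles every doubling_time_years (a crude Moore's-law model)."""
    doublings = math.log2(required_flops / current_flops)
    return current_year + doublings * doubling_time_years

# Hypothetical: if emulation at some level of detail needed ~1e18 FLOPS
# and ~1e15 FLOPS were available today, we'd cross the line around:
print(round(year_compute_available(1e18, 1e15)))  # -> 2025, under these assumptions
```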
As for 2, as Eliezer has mentioned, this is pretty much an antiprediction. Human minds are one particular type of architecture, running on one particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn’t be drastically improved upon. We already know that we’re insanely biased, to the point where those biases kill people and collapse national economies. And computing power keeps going up: if current trends hold, then in, say, 20 years we could have computers that take only three seconds to think 25 years’ worth of human thoughts.
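For concreteness, here is the speedup factor that claim implies (a back-of-the-envelope check of my own, not a figure from any source):

```python
# Implied speedup: 25 subjective years of thought in 3 wall-clock seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
subjective_seconds = 25 * SECONDS_PER_YEAR   # ~7.9e8 seconds of thought
speedup = subjective_seconds / 3             # ~2.6e8-fold speedup
print(f"implied speedup: {speedup:.1e}x")
```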
Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).
Molecular nanotechnology is not needed. As our society grows more and more dependent on the Internet, plain old-fashioned hacking and social engineering will probably become more than sufficient for taking over the world. Lethal micro-organisms can AFAIK be manufactured via the Internet even today.
The likelihood of exponential growth versus a slow development over many centuries.
Hardware growth alone would be enough to ensure that we’ll be unable to keep up with the computers. And even if Moore’s law stopped holding and we were stuck at a fixed level of technology, there would still be many other ways for an AI to gain an advantage over humans.
That Eliezer Yudkowsky is the right and only person who should be leading, and SIAI the right and only institution that should be working, to soften the above.
Eliezer Yudkowsky is hardly the only person involved in SIAI’s leadership. Michael Vassar is the current president, and the Visiting Fellows program, for instance, provides a constant influx of fresh views on the topics involved.
As others have pointed out, SIAI is currently the only organization that’s really working on this. It’s not inconceivable that another organization could do better, but SIAI is currently starting to reach the critical mass necessary to really have an impact: David Chalmers joining in on the discussion, for example, or the previously mentioned Visiting Fellows program motivating various people to start their own projects. This year’s ECAP conference will feature five papers from SIAI-affiliated folks, and so on.
Any competing organization, especially one competing for the same donor base and funds, should have a well-argued case for what it can do that SIAI can’t or won’t. While SIAI is starting to get big, I don’t think its donor base is yet large enough to effectively support two organizations working toward the same goal. To do any good, another group would need to draw its primary funding from some other source, as the Future of Humanity Institute does.
Lethal micro-organisms can AFAIK be manufactured via the Internet even today.
Do you have a citation for this? You can get certain biochemical compounds synthesized for you (there’s a fair bit of a market for DNA synthesis) but that’s pretty far from synthesizing microorganisms.
Right, sorry. I believe the claim (which I heard from a biologist) was that you can get DNA synthesized for you, and that in principle an AI, or anyone who knew enough, could use those services to create their own viruses or bacteria (though no human yet has the required knowledge). I’ll e-mail the person I think I heard it from and ask for a clarification.