Miles Brundage recently pointed me to these quotes from Ed Fredkin, recorded in McCorduck (1979).

On speed of thought:

Say there are two artificial intelligences… When these machines want to talk to each other, my guess is they’ll get right next to each other so they can have very wide-band communication. You might recognize them as Sam and George, and you’ll walk up and knock on Sam and say, “Hi, Sam. What are you talking about?” What Sam will undoubtedly answer is, “Things in general,” because there’ll be no way for him to tell you. From the first knock until you finish the “t” in “about,” Sam probably will have said to George more utterances than have been uttered by all the people who have ever lived in all of their lives. I suspect there will be very little communication between machines and humans, because unless the machines condescend to talk to us about something that interests us, we’ll have no communication.
On whether advanced AIs will share our goals:
Eventually, no matter what we do there’ll be artificial intelligences with independent goals… There may be a way to postpone it. There may even be a way to avoid it, I don’t know. But it’s very hard to have a machine that’s a million times smarter than you as your slave.
On basement AI:
Today I can buy a machine for five dollars that’s better than one costing five million dollars twenty years ago… [One day] a paper boy with his route money will be able to save up in a month and buy such a machine. Thus anybody will have the necessary hardware to do AI pretty soon; it will be like a free commodity.
Now, under those circumstances, it’s possible that some mad genius, some Newton-like person, even a kid working by himself, could make tremendous progress. He could develop AI all by himself, relying on what others do, but building it in private rather than at a big institution like MIT. And the application of such a machine would be irresistible. How could you avoid this? You can’t license computers; that never was practical… If you made the use of electricity in any way a capital offense, worldwide and suddenly, and you did it immediately… then perhaps you could prevent this from happening. But anything short of that isn’t going to do it, because you won’t need a laboratory with big government funding very soon—that’s only a temporary phase we’re passing through. So what Joe Weizenbaum would like to do is impossible—it’s bringing time to a halt, and it can’t be done. What we can do is make the future more secure for human beings by being reasonable about how you bring AI about, and the only reasonable course is to work on this problem in a way that promises to be best for all of society, and not just for some singular mad genius.
On the risk of bad guys getting AI first:
What’s equally frightening is that the world has developed means for destroying itself in a lot of different ways, global ways. There could be a thermonuclear war or a new kind of biological hazard or what-have-you. That we’ll come through all this is possible but not probable unless a lot of people are consciously trying to avoid the disaster. McCarthy’s solution of asking an artificial intelligence what we should do presumes the good guys have it first. But the good guys might not. And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn’t get pulled under any circumstances, regardless of its real motives… I mean, it could toss us a few tidbits, like the cure for this and that.
I think there are ways to minimize all this, but the one thing we can’t do is to say well, let’s not work on it. Because someone, somewhere, will. The Russians certainly will—they’re working on it like crazy, and it’s not that they’re evil, it’s just that they also see that the guy who first develops a machine that can influence the world in a big way may be some mad scientist living in the mountains of Ecuador. And the only way we’d find out about some mad scientist doing artificial intelligence in the mountains of Ecuador is through another artificial intelligence doing the detection. Society as a whole must have the means to protect itself against such problems, and the means are the very same things we’re protecting ourselves against.
On trying to raise awareness of AI risks:
I can’t persuade anyone else in the field to worry this way… They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, “Well yes, you’re right, but it would be a great disservice to the world to mention all this.”...
...my colleagues only tell me to wait, not to make my pitch until it’s more obvious that we’ll have artificial intelligences. I think by then it’ll be too late. Once artificial intelligences start getting smart, they’re going to be very smart very fast. What’s taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI Lab may have immense power all of a sudden. It’s not that the United States might take over the world, it’s that the Stanford AI Lab might.
Later in that chapter, McCorduck quotes Marvin Minsky as saying:
...we have people who say we’ve got to solve problems of poverty and famine and so forth, and we shouldn’t be working on things like artificial intelligence… [But I think] we should have a certain number of people worrying about… whether artificial intelligence will be a huge disaster some day or be one of the best events in the universe...
...You might be the only one who can help with the disaster that’s going to happen [decades from now], and if you don’t prepare yourself, and instead just go off into some social welfare project right now, who will do it then? …Yes, I feel that there’s a great enterprise going on which is making the world of the future all right.
...which sounds eerily like a pitch for MIRI.
Unfortunately, Minsky did not then rush to create the MIT AI Safety Lab.
I don’t think that’s a legitimate “Unfortunately”. If you’re not inspired and an approach doesn’t pop into your head, throwing money at the problem until you get some grad students who couldn’t get a postdoc elsewhere is not necessarily going to be productive; it can indeed be counterproductive, and Minsky would legitimately know that.
Okay, then: “Unfortunately, Minsky was not then inspired, by a reasonable approach to the problem, to create the MIT AI Safety Lab.”