I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess.
Douglas Hofstadter being one on the wrong side: well, to be exact, he predicted (in his book GEB) that any computer that could play superhuman chess would necessarily have certain human qualities, e.g., if you ask it to play chess, it might reply, “I’m bored with chess; let’s talk about poetry!” which IMHO is just as wrong as predicting that computers would never beat the best human players.
I thought you were exaggerating there, but I looked it up in my copy and he really did say that, pp. 684-686:
To conclude this Chapter, I would like to present ten “Questions and Speculations” about AI. I would not make so bold as to call them “Answers”—these are my personal opinions. They may well change in some ways, as I learn more and as AI develops more...
Question: Will there be chess programs that can beat anyone?
Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.” That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity—that is, a programmed ability to “jump out of the system”, so to speak—at least roughly to the extent that we have that ability. Once that is present, you can’t contain the program; it’s gone beyond that certain critical point, and you just have to face the facts of what you’ve wrought.
I wonder if he did change his opinion on computer chess before Deep Blue, and if so, how long before? I found two relevant bits by him, but they don’t really answer the question; they sound largely like excuse-making to my ears, as if he was still fairly surprised it happened even as it was happening. From February 1996:
Several cognitive scientists said Deep Blue’s victory in the opening game of the recent match told more about chess than about intelligence. “It was a watershed event, but it doesn’t have to do with computers becoming intelligent,” said Douglas Hofstadter, a professor of computer science at Indiana University and author of several books about human intelligence, including “Godel, Escher, Bach,” which won a Pulitzer Prize in 1980, with its witty argument about the connecting threads of intellect in various fields of expression. “They’re just overtaking humans in certain intellectual activities that we thought required intelligence. My God, I used to think chess required thought. Now, I realize it doesn’t. It doesn’t mean Kasparov isn’t a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings.”...In “Godel, Escher, Bach” he held chess-playing to be a creative endeavor with the unrestrained threshold of excellence that pertains to arts like musical composition or literature. Now, he says, the computer gains of the last decade have persuaded him that chess is not as lofty an intellectual endeavor as music and writing; they require a soul. “I think chess is cerebral and intellectual,” he said, “but it doesn’t have deep emotional qualities to it, mortality, resignation, joy, all the things that music deals with. I’d put poetry and literature up there, too. If music or literature were created at an artistic level by a computer, I would feel this is a terrible thing.”
And from January 2007:

Kelly said to me, “Doug, why did you not talk about the singularity and things like that in your book?” And I said, “Frankly, because it sort of disgusts me, but also because I just don’t want to deal with science-fiction scenarios.” I’m not talking about what’s going to happen someday in the future; I’m not talking about decades or thousands of years in the future...And I don’t have any real predictions as to when or if this is going to come about. I think there’s some chance that some of what these people are saying is going to come about. When, I don’t know. I wouldn’t have predicted myself that the world chess champion would be defeated by a rather boring kind of chess program architecture, but it doesn’t matter, it still did it. Nor would I have expected that a car would drive itself across the Nevada desert using laser rangefinders and television cameras and GPS and fancy computer programs. I wouldn’t have guessed that that was going to happen when it happened. It’s happening a little faster than I would have thought, and it does suggest that there may be some truth to the idea that Moore’s Law [predicting a steady increase in computing power per unit cost] and all these other things are allowing us to develop things that have some things in common with our minds. I don’t see anything yet that really resembles a human mind whatsoever. The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself than to a human mind, and certainly the computer program that plays chess doesn’t have any intelligence or anything like human thoughts.
To be fair, people expected a chess-playing computer to play chess the same way a human does, thinking about the board abstractly, learning from experience, and all that. We still haven’t accomplished that. Chess programs work by brute-force search, evaluating enormous numbers of possible positions many moves ahead, which seemed impossible before computers got exponentially faster. And even then, Deep Blue was a specialized supercomputer and had to use a bunch of little tricks and optimizations to get it just barely past human grandmaster level.
I was going to point that out too, as I think it demonstrates an important lesson: they were still wrong.
Almost all of their reasoning was correct, but they still reached the wrong result because they looked at solutions too narrowly. It’s quite possible that many of the objections to AI, rejuvenation, and cryonics are correct, but if there’s another path the objectors aren’t considering, we could still end up with the same result. Just as a chess program doesn’t think like a human but can still beat one, an airplane doesn’t fly like a bird but can still fly.
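The brute-force search described above can be sketched in a few lines: minimax search over a game tree, with alpha-beta pruning as one of the “little tricks” that lets engines skip branches a rational opponent would never allow. This is a toy illustration (a tree of static scores stands in for real chess positions), not Deep Blue’s actual code:

```python
# Minimal sketch of game-tree search: minimax with alpha-beta pruning,
# the general technique behind classical chess engines. A nested list
# of static leaf scores stands in for real positions and an evaluation
# function; everything here is illustrative.

def minimax(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, searched to `depth` plies."""
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: static evaluation of the position
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent will never allow this line,
                break           # so prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A 2-ply toy tree: the maximizer picks a branch, then the minimizer
# picks the worst (for us) leaf within it.
tree = [[3, 5], [6, 9], [1, 2]]
print(minimax(tree, 2, float("-inf"), float("inf"), True))  # prints 6
```

The point of the sketch is that nothing in it resembles human thought: it is exhaustive lookahead plus a cutoff rule, and making it superhuman at chess was mostly a matter of hardware speed and engineering tricks, exactly the “another path” the comment above describes.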
I suspect the thermostat is closer to the human mind than his conception of the human mind is.