The AlphaGo system won the first game. I’m not a go player, but the commentary I’ve seen suggests the game was quite close until the very end.
Hypothesis 1: The cluster plays to maximize the odds of a win, not the margin of victory, and is exploiting a class of close wins that humans have a hard time with. Expect a sweep of narrow wins. (A toy illustration of this objective follows below.)
Hypothesis 2: The cluster and the champion are indeed evenly matched. Expect wins and losses. May imply that the game saturates at high levels of analysis, and that there is no such thing as a ‘superhuman’ go player because the best humans hit the point of diminishing returns.
*EDIT: evidence accumulating in favor of #1.
*EDIT2: final results suggest something between the two.
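Hypothesis 1 falls out of the objective function: a value estimate trained on win probability treats a half-point win and a twenty-point win identically, so such a player will happily trade away margin for certainty. A minimal sketch of that preference, where the candidate moves, probabilities, and margins are all invented for illustration:

```python
# Toy illustration: an agent maximizing P(win) ignores margin of victory.
# The moves, probabilities, and margins below are invented for illustration.

candidate_moves = {
    # move: (estimated win probability, expected winning margin in points)
    "safe_territory_move": (0.90, 1.5),   # nearly certain, but a tiny win
    "aggressive_invasion": (0.75, 25.0),  # often a crushing win, but riskier
}

def max_win_probability(moves):
    """Pick the move with the highest estimated P(win), ignoring margin."""
    return max(moves, key=lambda m: moves[m][0])

def max_expected_margin(moves):
    """Pick the move with the highest P(win) * margin (a more human-looking heuristic)."""
    return max(moves, key=lambda m: moves[m][0] * moves[m][1])

print(max_win_probability(candidate_moves))   # -> safe_territory_move
print(max_expected_margin(candidate_moves))   # -> aggressive_invasion
```

An agent like the first function sweeps its matches with unimpressive-looking margins; an agent like the second wins big but loses more often.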
Perhaps (2), because the AlphaGo team would have wanted a match as soon as they put a high probability on winning, and they can accurately estimate their program’s strength.
That wasn’t true for backgammon, chess, or checkers, to name three games where AI play has reached superhuman levels (checkers being actually solved), so why would it be true for Go?
Allegedly, Cho Chikun was once asked how many handicap stones he would want against God, i.e. against perfect play, and said “about four”.
I’m not sure what the corresponding figure would be for chess. (Nor actually what its “units” would be—chess doesn’t have a handicapping system as straightforward as go does, and I wonder whether Elo-like ratings go awry if one player is playing absolutely perfectly.)
You can actually calculate this now. Kenneth Regan has noted that computer chess engines are getting to the point where they are effectively perfect, and equivalent to one another; whatever the gap between them and the best human player ever is can then be converted into a material advantage. (Not that I know how to do this myself, but I assume anyone somewhat familiar with Elo ratings and chess engines can take the Elo difference and work out the corresponding material odds. Regan thinks perfect play sits somewhere around 3600 Elo. Apparently chess AIs can already offer at least “pawn and move, pawn, exchange, and four-move odds” and still beat US champions and grandmasters like Hikaru Nakamura.)
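For the Elo half of that conversion, the standard logistic expected-score formula suffices. A minimal sketch: the 3600 figure is Regan’s estimate quoted above, while the 2800 human rating and the Elo-per-pawn value are illustrative assumptions rather than measured quantities.

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo formula: expected score of player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

PERFECT_PLAY_ELO = 3600  # Regan's rough ceiling for perfect chess, per the comment above
TOP_HUMAN_ELO = 2800     # assumption: ballpark rating of the strongest humans

gap = PERFECT_PLAY_ELO - TOP_HUMAN_ELO
print(f"Engine's expected score per game: "
      f"{elo_expected_score(PERFECT_PLAY_ELO, TOP_HUMAN_ELO):.4f}")

# Assumption (NOT a measured value): if giving a pawn of material odds costs
# roughly 200 Elo of effective strength, the handicap a perfect player could
# give a top human works out to about:
ASSUMED_ELO_PER_PAWN = 200
print(f"Rough material handicap: {gap / ASSUMED_ELO_PER_PAWN:.1f} pawns")
```

An 800-point gap corresponds to an expected score of about 0.99 per game, which is the sense in which perfect play could afford to give material odds and still win.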
But maybe that was a little hard to answer, so let me put the question the other way: has there ever been a case where a strategy game played seriously and competitively by adult humans (i.e. not tic-tac-toe or blackjack) was brought to perfect or superhuman play levels by AI researchers, and the perfect or superhuman play turned out to be identical to, or so close to, the level of the top humans that a human could still win regularly?
A matchup like that could occur between humans and AI in online collectible card games. (I specify online because the rules are streamlined and mass competition is far more available.)
I also don’t know of any.
That strikes me as right on the money.
Is there any commentary by a Go pro available?
Michael Redmond (the only English-speaking top pro) is commentating on the stream.
This video has commentary from a Korean 9p.
I wonder whether, and how, that win will affect estimates within the AI community of when AGI will arrive.
I’ve already seen some goalpost-moving at Hacker News. I do hope this convinces some people, though.
People who engage in such goalpost-moving have already written down their bottom line, most likely because AI risk pattern-matches to the literary genre of science fiction. I wouldn’t expect such people to be swayed by any sort of empirical evidence short of the development of strong AGI itself. Any arguments they offer against strong AGI amount to little more than rationalization. (Of course, that says nothing about the strengths of the arguments themselves, which must be evaluated on their own merits.)
It is entirely possible to firmly believe in the inevitability of near-term AGI without subscribing to AI risk fears. I wouldn’t conflate the two.
Most of the arguments against AI risk I’ve seen (in popular media, that is) take the form of arguments against AGI, full stop. Naturally there exist more nuanced arguments (though personally I’ve yet to see any I find convincing), but I was referring to the arguments made by a specific part of the population, i.e. “people who engage in such goalpost-moving”, and in my (admittedly limited) experience, those sorts of people don’t usually put forth very deep arguments.
Here are some arguments against AI x-risk positions from expert sources rather than the popular media:
http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
http://time.com/3641921/dont-fear-artificial-intelligence/
In any case I think you have unnecessarily limited yourself to considering viewpoints expressed in media that tend to act as echo chambers. It’s not very interesting or relevant what a bunch of talking heads say with respect to a technical question.
The Time article doesn’t say anything interesting.
Goertzel’s article (the first link you posted) is worth reading, although about half of it doesn’t actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article I would enjoy talking about the details, particularly about his conception of AI architectures that aren’t goal-driven.
I updated my earlier comment to say “against AI x-risk positions” which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job at explaining why the AI x-risk arguments themselves are possibly flawed. They are simplistic in how they model AGIs, and therefore draw simple conclusions that don’t hold up in the real world.
And yes, I think more LWers and AI x-risk people should read and respond to Goertzel’s superintelligence article. I don’t agree with it 100%, but there are some valid points in there. And one doesn’t become effective by reading only viewpoints one agrees with...