After Go, what games should be next for DeepMind?
So chess and Go are both games of perfect information. How important is it for the next game that DeepMind is trained on to be a game of perfect information?
How would the AI perform on generalized versions of both chess and Go? What about games like poker and Magic the Gathering?
How realistic do you think it is to train DeepMind's systems against top-ranked players on perfect-information (full-map-reveal) versions of games like Starcraft, AOE2, Civ, Sins of a Solar Empire, Command and Conquer, and Total War, for example? (In all possible map settings, including ones people don't frequently play, e.g. starting at "high resource" levels.) How important is it for the AI to have a diverse set/library of user-created replays to test itself against, for example?
I'm also thinking… Shitty AI has always held back both RTS and TBS games. Is it possible that we're only a few years away from non-shitty AI in all RTS and TBS games? Or is the AI in many of these games too hard-coded in to actually matter? (E.g. I know some people who develop AI for AOE2, and there are issues with AI behavior in the game being hard-coded in, such as villagers deleting the building they're building if you simply attack them.)
Foldit.
Demis Hassabis has already announced that they’ll be working on a Starcraft bot in some interview.
This interview, dated yesterday, doesn’t go quite that far—he mentions Starcraft as a possibility, but explicitly says that they won’t necessarily pursue it.
The game I’d like to see an AI for is Diplomacy. ;)
Oh no! The AI would make us hate each other before betraying us.
Almost any game that their AI can play against itself is probably going to work. Except stuff like Pictionary where it’s really important how a human, specifically, is going to interpret something.
I know a little bit about training neural networks, and I think it would be plausible to train one on a corpus of well-played StarCraft games to give it an initial sense of what it’s supposed to do, and then having achieved that, let it play against itself a million times. But I don’t think there’s any need to let it watch how humans play. If it plays enough games against itself, it will internalize a perfectly sufficient sense of “the metagame”.
If we’re talking about AI in RTS games, I’ve always dreamed of the day when I can “give orders” in an RTS and have the units carry the orders out in a relatively common-sense way instead of needing to be micromanaged down to the level of who they’re individually shooting at.
It could become better than people at playing Pictionary by drawing images that are most likely to be correctly recognized, rather than the human way of translating the model in its head into a picture, and by analyzing which models are most likely to have produced a picture, rather than the human way of translating the picture into a model in its head. Unless you mean that playing against itself would make it diverge into its own language of pictures.
Although it might optimize in a direction that doesn't follow the spirit of the game, analogous to writing out the name of its task.
Actually that could be interesting—could it invent a language that is maximally efficient at communicating concepts?
To your last one, you might enjoy a MOBA where individual players have only information about stuff in their line of sight, but there’s an extra player whose job it is to see everything and give “orders”. I think there was one like that...
Demis Hassabis mentioned StarCraft as something they might want to do next. Video.
RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.
I don’t expect to see highly sophisticated AI in games (at least adversarial, battle-it-out games) because there is no point. Games have to be fun which means that the goal of the AI is to gracefully lose to the human player after making him exert some effort.
You might be interested in Angband Borg.
I'm not sure about that. A common complaint about these kinds of games is that the AIs blatantly cheat, especially on higher difficulty levels. I could very well see a market for an AI that could give the human a challenge without cheating.
Several years ago, Backgammon AI was at the point where it could absolutely demolish humans without cheating. My impression is that people hated it, and even if they rolled the dice for the AI and input the results themselves they were pretty sure that it had to be cheating somehow.
May have been a vocal minority. You get some people incorrectly complaining about AI cheating in any game that utilizes randomness (Civilization and the new XCOMs are two examples I know of); usually this leads to somebody running a series of tests or decompiling the source code to show people that no, the die rolls are actually fair or (as is commonly the case) actually actively biased in the human player’s favor.
This never stops some people from complaining nonetheless, but a lot of others find the evidence convincing enough and just chalk it up to their own biases (and are less likely to suspect cheating when they play the next game that has random elements).
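A minimal sketch of the kind of fairness check people run on these games. The 75% figure and sample size here are made up for illustration; a real test would use displayed probabilities and hit/miss outcomes logged from actual play:

```python
import random

def empirical_rate(outcomes):
    """Fraction of logged attacks that actually hit."""
    return sum(outcomes) / len(outcomes)

# Simulate 10,000 attacks that the UI claims land 75% of the time.
random.seed(0)
DISPLAYED_P = 0.75
outcomes = [random.random() < DISPLAYED_P for _ in range(10_000)]

rate = empirical_rate(outcomes)
# Under fair rolls the empirical rate should sit within a few standard
# errors of the displayed probability: se = sqrt(p * (1 - p) / n).
se = (DISPLAYED_P * (1 - DISPLAYED_P) / len(outcomes)) ** 0.5
print(abs(rate - DISPLAYED_P) < 4 * se)
```

If the game secretly biases rolls in the player's favor, the empirical rate lands well above the displayed probability instead.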
The Civ 5 AI does cheat insofar as it doesn’t have to deal with the fog of war, IIRC.
The XCOM AI seems to cheat because the game doesn't report the actual probabilities.
Not just that, especially on higher difficulty levels.
Right, I meant that Civ doesn’t cheat when it comes to die rolls—e.g. if it displays a 75% chance for the player to win a battle, then the probability really is at least 75%.
It does cheat in a number of other ways.
That’s why I said “AI that could give the human a challenge” not “AI that would demolish a human”. Better yet, have the game difficulty setting actually control the intelligence of the AI, rather than how much the AI cheats.
What that complaint usually means is “The AI is too hard, I would like easier wins”.
And you think the game industry is blind and does not see that market?
That may be true in some cases, but in many other cases the AI really does cheat, and it cheats because it’s not smart enough to offer a challenge to good players without cheating.
My answer did not imply that the AI doesn’t cheat :-/
The interesting questions here involve the perception of fairness and the illusion of competing with a more-or-less equal in single-player games. When people say the AI cheats they mean that it's not bound by the rules applied to the human player, but why should it be? Consider MMORPGs: do mobs cheat, e.g. by using abilities that the player does not have? Do raid bosses cheat by having a gazillion HP, gaining temporary invulnerability, spawning adds, and generally being a nuisance?
In MMORPGs, the game and setting are usually asymmetrical by design: there's no assumption that the human knight should have the same number of hit points as the ancient dragon, and it would actually violate the logic of the setting if that were the case.
The games where people do complain about AI cheating tend to put the enemies in a more symmetrical role—e.g. in something like Civilization or Starcraft, the game designers work to actively maintain an illusion that the AI players are basically just like human players and operating under the same rules.
If you break that illusion too blatantly, players will be reasonably annoyed, because they feel like the game is telling them one thing when the truth is actually different.
This may even have in-game ramifications: e.g. if I’m playing against a human opponent in a multiplayer match, I might want to keep my units hidden from him so that he doesn’t know what I’m up to, but this is pointless against an AI opponent that sees the entire map all the time. (IIRC, in the original Red Alert, the Soviet player could construct buildings that recreated the shroud of war in areas that the enemy had already explored—and which were totally useless in single player, since the AI was never subject to the shroud of war!) In that case it’s not just the player feeling cheated, it actively screws up the player’s idea of what exactly would be a good idea against the AI.
And yet, humans currently have the edge in Brood War. Humans are probably doomed once StarCraft AIs get AlphaGo-level decision-making, but flawless micro—even on top of flawless* macro—won’t help you if you only have zealots when your opponent does a muta switch. (Zealots can only attack ground and mutalisks fly, so zealots can’t attack mutalisks; mutalisks are also faster than zealots.)
*By flawless, I mean macro doesn’t falter because of micro elsewhere; often, even at the highest levels, players won’t build new units because they’re too busy controlling a big engagement or heavily multitasking (dropping at one point, defending a poke elsewhere, etc). If you look at it broadly, making the correct units is part of macro, but that’s not what I’m talking about when I say flawless macro.
Zealots/mutas/dragoons/hydralisks is just a standard rock/paper/scissors game-theory thing, and it shouldn't be too hard to calculate an approximate Nash equilibrium. The problem is that there is micro, macro, game theory, imperfect information, and an AI has to tie all these different aspects together (as well as perhaps some perceptual chunking to reduce the complexity), so it's a real challenge for combining different cognitive modules. This is too close to AGI for comfort, IMO.
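The equilibrium computation really is the easy part for the matrix-game core of that dynamic. A minimal sketch using regret matching, whose average strategy converges to a Nash equilibrium in two-player zero-sum matrix games; the payoff matrix here is plain rock/paper/scissors and the iteration count is arbitrary, whereas a real unit-counter matrix would have to be estimated from game data:

```python
# Regret matching via self-play on rock/paper/scissors.
# The unique Nash equilibrium is the uniform mix (1/3 each).
N = 3
PAYOFF = [[0, -1, 1],    # rock     vs rock/paper/scissors
          [1, 0, -1],    # paper
          [-1, 1, 0]]    # scissors

def strategy_from(regret):
    """Play each action in proportion to its positive cumulative regret."""
    positives = [max(r, 0.0) for r in regret]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1.0 / N] * N

def train(iterations=20_000):
    # Start player 0 biased toward rock so the dynamics are visible.
    regret = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strat_sum = [[0.0] * N, [0.0] * N]
    for _ in range(iterations):
        s0, s1 = strategy_from(regret[0]), strategy_from(regret[1])
        for a in range(N):
            strat_sum[0][a] += s0[a]
            strat_sum[1][a] += s1[a]
        # Expected payoff of each pure action against the opponent's mix.
        u0 = [sum(s1[b] * PAYOFF[a][b] for b in range(N)) for a in range(N)]
        u1 = [sum(s0[a] * -PAYOFF[a][b] for a in range(N)) for b in range(N)]
        v0 = sum(s0[a] * u0[a] for a in range(N))
        v1 = sum(s1[b] * u1[b] for b in range(N))
        for a in range(N):
            regret[0][a] += u0[a] - v0
            regret[1][a] += u1[a] - v1
    # Normalize the accumulated strategies to get the average strategy.
    return [[x / sum(row) for x in row] for row in strat_sum]

avg = train()
print(avg[0])  # all three entries approach 1/3
```

The hard part, as the parent says, is not this computation but wiring it into everything else the game demands.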
Pretty sure it’s still comfortably narrow AI. People used to think that chess required AGI-levels of intelligence, too.
Nobody said that flawless micro is sufficient and figuring out the rock/paper/scissors dynamic is not hard. Plus, given that it has enough “attention” for everything, an AI is likely to keep a dancing scout or two around the enemy base and see those mutalisks early enough.
The problem is that most RTS games stand no chance against me or any other half-decent player, unless they are cheating. And when they cheat, the game is very much brute force vs. strategy.
I've been playing "Ultimate General: Gettysburg", which was touted as having put a lot of effort into AI, and it paid off: when I play it on the highest difficulty settings, I can still win convincingly, but it does feel like I am playing an incompetent human rather than an artificial stupidity. It's far more enjoyable to play.
Sure. Consider that the game has to run on your sucky home computer (or, heaven forbid, a console), most likely without a GPU. The strategy/tactics/behaviour code has to share CPU cycles with a large variety of things, including uninteresting but vital functions like pathfinding, and it has to make its decisions within the tick time, which is a fraction of a second. AND many players prefer the AI to be a pushover, anyway.
I think gaming machines generally do have GPUs…
Of course, the GPU is also running the graphics, but the computer doesn't need to play well enough to beat world champions. I'm pretty sure that AlphaGo running on one CPU+GPU could play at a strong amateur level.
Of course, but mass-market games like Starcraft are designed to perform decently on the run-of-the-mill machines with integrated graphics.
The micro capabilities of the AI could be limited so they’re more or less equivalent to a human pro gamer’s, forcing the AI to win via build choice and tactics.
It's going to be a mess. Even if you, say, limit the AI's clicks-per-minute rate, it still has serious advantages. It knows exactly how long these units can stay in range of enemy artillery and still be able to pull back and recover. It knows whether those units will arrive in time to reinforce the defense or will be too late and should do something else instead.
Build choice is not all that complicated and with tactics you run right into micro.
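For what it's worth, the click-rate cap itself is the mechanically easy part. A toy sketch of a rolling-window actions-per-minute limiter; the 300 APM figure and the caller-supplied timestamp are assumptions, and the hard problems above (superhuman timing knowledge) are untouched by it:

```python
import collections

class APMLimiter:
    """Reject actions once a trailing-window actions-per-minute cap is hit.

    `now` is a timestamp in seconds supplied by the caller, which keeps
    the sketch deterministic (a real bot would pass in the game clock).
    """
    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.times = collections.deque()

    def try_act(self, now):
        # Forget actions that fell out of the trailing 60-second window.
        while self.times and now - self.times[0] >= 60.0:
            self.times.popleft()
        if len(self.times) >= self.max_apm:
            return False  # over budget: the action is suppressed
        self.times.append(now)
        return True

limiter = APMLimiter(max_apm=300)  # roughly a strong human pro's rate
# Attempt 1000 actions in a 10-second burst; only 300 should go through.
allowed = sum(limiter.try_act(t * 0.01) for t in range(1000))
print(allowed)  # 300
```

The limiter only bounds how often the AI acts, not how well each action is timed, which is exactly the parent's point.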
Make the AI control a robot that looks at a physical screen and operates a physical mouse. Then it will be fair. ;)
The point of the exercise is NOT to devise a handicapping system which will produce a fair match.
Human-like uncertainty could be inserted into the AI’s knowledge of those things, but yeah, as you say, it’s going to be a mess. Probably best to pick another kind of game to beat humans at.
Or the game could be played on its slowest mode.
RTS is special because it’s realtime. An AI that’s only ‘good enough’ in terms of strategy or tactics could still win by being far better at parallelizing and reaction speed. The bigger the game world, the more this is true.
Human Starcraft players need to have a basic skill of taking hundreds of actions per minute before they can bring their superior strategy or tactics into play.
Something like this?
Most games are real-time: FPSes, MMORPGs, MOBAs, etc.
Right.
I just meant that if it wasn’t realtime but turn-based, AIs would lose their advantage.
And in all of these, AFAIK, when AI is better than humans, it’s because it can do things humans simply can’t: perfect aiming and movement (of the kind that’s considered cheating when humans use software aids to achieve it in FPSs), coordinating a team that can’t see each other because sharing info digitally over the ‘chat’ channel is very efficient, remembering perfectly a very complex maze, etc. Micromanagement is another of these.
That computers are much better at some things than humans isn’t a surprise. It’s very important, but it’s hard to compare it directly to games like Go or chess.
Humans also can’t run massive searches on deep trees or hold a huge library of opening moves in their memory.
AIs solve problems differently from humans. Software is much better at some things (from micromanagement to aimbotting to doing things quickly) and is much worse, so far, at other things. The interesting place is the edge, where software and human capabilities are currently of the same magnitude. That's why aimbots are boring and a machine playing Go is oh so cool.
Is Alphabet stock a good proxy for owning a piece of DeepMind? Alphabet hasn’t gained much at all since AlphaGo started winning. Maybe a few percent, but within the normal fluctuations. Of course this might be because all the smart money knew AlphaGo was going to win.
If there was any movement in Alphabet, it should've been in January when the news came out. Markets move on unexpected events, not anticipated ones, and judging from the various betting markets an AlphaGo victory was not that surprising. The victory also didn't mean much, because the widely held opinion was that AlphaGo could be expected to improve steadily over time, so even if Lee Sedol won, he would lose in the coming months (I believe Sedol said something like that before the games started, and Ke Jie has also revised his earlier comments and now says that he would lose to AlphaGo in a few months too). In that case the meaning of the match reduces to a slight shift in the estimated improvement rate, along the lines of "AlphaGo didn't improve quite as fast as DeepMind expected", which is not something meaningful to Google's bottom line.
(The real point of the match was to prove a point to the muggles and AI-deniers and get good publicity, of course.)
It is not a good proxy. Deepmind is a small team and there are many more teams within Alphabet doing machine learning. Remember that the market cap of Goog is $500 billion. (Although if one wants to invest in AI in general I think it is a cheap stock)
I propose a game where there are resources to be identified (using these DNN computer vision algorithms), collected, and deposited at drop-off points. To advance embodied cognition, players get small robot drones of some sort, perhaps like a roomba with a robot arm attached.
The resources include dirty socks and plates, and the game is called “tidy skeptical_lurker’s house, because he can’t be bothered”
Just outsource it to Pentagon.
Why isn’t it obvious?
I know what I’d do.
Run the algorithm on the Bitcoin market, and then on the stock market.
That’s pretty darn far from perfect information.
Even so, I highly doubt the best human traders are anywhere close to optimal. It’d be interesting to see how much better a machine-learning approach would fare.
Many of the successful trading firms are powered by ML, of both the price-watching and NLP news-watching variety. I don’t think Deepmind has a comparative advantage against them, but I do expect that people at those firms are trying out deep learning approaches.
Here you go
As if there aren’t tons of other people using neural nets on the stock market.
Yeah, that.
http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/
https://www.reddit.com/r/quotes/comments/1e4mh9/be_like_a_finger_pointing_at_the_moon_but_do_not/
They've successfully trained related AIs to play retro games, I believe including some with imperfect information.
Links to code etc. are in the YouTube video description.
https://www.youtube.com/watch?v=V1eYniJ0Rnk
The video games are far more interesting than just violating perfect information: the AI has to figure out the rules of the game.
(Actually, they probably don’t violate perfect information, which refers to the two players having access to different information and only makes sense when you think of both players as optimizing agents.)
Computers can play one-on-one Limit Hold ‘em pretty close to “perfectly”; a very good approximation to the Nash equilibrium strategy has been computed, and computers can follow it. The standard tournament game of no-limit 8-player Hold ’Em is a lot more computationally intensive to solve, though, and I don’t think computers are especially good at it.
What about chess? See if a DNN based AI beats a conventional chess AI running on the same processor power. Many people are interested in chess, and if it could push forwards chess theory, then that would be very interesting.
Why not check out the AGI capabilities of AlphaGo… It might be possible to train it on chess without architectural modifications. Each chessboard square could be modelled by a 2x2 three-state Go field storing information about the chess piece type. How good can AlphaGo get? How much of its Go-playing ability will it lose?
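The square-to-block encoding suggested above is easy to sketch. This toy version (the piece labels and block layout are my own choices for illustration, not anything AlphaGo actually supports) maps each of the 13 possible square states (empty, or one of six piece types in two colours) to a distinct 2x2 block of three-state Go points; 3**4 = 81 available codes comfortably cover 13 states:

```python
# 0 = empty point, 1 = black stone, 2 = white stone on each Go point.
PIECES = ["."] + [c + p for c in "wb" for p in "PNBRQK"]  # 13 states

def square_code(index):
    """Encode a state index as four base-3 digits, one per Go point."""
    digits = []
    for _ in range(4):
        digits.append(index % 3)
        index //= 3
    return digits

ENCODING = {piece: square_code(i) for i, piece in enumerate(PIECES)}

def encode_board(board):
    """Turn an 8x8 board of piece strings into a 16x16 grid of Go points."""
    grid = [[0] * 16 for _ in range(16)]
    for r in range(8):
        for c in range(8):
            block = ENCODING[board[r][c]]
            grid[2 * r][2 * c], grid[2 * r][2 * c + 1] = block[0], block[1]
            grid[2 * r + 1][2 * c], grid[2 * r + 1][2 * c + 1] = block[2], block[3]
    return grid

empty = [["."] * 8 for _ in range(8)]
grid = encode_board(empty)
print(len(grid), len(grid[0]))  # 16 16
```

Of course, whether a Go-trained value network could make any sense of such positions is exactly the open question the comment raises.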
This isn’t at all the same thing, but it might amuse you: Gess the game.
http://arxiv.org/abs/1509.01549 is relevant.
Contract Bridge is one of the big human strategy games—how good are AIs at that?
That isn’t a formally specified game. For example, it is illegal to make up complicated (“synthetic”) bidding systems.
Here is something I’d like to see: You give the machine the formally specified ruleset of a game (go, chess, etc), wait while the reinforcement learning does its job, and out comes a world-class computer player.
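For games small enough to enumerate, the "rules in, player out" pipeline already exists in a trivial form: exhaustive game-tree search, which stands in here for the reinforcement-learning step that would be needed for go or chess. A sketch on the toy subtraction game "21" (remove 1-3 stones per turn; whoever takes the last stone wins):

```python
from functools import lru_cache

# The formally specified ruleset: legal moves and the terminal condition.
def legal_moves(pile):
    return range(1, min(3, pile) + 1)

@lru_cache(maxsize=None)
def value(pile):
    """+1 if the player to move wins with best play, -1 otherwise."""
    if pile == 0:
        return -1  # the previous player took the last stone and won
    return max(-value(pile - m) for m in legal_moves(pile))

def best_move(pile):
    """A perfect player derived purely from the rules."""
    return max(legal_moves(pile), key=lambda m: -value(pile - m))

print(value(21), best_move(21))  # 1 1  (take 1, leaving a multiple of 4)
```

The appeal of the general-purpose version is doing the equivalent for games whose trees can't be enumerated, which is what AlphaGo's combination of search and learned evaluation points toward.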
How about Risk?
Oooh. If we're going to look at boardgames, the best one ever designed (IMO, of course) is: 1817
This I would love to see.
Collectible card games are interesting to me. You get the imperfect information of poker, as well as a deckbuilding component that it seems like the AI should be good at (build a bunch of decks, play itself a few million times).
Personally, I'm waiting for an AI that can outperform experts in Fantasy Football.
No small feat either. The sheer amount of data that needs to be processed is tremendous (think about all of the physical possibilities across all the football teams/games). Humans have the benefit of heuristics. Chess and Go are one thing. But being able to draft a winning fantasy team is a lot harder than it seems.
I would be very wary of that prediction. Do you know how the best AIs perform at Fantasy Football?
To my knowledge there hasn’t been much involvement in AI Fantasy Football. However, I would imagine that existing AIs perform fairly poorly. They could probably beat your average player, but not a seasoned football fan who religiously follows the entire league.
I could be wrong though. If there are any examples of AIs performing well at Fantasy Football I’d love to see them!
People who create AIs for Fantasy Football that perform fairly poorly are quite free to be open about their AI. On the other hand, why should someone who has a well-performing Fantasy Football AI be public about that fact? That person could lose a lot of money by being open.
Random link: AI in Minecraft.
I’ll be scared, when they do Counter Strike.