Poker-playing AI program first to beat pros at no-limit Texas hold 'em

DeepStack uses artificial intuition to outplay human professionals in a historic first that has implications far beyond the poker table.

For the first time ever, an artificial intelligence program has beaten human poker professionals at heads-up, no-limit Texas hold 'em.

It's a historic result in artificial intelligence that has implications far beyond the poker table, from helping make more robust medical treatment recommendations to developing better strategic defence planning.

DeepStack, created by the University of Alberta's Computer Poker Research Group, bridges the gap between approaches used for games of perfect information (such as checkers, chess, and Go, where both players can see everything on the board) and those used for imperfect-information games. It does so by reasoning while it plays, using "intuition" honed through deep learning to reassess its strategy with each decision.

"Poker has been a long-standing challenge problem in artificial intelligence," said computing scientist Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing."

Artificial intelligence researchers have long used parlour games to test their theories because the games are mathematical models that describe how decision-makers interact.

"We need new AI techniques that can handle cases where decision-makers have different perspectives," said Bowling.

"Think of any real-world problem. We all have a slightly different perspective of what's going on, much like each player only knowing their own cards in a game of poker."

This latest discovery builds on previous research findings about artificial intelligence and imperfect-information games stretching back to the creation of the U of A Computer Poker Research Group in 1996. Bowling, who became the group's principal investigator in 2006, and his colleagues developed Polaris, which beat top poker players at heads-up limit Texas hold 'em in 2008. They went on to solve heads-up limit hold 'em with Cepheus in 2015.

DeepStack brings the ability to reason about each situation as it arises to imperfect-information games, using a technique called continual re-solving. This allows DeepStack to determine the correct strategy for a particular poker situation by using its "intuition" to evaluate how the game might play out in the near future, without thinking about the entire game.
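To make the idea concrete, the sketch below is a deliberately simplified illustration written for this article, not code from DeepStack itself: at each decision point it searches only a small local game tree and substitutes a learned value estimate for everything beyond a fixed depth. The `Situation` class, the crude heuristic evaluator, and the minimax-style backup are all hypothetical stand-ins; DeepStack actually re-solves over belief distributions with counterfactual regret minimization rather than minimax.

```python
"""Toy sketch of depth-limited re-solving with a learned evaluator.

Not DeepStack's algorithm; it only illustrates the shape of the idea:
decide the next move by searching a small local tree and replacing
everything beyond the depth limit with a value estimate ("intuition").
"""

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Situation:
    """A deliberately tiny, made-up game state (not real poker)."""
    pot: float
    to_act: int                     # 0 = us, 1 = opponent
    history: tuple = ()

    def legal_actions(self) -> List[str]:
        return ["fold", "call", "raise"]

    def apply(self, action: str) -> "Situation":
        bet = {"fold": 0.0, "call": 1.0, "raise": 2.0}[action]
        return Situation(pot=self.pot + bet,
                         to_act=1 - self.to_act,
                         history=self.history + (action,))

    def is_terminal(self) -> bool:
        return bool(self.history) and self.history[-1] == "fold"


def resolve(state: Situation,
            value_fn: Callable[[Situation], float],
            depth: int) -> float:
    """Depth-limited lookahead: beyond `depth`, trust the learned evaluator."""
    if state.is_terminal():
        # Whoever just folded forfeits the pot (toy payoff, from our view).
        return -state.pot if state.to_act == 1 else state.pot
    if depth == 0:
        return value_fn(state)      # "intuition" replaces deeper search
    values = [resolve(state.apply(a), value_fn, depth - 1)
              for a in state.legal_actions()]
    # Our node: maximise; opponent node: assume it minimises our value.
    return max(values) if state.to_act == 0 else min(values)


def choose_action(state: Situation,
                  value_fn: Callable[[Situation], float],
                  depth: int = 3) -> str:
    """Re-solve a fresh local subgame at every decision point."""
    scored = [(resolve(state.apply(a), value_fn, depth - 1), a)
              for a in state.legal_actions()]
    return max(scored)[1]


if __name__ == "__main__":
    # Stand-in for the trained deep network: a crude heuristic evaluator.
    intuition = lambda s: 0.1 * s.pot
    print(choose_action(Situation(pot=3.0, to_act=0), intuition))
```

The point of the sketch is the shape of the computation: every call to `choose_action` builds a fresh, shallow lookahead and leans on the value function where the search is cut off, instead of ever expanding the full game.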

"We train our system to learn the value of situations," said Bowling. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

Thinking about each situation as it arises is important for complex problems like heads-up no-limit hold 'em, which has vastly more unique situations than there are atoms in the universe, largely due to players' ability to wager different amounts, including the dramatic "all-in."

Despite the game's complexity, DeepStack takes action at human speed, with an average of only three seconds of "thinking" time, and runs on a simple gaming laptop.

To test the approach, DeepStack played last December against a pool of professional poker players recruited by the International Federation of Poker. Thirty-three players from 17 countries were asked to play a 3,000-hand match over a period of four weeks. DeepStack beat each of the 11 players who finished their match, with only one of those wins falling outside the margin of statistical significance.

The study, "DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker," is published online in the journal Science.