Thinking machines – A history of AI in Chess

Chess, a game long considered a bastion of human intellect, has become a fascinating battleground in the pursuit of artificial intelligence (AI). While the Turing Test serves as a benchmark for general machine intelligence, chess offers a more specific challenge: creating programs that can rival, or even surpass, the world’s best human players. This pursuit has been remarkably successful, with AI programs now achieving grandmaster-level play. The story behind this achievement, however, is more nuanced than a simple triumph of brute-force computing power.

Early Aspirations: Brute Force vs. Strategic Thinking

The connection between chess and intelligence has deep roots. Alan Turing proposed chess as a testbed for machine intelligence in his seminal 1950 paper, and pioneering computer scientists like Claude Shannon envisioned programs that could not only play chess but also grasp its underlying strategic complexities [Shannon50].

Shannon proposed two key approaches:

  • Type-A Programs: These programs rely on brute force, evaluating a vast number of possible moves and employing a min-max search algorithm to identify the most favorable outcome (a minimal sketch follows this list).
  • Type-B Programs: These programs utilize specialized heuristics and strategic AI, focusing on a smaller set of promising moves based on pre-programmed knowledge and decision-making strategies.
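
To make the Type-A idea concrete, here is a minimal min-max sketch in Python. The game interface it calls (legal_moves, apply_move, evaluate) is hypothetical and would have to be supplied by a real chess implementation:

```python
# A bare-bones Shannon Type-A search: explore every move to a fixed
# depth, then score leaf positions with a static evaluation function.
# legal_moves, apply_move and evaluate are hypothetical placeholders.
def minimax(position, depth, maximizing):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score at the horizon
    scores = [minimax(apply_move(position, m), depth - 1, not maximizing)
              for m in moves]
    # Each side picks the continuation best for itself, assuming the
    # opponent answers with the continuation worst for us.
    return max(scores) if maximizing else min(scores)
```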

In the early days of computing, when processing power was scarce, Type-B programs were favored. A shift occurred in the 1970s, however, when the developers of the successful “Chess” program series transitioned to a Type-A approach, prioritizing raw computing power over strategic sophistication [WikiChess+]. This shift, while producing stronger programs, left some researchers disappointed: Type-B programs had offered a potential path towards a deeper understanding of the game itself.

Several factors contributed to the dominance of Type-A programs. Firstly, their effectiveness was directly tied to processing speed, which was rapidly increasing. Building a stronger Type-A program simply meant giving it more computational muscle. In contrast, Type-B programs required ongoing development of new rules and strategies, a process not readily accelerated by faster hardware. Additionally, developers of Type-B programs faced the challenge of unpredictable behavior arising from complex rule sets.

Today, even though intelligent Type-B programs still exist, Type-A brute-force approaches reign supreme. The sheer computing power readily available makes it easier to create exceptionally strong programs by simply throwing processing power at the problem. Grandmaster-level Type-B programs remain elusive, requiring further research on how to effectively translate human chess knowledge into a form usable by machines.

A Landmark Victory: Deep Blue and the Ascendancy of AI

Perhaps the most iconic Type-A program is IBM’s Deep Blue. In 1997, Deep Blue faced then-world chess champion Garry Kasparov and won the six-game match 3.5 to 2.5, a historic victory. The margin was narrow, but the result served as a powerful symbol of AI’s growing capabilities.

However, the nature of Deep Blue’s victory sparked debate. Deep Blue’s strength stemmed from its ability to evaluate an astounding 200 million positions per second, searching to an average depth of 8-12 plies (and up to 40 plies in some forcing lines) [CambleHsuHoane02]. In contrast, human players typically examine only around 50 positions, at varying depths guided by intuition and experience. A Type-B program exhibiting human-like strategic thinking would have presented a more compelling case for AI’s advancement.
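
A rough back-of-envelope calculation shows why this speed translates into those depths. The sketch below assumes a branching factor of about 40 and the textbook best case for alpha-beta pruning; neither figure is taken from Deep Blue itself:

```python
# Why 200 million positions/second yields searches of roughly 8-12 plies.
# The branching factor of 40 and the b**(d/2) alpha-beta best case are
# textbook approximations, not Deep Blue measurements.
SPEED = 200_000_000          # positions evaluated per second
BRANCHING = 40               # typical number of legal moves in chess

for depth in (6, 8, 10, 12):
    minimax_nodes = BRANCHING ** depth           # exhaustive min-max
    alphabeta_nodes = BRANCHING ** (depth / 2)   # best-case pruning
    print(f"depth {depth:2d}: plain {minimax_nodes / SPEED:>14.1f}s,"
          f" pruned {alphabeta_nodes / SPEED:>10.4f}s")
```

Exhaustive search is hopeless beyond a handful of plies, but with pruning a 12-ply search costs on the order of twenty seconds, consistent with tournament time controls.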

This reliance on brute force rather than strategic understanding also raised questions about the true capabilities of AI. Despite this, Deep Blue’s win had a significant financial impact on IBM: estimates put the free advertising value of the match at a staggering $500 million for IBM supercomputers, and it helped drive IBM’s stock price to an all-time high [ScaefferPlatt97].

Beyond Chess: The Next Frontier for Game-Playing AI

Chess research continues, with a focus on creating even more powerful machines with less specialized hardware. However, the quest for understanding chess intelligence, embodied by Type-B programs, has largely been sidelined. With chess seemingly “solved” by AI, researchers are now setting their sights on new challenges: games where human dominance persists. The ancient Asian strategy game Go has emerged as a frontrunner in this pursuit.

Go presents a unique challenge for AI because of its vast branching factor (the number of possible moves at any given point). Where chess has a branching factor of around 40, Go’s is a staggering 200 [Muller00]. Go also lacks a clear victory condition like checkmate: a game ends when both players agree it is over, and scoring the final position requires extensive analysis of complex board configurations. Even fundamental concepts like identifying “alive” or “dead” groups of stones remain algorithmic hurdles.
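
Even the simplest building block of such analysis, counting a group’s liberties with a flood fill, hints at the work involved. The sketch below is a hypothetical minimal example in Python; genuine life-and-death analysis is far harder:

```python
# Count the liberties (adjacent empty points) of the group containing
# the stone at (x, y). A group with zero liberties is captured; whether
# a group is ultimately "alive" or "dead" is a much deeper question.
def liberties(board, x, y, size=19):
    """board maps (x, y) -> 'b' or 'w'; absent keys are empty points."""
    colour = board[(x, y)]
    group, libs, frontier = {(x, y)}, set(), [(x, y)]
    while frontier:                       # flood fill over the group
        cx, cy = frontier.pop()
        for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                       (cx, cy + 1), (cx, cy - 1)):
            if not (0 <= nx < size and 0 <= ny < size):
                continue                  # off the board
            if (nx, ny) not in board:
                libs.add((nx, ny))        # empty neighbour: a liberty
            elif board[(nx, ny)] == colour and (nx, ny) not in group:
                group.add((nx, ny))       # friendly stone: same group
                frontier.append((nx, ny))
    return len(libs)

# A lone stone in the corner has two liberties:
assert liberties({(0, 0): 'b'}, 0, 0) == 2
```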

Despite ongoing research and a million-dollar prize offered for defeating a strong amateur Go player (the Ing prize), the best AI programs still struggle against even modest human competition. However, advancements are being made, particularly in the development of deep learning algorithms that can analyze vast libraries of human Go games. These algorithms are beginning to learn complex patterns and strategies, and some researchers believe they hold the key to achieving human-level or even superhuman Go playing ability in the not-too-distant future.
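
As a purely illustrative sketch of what such a learned move predictor might look like (not a description of any particular published system), one could train a small convolutional network to predict an expert’s next move from a position. Every architectural choice below, written here in PyTorch, is an assumption:

```python
import torch
import torch.nn as nn

class TinyGoPolicy(nn.Module):
    """Toy move-prediction ("policy") network: board in, move logits out."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            # two input planes: own stones and opponent stones
            nn.Conv2d(2, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),   # one logit per point
        )

    def forward(self, boards):               # boards: (batch, 2, 19, 19)
        return self.net(boards).flatten(1)   # (batch, 361) move logits

# Skeleton of the supervised step: fit (position, expert move) pairs
# mined from libraries of human game records.
model = TinyGoPolicy()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(boards, expert_moves):     # expert_moves: indices 0..360
    optimiser.zero_grad()
    loss = loss_fn(model(boards), expert_moves)
    loss.backward()
    optimiser.step()
    return loss.item()
```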

The Go Challenge: A New Frontier for AI

Progress in Go AI has come primarily from Type-B programs. One promising approach uses mathematical morphology, a branch of image processing, to help computers interpret board positions in a way that resembles human visual processing [Bouzy03]. Unfortunately for that line of work, brute-force approaches using massive parallelization are also showing some success, marginally improving Go program strength [DoshayMcDowel05].
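
The sketch below gives a flavour of the morphological idea, loosely following the dilation-and-erosion scheme Bouzy adapted from Zobrist’s influence model: stones seed influence values, dilation spreads influence outward, and erosion then strips away thin, contested influence. The operators and constants here are simplified approximations of those in [Bouzy03], not a faithful reimplementation:

```python
import numpy as np

def neighbour_views(b):
    """Four edge-padded neighbour views of a board array (off-board = 0)."""
    p = np.pad(b, 1)
    return p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]

def dilate(b):
    """Spread influence to points not contested by the opposite colour."""
    ns = neighbour_views(b)
    pos = sum((n > 0).astype(int) for n in ns)   # black-leaning neighbours
    neg = sum((n < 0).astype(int) for n in ns)   # white-leaning neighbours
    out = np.where((b >= 0) & (neg == 0), b + pos, b)
    out = np.where((b <= 0) & (pos == 0), out - neg, out)
    return out

def erode(b):
    """Shrink influence by one per empty or hostile neighbour."""
    ns = neighbour_views(b)
    hostile_to_black = sum((n <= 0).astype(int) for n in ns)
    hostile_to_white = sum((n >= 0).astype(int) for n in ns)
    out = np.where(b > 0, np.maximum(b - hostile_to_black, 0), b)
    out = np.where(b < 0, np.minimum(out + hostile_to_white, 0), out)
    return out

def influence_map(board, d=4, e=13):
    """Territory estimate in the spirit of Bouzy's E^e(D^d(board)).
    board: 19x19 ints with +1 for black stones, -1 for white, 0 empty."""
    x = board.astype(int) * 64               # strong seed value on stones
    for _ in range(d):
        x = dilate(x)
    for _ in range(e):
        x = erode(x)
    return x                                 # >0 leans Black, <0 leans White
```

Thresholding the resulting map yields a rough, human-plausible picture of each side’s territory and spheres of influence.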

This return of brute-force methods in Go highlights a critical debate in AI research: is raw computing power enough to achieve true intelligence, or do we need fundamentally new approaches that mimic human understanding?

The Limits of Brute Force and the Quest for True AI

The success of Deep Blue and the current state of Go AI demonstrate the limitations of brute-force approaches. While these programs can achieve superhuman performance, they lack the ability to truly understand the games they play. They cannot adapt their strategies based on novel situations or learn from their experiences in a meaningful way. This raises questions about whether such programs can be considered truly intelligent.

This dilemma echoes John Searle’s “Chinese Room Argument.” Imagine a person locked in a room who receives questions written in Chinese and, by mechanically following a rulebook for manipulating symbols, produces fluent replies. The person doesn’t understand Chinese, despite successfully simulating conversation. Similarly, some argue that chess and Go programs, despite their impressive play, lack genuine understanding of the games they play.

The pursuit of true AI in game playing requires a shift towards approaches that capture the essence of human strategic thinking. This might involve developing programs that can learn from experience, adapt to new situations, and even exhibit creativity in their decision-making.

The Road Ahead: Towards a Broader Understanding of Intelligence

The journey of AI in chess, from early strategic programs to the dominance of brute force, offers valuable lessons. While raw computing power has achieved remarkable results, it is not a substitute for true understanding. As researchers tackle games like Go, the focus should be on developing AI that not only plays well but also exhibits genuine strategic thinking and the ability to learn and adapt. This pursuit holds the potential to not only revolutionize game playing but also contribute to a broader understanding of intelligence, both artificial and human.