Befuddling AI Go Systems: MIT, UC Berkeley & FAR AI’s Adversarial Policy Achieves a >99% Win Rate Against KataGo

Reinforcement learning based on self-play has enabled artificial intelligence agents to surpass expert-level human performance in the popular video game Dota and in board games such as chess and Go. Despite these strong results, recent research suggests that self-play may not be as robust as previously thought. A question naturally arises: are such self-play agents vulnerable to adversarial attacks?

In the new paper Adversarial Policies Beat Professional-Level Go AIs, a research team from MIT, UC Berkeley and FAR AI employs a novel adversarial policy to attack the state-of-the-art AI Go system KataGo. The team believes theirs is the first successful end-to-end attack against an AI Go system playing at the level of a human professional.

The team summarizes its main contribution as follows:

  1. We propose a new attack method, hybridizing the attack of Gleave et al. (2020) with AlphaZero-style training (Silver et al., 2018).
  2. We demonstrate the existence of an adversarial policy against the state-of-the-art Go AI system, KataGo.
  3. We find that the adversary employs a simple strategy that tricks the victim into predicting victory, causing it to pass prematurely.

This work focuses on exploiting professional-level AI Go policies in a discrete action space. The team attacks KataGo, the most powerful publicly available AI Go system, albeit not at its full strength. Unlike KataGo, which is trained on self-play games, the team trained their adversarial agent on games played against a fixed victim agent, using only data from the adversary's own move turns. This "victim-play" training approach encourages the model to exploit the victim, not imitate it.
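To make the victim-play idea concrete, here is a minimal sketch of what such a data-collection loop could look like. Everything in it (the environment, agents and their methods) is a hypothetical interface for illustration, not the paper's actual codebase:

```python
def collect_victim_play_game(env, adversary, frozen_victim, adversary_plays_black=True):
    """Play one game against a frozen victim, recording training data
    only on the adversary's own turns (exploit, don't imitate)."""
    trajectory = []               # (state, move) pairs from adversary turns only
    state = env.reset()
    adversary_to_move = adversary_plays_black  # Black moves first in Go

    while not env.is_terminal(state):
        if adversary_to_move:
            move = adversary.select_move(state)
            trajectory.append((state, move))   # keep: the adversary's decision
        else:
            # The victim is frozen: its moves shape the game but are
            # never added to the adversary's training set.
            move = frozen_victim.select_move(state)
        state = env.step(state, move)
        adversary_to_move = not adversary_to_move

    outcome = env.result(state)   # e.g. +1 adversary win, -1 loss
    # Each stored position is labeled with the final game outcome,
    # AlphaZero-style, and fed to the adversary's training loop.
    return [(s, m, outcome) for (s, m) in trajectory]
```

Because the victim's turns are excluded from the training data, the adversary's network is only ever asked to predict its own exploitative moves, never to approximate the victim's policy.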

The team also introduces two distinct families of Adversarial MCTS (A-MCTS), Sample (A-MCTS-S) and Recursive (A-MCTS-R), to avoid modeling the opponent's moves with the agent's own policy network. Instead of random initialization, the team uses a curriculum that trains the agent against progressively stronger versions of the victim.
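A rough sketch of the core A-MCTS-S idea follows; the node and network interfaces are invented for illustration and do not reflect the released code. The key point is that at tree nodes where the victim is to move, the adversary samples the victim's reply from the victim's own raw policy network rather than from its own:

```python
import random

def a_mcts_s_step(node, adversary_net, victim_policy_net):
    """One expansion step of Adversarial MCTS-Sample (A-MCTS-S).

    All objects here are illustrative stand-ins for the real search code.
    """
    if node.victim_to_move:
        # Victim's turn: sample the reply from the victim's raw
        # (no-search) policy network, so the adversary's own network
        # never has to model the victim's behavior.
        # (A-MCTS-R would instead re-run the victim's full search at
        # this node: more faithful to the victim, but far costlier.)
        moves, probs = victim_policy_net(node.state)
        move = random.choices(moves, weights=probs, k=1)[0]
        return node.child(move)

    # Adversary's turn: standard AlphaZero-style expansion using the
    # adversary's own policy/value network.
    priors, value = adversary_net(node.state)
    return node.expand(priors, value)
```

The curriculum operates on top of this loop: once the adversary reliably beats the current victim checkpoint, training swaps in a stronger one.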


In their empirical studies, the team used their adversarial policy to attack KataGo without search (at the strength of a top-100 European player) and KataGo with 64 visits ("near-superhuman level"). The proposed policy achieved a win rate of more than 99 percent against KataGo without search and a win rate of more than 50 percent against KataGo with 64 visits.

Although this work suggests that self-play learning is not as robust as expected and that adversarial policies can be used to beat top Go AI systems, the results have been questioned by the machine learning and Go communities. Reddit discussions involving KataGo authors and developers focused on the particularities of the Tromp-Taylor scoring system used in the experiments: while the proposed agent achieves its wins by "tricking KataGo into ending the game prematurely," it was argued that this tactic would lead to devastating losses under more commonly used Go rulesets.


The open-source implementation is on GitHub, and example games are available on the project webpage. The paper Adversarial Policies Beat Professional-Level Go AIs is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.

