A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.
Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today’s widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.
The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine.
“It was surprisingly easy for us to exploit this system,” said Adam Gleave, chief executive of FAR AI, the Californian research firm that designed the program. The software played more than 1 million games against KataGo, one of the top Go-playing systems, to find a “blind spot” that a human player could take advantage of, he added.
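The article describes FAR AI's approach only at a high level, so the sketch below is a toy illustration of the general idea rather than the firm's actual method: an attacker tries many candidate lines of play against a fixed "victim" engine, measures win rates, and keeps whatever reliably wins. The victim, its planted blind spot, and all function names here are invented for the example.

```python
"""Toy illustration only: the general shape of an adversarial search
against a frozen game-playing engine. This is not FAR AI's code or
method; the 'victim' below is a stub with a deliberately planted blind
spot, standing in for a real engine such as KataGo."""

import random

GAME_LENGTH = 8                        # each game is 8 binary choices by the attacker
BLIND_SPOT = (1, 1, 0, 1, 0, 0, 1, 1)  # rare line of play the stub victim mishandles

def play_game(strategy):
    """Return True if the attacker wins one game with the given strategy.

    The stub victim defends every line of play except the planted one,
    which it loses 90% of the time; everything else it wins 95% of the time.
    """
    if strategy == BLIND_SPOT:
        return random.random() < 0.90
    return random.random() < 0.05

def search_for_blind_spot(num_candidates=2000, games_per_candidate=200):
    """Random search: try many candidate strategies against the frozen
    victim and keep whichever one wins most reliably."""
    best_strategy, best_rate = None, -1.0
    for _ in range(num_candidates):
        candidate = tuple(random.randint(0, 1) for _ in range(GAME_LENGTH))
        wins = sum(play_game(candidate) for _ in range(games_per_candidate))
        rate = wins / games_per_candidate
        if rate > best_rate:
            best_strategy, best_rate = candidate, rate
    return best_strategy, best_rate

if __name__ == "__main__":
    strategy, rate = search_for_blind_spot()
    # With 2,000 random candidates over only 2^8 = 256 possible lines of
    # play, the search almost always stumbles on the planted blind spot.
    print(f"best strategy found: {strategy}  win rate: {rate:.0%}")
```

A real attack of this kind would play full games of Go with a trained adversarial policy rather than random search over a toy game, but the principle is the same: probe a frozen opponent at scale until a reliably winning pattern emerges.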
The winning strategy revealed by the software “is not completely trivial, but it’s not super-difficult” for a human to learn and could be used by an intermediate-level player to beat the machines, said Pelrine. He also used the method to win against another top Go system, Leela Zero.
The decisive victory, albeit with the help of tactics suggested by a computer, comes seven years after AI appeared to have taken an unassailable lead over humans at what is often regarded as the most complex of all board games.
AlphaGo, a system devised by Google-owned research company DeepMind, defeated the world Go champion Lee Sedol by four games to one in 2016. Sedol attributed his retirement from Go three years later to the rise of AI, saying that it was “an entity that cannot be defeated.” AlphaGo is not publicly available, but the systems Pelrine prevailed against are considered on a par.
In a game of Go, two players alternately place black and white stones on a board marked out with a 19×19 grid, seeking to encircle their opponent’s stones and enclose the largest amount of space. The huge number of combinations means it is impossible for a computer to assess all potential future moves.
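The scale of that combinatorial explosion can be checked with a few lines of arithmetic. The figures below are standard published estimates rather than numbers from the article, shown here purely for illustration.

```python
# Back-of-the-envelope illustration of why exhaustive search of Go is
# out of reach (standard estimates, not figures from the article).

INTERSECTIONS = 19 * 19                 # 361 points on the board
raw_configs = 3 ** INTERSECTIONS        # each point: empty, black or white
LEGAL_POSITIONS = 2.08e170              # published count of legal 19x19 positions
ATOMS_IN_UNIVERSE = 1e80                # common order-of-magnitude estimate

print(f"raw 3^361 configurations  ~ 10^{len(str(raw_configs)) - 1}")
print(f"legal positions           ~ {LEGAL_POSITIONS:.2e}")
print(f"legal positions per atom  ~ {LEGAL_POSITIONS / ATOMS_IN_UNIVERSE:.0e}")
```

Even the legal positions alone outnumber the atoms in the observable universe by some ninety orders of magnitude, which is why Go engines rely on learned pattern recognition rather than brute-force lookahead.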
The tactics used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability, even when the encirclement was nearly complete, Pelrine said.
“As a human it would be quite easy to spot,” he added.
The discovery of a weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep-learning systems that underpin today’s most advanced AI, said Stuart Russell, a computer science professor at the University of California, Berkeley.
The systems can “understand” only specific situations they have been exposed to in the past and are unable to generalize in a way that humans find easy, he added.
“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” Russell said.
The precise cause of the Go-playing systems’ failure is a matter of conjecture, according to the researchers. One likely reason is that the tactic exploited by Pelrine is rarely used, meaning the AI systems had not been trained on enough similar games to realize they were vulnerable, said Gleave.
It is common to find flaws in AI systems when they are exposed to the kind of “adversarial attack” used against the Go-playing computers, he added. Despite that, “we’re seeing very big [AI] systems being deployed at scale with little verification.”