SciTech

Google’s DeepMind AI learns to play Go on its own

A new artificial intelligence (AI) has reached a landmark in mastering the strategy game of Go, shattering records while using a fraction of its predecessor’s time and processing power.

Go is an ancient Chinese board game several thousand years old. Like chess, it is a two-player strategy game, but in Go players capture opponents’ pieces by surrounding them and win by controlling more of the board.

AlphaGo, a computer program that plays the board game Go, made history when it beat two of the world’s greatest Go players: South Korea’s Lee Se-dol in 2016 and China’s Ke Jie in 2017. AlphaGo, developed by Google DeepMind in London, has been in the limelight for months, partly because its victories over human players came much earlier than experts expected. Because Go is an extremely complex mind game with more legal board positions than there are atoms in the observable universe, AlphaGo’s wins marked a huge milestone in the development of AI.

But just a few days ago, a new program called AlphaGo Zero vastly surpassed AlphaGo’s feats. AlphaGo Zero “began with a blank Go board and no data apart from the rules, and then played itself,” according to BBC News. Within 72 hours of being given the rules of Go, AlphaGo Zero beat AlphaGo 100 games to zero. While AlphaGo needed months of training before it could beat professional Go players, AlphaGo Zero needed just three days, while using less processing power than its predecessor.
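
AlphaGo Zero’s real system pairs Monte Carlo tree search with a single deep neural network trained by reinforcement learning from self-play. As a rough, hypothetical illustration of the self-play idea alone, the sketch below has a simple tabular agent learn tic-tac-toe by playing against itself, starting from nothing but the rules; everything in it (the game, the value table, the constants) is an assumption made for illustration, not DeepMind’s code.

```python
# Hypothetical sketch: tabula-rasa self-play on tic-tac-toe.
# AlphaGo Zero replaces this value table with a deep neural network
# and greedy move selection with Monte Carlo tree search.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def after(board, move, player):
    return board[:move] + player + board[move + 1:]

values = defaultdict(float)   # value of a position for the player who just moved
EPSILON, ALPHA = 0.1, 0.5     # exploration rate, learning rate

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)   # occasionally explore a random move
    # exploit: pick the move leading to the position we currently value most
    return max(moves, key=lambda m: values[after(board, m, player)])

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        board = after(board, choose(board, player), player)
        history.append((board, player))
        w = winner(board)
        if w or "." not in board:
            return history, w
        player = "O" if player == "X" else "X"

for _ in range(20000):            # the agent is its own opponent
    history, w = play_one_game()
    for state, mover in history:  # +1 win, -1 loss, 0 draw, from the mover's side
        reward = 0.0 if w is None else (1.0 if mover == w else -1.0)
        values[state] += ALPHA * (reward - values[state])
```

The same loop, scaled up with a neural network in place of the table and tree search in place of greedy move selection, is the general shape of AlphaGo Zero’s training.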

Board games like Go are among the most effective ways to test the potential of AI software. Because these games require responding to another player’s moves, winning takes more than the fixed, rule-following procedures of ordinary software; the program must plan ahead and adapt to an opponent, mimicking processes that take place in the human brain.
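
One classical way to make software respond to an opponent, long predating DeepMind’s neural networks, is game-tree search: evaluate a move by assuming the opponent answers with their own best move, recursively. The sketch below is a minimal, hypothetical minimax search for tic-tac-toe, included only to illustrate the idea; Go’s astronomical number of positions is exactly what makes this brute-force approach infeasible and learning-based methods necessary.

```python
# Hypothetical sketch: minimax game-tree search for tic-tac-toe.
# A move is scored by assuming the opponent replies optimally, recursively.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0, None                       # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        # the opponent's best outcome is the negation of ours (zero-sum game)
        score = -minimax(child, opponent)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

print(minimax("." * 9, "X"))  # perfect play from an empty board is a draw: (0, 0)
```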

In addition to AlphaGo and AlphaGo Zero, Google DeepMind has developed another AI: an algorithm known as the deep Q-network (DQN). The deep Q-network was presented at the 2016 WIRED event, a large-scale conference where business professionals present industry developments. DeepMind co-founder Mustafa Suleyman showed that the deep Q-network had learned to play 49 Atari 2600 video games given nothing but the raw screen pixels and the game score. The AI agent mastered a wide variety of games, from martial arts and boxing games to 3D car racing.
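
The “Q” in deep Q-network refers to Q-learning, a standard reinforcement-learning rule that estimates the long-run value Q(s, a) of taking action a in state s; DQN’s innovation was approximating that table with a deep neural network fed raw Atari pixels. The sketch below is a hypothetical tabular version on a made-up six-state “chain” environment, just to show the core update rule.

```python
# Hypothetical sketch: tabular Q-learning on a toy chain environment.
# DQN replaces the table Q[(state, action)] with a deep neural network
# and learns from raw Atari pixels instead of a toy state index.
import random
from collections import defaultdict

N_STATES, ACTIONS = 6, (-1, +1)      # walk left/right along a short chain
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1

Q = defaultdict(float)               # Q[(state, action)] -> estimated long-run reward

def step(state, action):
    """Move along the chain; reward +1 only for reaching the right end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly act on current estimates, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # the Q-learning update: nudge Q toward reward + discounted best next value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
```

After training, repeatedly taking the highest-valued action walks the agent straight down the chain to the reward.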

These are all significant advances in the field of artificial intelligence and computer programming, pointing to a future in which AI influences far more than board games. According to Demis Hassabis, a DeepMind co-founder and its chief executive, researchers’ knowledge of complex algorithms has impacts far beyond beating world-class game professionals. “This is just games, but it could be stock market data,” Hassabis said. “DeepMind has been combining two promising areas of research — a deep neural network and a reinforcement-learning algorithm — in a really fundamental way. We’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain.”

From DeepMind’s earlier efforts in artificial intelligence to the recent achievements of AlphaGo Zero, AI research is yielding results with a clear trend: computer software is beginning to outsmart humans. Earlier versions of Go software based their moves on human strategies; AlphaGo Zero developed techniques that professional players had never seen before. In other words, the constraints of human knowledge no longer restrict the potential of AI: it can create new knowledge from a completely blank slate, with minimal human input.

Though this is a huge achievement deserving worldwide attention and applause, we should now be asking ourselves: what does this mean for the future of humanity?