There’s no stopping Google’s DeepMind AI. Earlier this year, the artificial intelligence platform shut out two professional StarCraft players 5-0. Now a report from The Verge says DeepMind is better than 99.8 percent of all human players. That last 0.2 percent will be our last hope against the war tactics of the AI uprising. In the meantime, we bet Blizzard would love for DeepMind’s brain power to figure out a solution to its ongoing Hong Kong troubles.
Considered one of the most challenging real-time strategy (RTS) games, Blizzard’s StarCraft is also one of the longest-played esports of all time.
The game debuted at E3 in 1996, followed by the StarCraft II franchise: StarCraft II: Wings of Liberty (2010), StarCraft II: Heart of the Swarm (2013), and StarCraft II: Legacy of the Void (2015).
“Although there have been significant successes in video games,” the DeepMind team wrote in a blog post, “until now, AI techniques have struggled to cope with the complexity of StarCraft.”
Enter AlphaStar, a deep neural network trained directly from raw game data via supervised learning and reinforcement learning.
Trained on anonymized human games released by Blizzard, AlphaStar learned, through imitation, the basic micro and macro strategies used by players.
Additional artificial adversaries were then added to the DeepMind league, each learning from games against other competitors and developing counter-strategies.
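The league setup described above can be illustrated with a toy self-play loop. This is a hypothetical sketch, not DeepMind's implementation: the `Agent` class, the single `skill` number standing in for a trained policy, and the update rule where losers adapt harder (a crude stand-in for learning counter-strategies) are all illustrative assumptions.

```python
import random

class Agent:
    """Toy league competitor; `skill` stands in for a trained policy's strength."""
    def __init__(self, name, skill=0.0):
        self.name = name
        self.skill = skill

    def beats(self, opponent):
        # Toy match model: the higher-skill agent wins more often.
        edge = self.skill - opponent.skill
        return random.random() < 0.5 + max(-0.45, min(0.45, edge))

def run_league(agents, rounds=1000, lr=0.01):
    """Repeatedly pit random pairs against each other. The winner
    consolidates its strategy slightly; the loser adapts more, mimicking
    the development of counter-strategies within the league."""
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        winner, loser = (a, b) if a.beats(b) else (b, a)
        winner.skill += lr       # winner consolidates
        loser.skill += 2 * lr    # loser adapts harder (counter-strategy)
    return sorted(agents, key=lambda ag: ag.skill, reverse=True)
```

Running `run_league([Agent(f"agent{i}") for i in range(4)])` returns the agents ranked by strength; in the real system, the analogue of that ranking fed into choosing the final agent.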
“We believe that this advanced model will help with many other challenges in machine learning research that involve long-term sequence modelling and large output spaces such as translation, language modelling, and visual representations,” DeepMind said.
The AlphaStar league ran for 14 days, with each agent experiencing up to 200 years of real-time StarCraft play. The strategies discovered over those two weeks were then combined to produce the final, strongest agent.
In a series of test matches held in December, AlphaStar beat Team Liquid’s Grzegorz “MaNa” Komincz—one of the world’s strongest professional StarCraft players—and Dario “TLO” Wünsch.
“I was surprised by how strong the agent was,” TLO, a top professional Zerg player and GrandMaster level Protoss player, said in a statement. “AlphaStar takes well-known strategies and turns them on their head. The agent demonstrated strategies I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet.”
Despite moving at a significantly slower pace than the pros, averaging around 280 actions per minute (APM), the AI's maneuvers may be more precise.
“I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected,” MaNa said.
“I’ve realized how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me,” he added. “We’re all excited to see what comes next.”
Last month, more than a year after DeepMind introduced its first self-taught, world champion algorithm, CEO Demis Hassabis & Co. revealed the full evaluation of AlphaZero, published in the journal Science.
More on Geek.com:
- This Wearable AI Translator Lets You Talk in Different Languages
- Pencil-Sized AI Device to Protect Wildlife From Poachers
- Machine Learning May Predict How Well You’ll Age
from Geek.com https://ift.tt/2N4TjMI