Last week, OpenAI bots played Dota 2; this week, it was Google DeepMind's AI bots playing Quake III. Researchers at DeepMind trained AI agents to play Capture the Flag matches, defeating human players.
AI playing video games is not surprising, but training agents to navigate a complex 3D environment was the real challenge. With this breakthrough, DeepMind's researchers have set a new standard in AI training. The method they used was reinforcement learning: training by trial and error, at a huge scale.
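To give a flavor of what "training by trial and error" means, here is a minimal tabular Q-learning sketch on a toy one-dimensional task. This is a generic illustration of reinforcement learning, not DeepMind's actual method (their agents used deep neural networks and population-based training in a full 3D environment); all names and parameters here are illustrative.

```python
import random

random.seed(0)

# Toy "corridor" task: cells 0..4, the flag sits at cell 4.
# The agent is never told how to reach it; it learns from reward alone.
N_STATES = 5
ACTIONS = (-1, +1)            # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1 only when the flag cell is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one (the "trial and error")
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update rule
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should move right toward the flag
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent starts with no knowledge of the task and, purely from the reward signal, converges on the winning behavior; DeepMind's agents learned in the same spirit, but with far richer observations, actions, and models.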
The AI agents are not given instructions on how to play; instead, they are left to discover the strategies needed to win. DeepMind went the extra mile, training 30 agents, each with a different play style, across roughly half a million games, each lasting five minutes. DeepMind's bots learned not only the basic rules of Capture the Flag but also strategies such as guarding their own flag, camping at the opponent's base, and teaming up to attack the enemy, among other tricks. The bots were also trained to handle different map layouts.
This doesn't prove that DeepMind's bots are more powerful than OpenAI's, as Dota 2 is a far more complex game than Quake III. DeepMind went on to test the agents against human-only and mixed human-bot teams, and observed that bot-only teams outperformed the other lineups, with a 74% win probability. However, the more bots there were on a team, the worse they did.
© 2020 CIO Bulletin. All rights reserved.