It was only a matter of time before someone taught an artificial intelligence to play soccer. DeepMind is on it


Whether you love football or are indifferent to it, whether you're a passionate fan or couldn't care less about how the league is going, you have to admit one thing: it is not easy to play well. It isn't for us humans, and it is even harder for machines. Moving around the pitch with a ball at your feet requires coordination and balance, but also the skills to compete as a team.

It may seem like a no-brainer, something within reach of anyone who trains hard enough. For an artificial intelligence (AI), not so much. If we manage to get one to master the fundamentals needed to move freely around a pitch, we may in fact have paved the way for future robots capable of moving more naturally, in a way closer to how humans move.

At DeepMind, a subsidiary of Alphabet (Google's parent company), they know this and have been training an AI that may one day become a digital Luka Modrić. How? By following an approach not very different from the one any soccer school would use to teach a child to play, only starting from scratch and at a much faster pace.

At Alphabet they began by giving the AI control over digital figures with a human shape and joints that move somewhat like ours. Step by step, quite literally, the scientists taught them to walk, dribble, shoot and, finally, play against other figures, starting with basic matches between teams of just two players.

From school to games

During the first phase the figures acquired basic locomotion and ball-handling skills. The process took up the first 24 hours of training, but it amounted to roughly a year and a half of simulated matches. In a second stage of the experiment, the cooperation and teamwork behaviors typical of a real match began to emerge.

New Scientist points out that teamwork skills, key to the AI being able to anticipate where a pass would be received, took a little longer: around two to three decades of simulated games, equivalent to roughly two to three weeks in the real world.
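Taking the figures quoted above at face value, we can estimate the rough speed-up between simulated time and wall-clock time. A minimal back-of-the-envelope sketch in Python; the numbers are the article's approximations, so the result is only an order of magnitude:

```python
# Rough arithmetic based on the figures quoted in the article.
HOURS_PER_YEAR = 365 * 24

# Phase 1: ~1.5 simulated years of play in ~24 real hours.
phase1_speedup = (1.5 * HOURS_PER_YEAR) / 24

# Phase 2: ~2-3 simulated decades in ~2-3 real weeks (midpoints used).
phase2_speedup = (25 * HOURS_PER_YEAR) / (2.5 * 7 * 24)

print(f"Phase 1 speed-up: ~{phase1_speedup:.0f}x real time")
print(f"Phase 2 speed-up: ~{phase2_speedup:.0f}x real time")
# Both come out at roughly 500x: the simulation runs a few hundred
# times faster than the wall clock.
```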

“Our agents acquired skills such as agile locomotion, passing and the division of labor, as evidenced by a number of statistics, including metrics used in real-world sports analysis,” explains DeepMind. Among the abilities the AI displays, one that stands out, for example, is its capacity to anticipate the behavior of its teammates.

“At the beginning of training, all the agents would just run towards the ball. After a few days we saw that when they realized one of their teammates had control of the ball, they would turn around and run up the field, anticipating that he would try to score or maybe pass the ball,” Guy Lever, part of the team behind the study published in Science Robotics, tells Wired.

About five years ago, researchers had already tried to teach articulated figures to tackle an obstacle course. The experiment yielded lessons on the advantages of trial and error and reinforcement learning (RL), but the resulting movement patterns were “unnatural,” with a certain comical edge. The problem, DeepMind admits, is that they “would not be practical” for robotics.

The challenge is not so much having an AI capable of moving human-like figures as achieving “well-regulated behavior,” essential for walking on rough terrain or even handling fragile objects. “Jittery movements can damage the robot itself or its environment, or at least drain its battery,” notes the Alphabet subsidiary. Hence the effort to build robots with “safe and efficient” behavior that respond to the commands they receive.

And in that effort, the game of Maradona and Pelé may turn out to be an unexpected ally. “Soccer has long been a challenge for embodied intelligence research, as it requires individual skills and coordinated team play,” DeepMind explains.


To achieve this, the scientists used neural probabilistic motor primitives (NPMP), an approach that bases learning on movement patterns captured from humans and animals and helps translate high-level control commands into motion. “We had already shown that coordinated behavior can emerge in teams that compete with each other. NPMP allowed us to observe a similar effect in a scenario requiring significantly more advanced motor control,” the company adds.

“It basically biases its motor control towards realistic human behavior, realistic human movements. And that is learned from motion capture, in this case, human actors playing soccer,” the team tells Wired. As part of the same process, the AI was also rewarded for not deviating from the strategies set for each scenario.
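To illustrate the general idea (this is not DeepMind's actual code), here is a minimal sketch of an NPMP-style split: a low-level motor decoder, pre-trained on motion-capture data and then frozen, turns a compact latent “motor intention” into joint targets, while the high-level policy only has to choose that latent; a small penalty keeps the latent close to the human-motion prior. Every class, function and dimension below is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

class MotorDecoder:
    """Hypothetical stand-in for the low-level NPMP decoder.
    In the real system this is a network pre-trained to reproduce
    motion-capture clips; here it is just a fixed random linear map."""
    def __init__(self, latent_dim: int, num_joints: int):
        self.W = rng.normal(size=(num_joints, latent_dim))

    def __call__(self, latent: np.ndarray) -> np.ndarray:
        # Map the compact "motor intention" to joint targets.
        return np.tanh(self.W @ latent)

class HighLevelPolicy:
    """Hypothetical task-level policy: it never outputs joint commands
    directly, only a latent that the frozen decoder interprets."""
    def __init__(self, obs_dim: int, latent_dim: int):
        self.W = rng.normal(scale=0.1, size=(latent_dim, obs_dim))

    def act(self, observation: np.ndarray) -> np.ndarray:
        return self.W @ observation

def shaped_reward(task_reward: float, latent: np.ndarray, beta: float = 0.01) -> float:
    """Task reward minus a regulariser that discourages latents far from
    the human-motion prior (assumed here to be a standard normal)."""
    return task_reward - beta * float(latent @ latent)

# Toy usage: one control step of an agent (dimensions are made up).
decoder = MotorDecoder(latent_dim=8, num_joints=20)   # frozen after pre-training
policy = HighLevelPolicy(obs_dim=30, latent_dim=8)    # trained with RL on soccer

obs = rng.normal(size=30)          # e.g. ball, teammate and goal positions
z = policy.act(obs)                # compact motor intention
joint_targets = decoder(z)         # human-like low-level movement
r = shaped_reward(task_reward=1.0, latent=z)
print(joint_targets.shape, round(r, 3))
```

The point of the split is that reinforcement learning only has to explore the small latent space rather than raw joint commands, which is what biases the resulting movements towards the human-like repertoire learned from motion capture.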

Does that mean Google wants to become the undisputed champion of RoboCup?

For now, it means the company is working so that tomorrow we may see AIs that move more efficiently. After all… who said football was just a game?

Images | DeepMind and Science Robotics

