Robots that Learn – the Future of Robotics
These days, robots surround us; they are involved in almost every sphere of life and make many tasks much easier. We created them to perform work for us, but according to filmmakers, a robot apocalypse is coming. Blade Runner, Ex Machina, I, Robot, and other films of the genre paint a pessimistic future for human civilization, and many people genuinely fear a robot uprising.
I have my doubts about a robot revolution, but the ability of robots to learn is a topic worth our attention.
Unlike the films, where robots are extremely advanced and live side by side with people, reality is quite different. We teach robots to perform specific tasks, and the process requires enormous amounts of time, long code, and extensive training. The Boston Dynamics robots can do backflips and open doors, but this is the result of long, hard work. Such actions seem simple to a human, but in the world of robots everything is far more complicated. When a robot attempts a task it hasn't been taught, it hits a dead end and gives up.
Some progress in this field has come from UC Berkeley, where research is making the learning process easier not only for the machine but also for the human. PR2, a humanoid-style robot, watches a person pick up an apple and drop it into a bowl; drawing on its previous experience, it can then repeat the action itself, even if it is seeing that particular fruit for the first time. The operation is simple, of course, but it gives engineers a reliable basis for making a machine's adaptation to human needs faster.
When a child sees her parents brush their teeth, she learns a specific sequence of actions. Later she can use this background knowledge to learn how to floss: she already knows about teeth, the gaps between them, and the idea of using a tool. Traditional robots cannot apply previous experience this way and would require two separate sets of programmed commands. Most machine-learning systems start from scratch and accumulate no knowledge, so researchers have to work from an almost clean sheet every time.
Chelsea Finn, a machine-learning researcher at UC Berkeley, takes a different approach. Her team first collects videos of humans performing various tasks, then gathers demonstrations of robots doing the same tasks via teleoperation. The system is trained so that, after watching a video of human activity, the robot learns to reproduce what it saw. The key insight is that there is no need to track the human and the objects in the scene with great precision; what matters is inferring what the human is doing and what the goal of the task is. The following experiment illustrates the point.
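The core idea of learning from demonstrations can be illustrated with a toy sketch. This is my own simplified illustration, not Finn's actual system: I assume each demonstration pairs a low-dimensional observation of the scene (e.g., object positions) with the action the demonstrator took, and I fit a linear policy by least squares as a stand-in for the neural network a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demo data: 100 demonstrations, each a 4-D observation
# (e.g., object coordinates) and the 2-D pushing action that was applied.
# The "true_policy" matrix is invented purely to generate synthetic data.
true_policy = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.5, -0.5],
                        [0.0, 0.3]])
observations = rng.normal(size=(100, 4))
actions = observations @ true_policy + 0.01 * rng.normal(size=(100, 2))

# "Training" = fit a policy that maps observations to actions,
# imitating what the demonstrator did.
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At test time, a new arrangement of the same objects produces a new
# observation, and the learned policy still yields a sensible action.
new_obs = rng.normal(size=4)
predicted_action = new_obs @ policy
```

The point of the sketch is the workflow, not the model: demonstrations become (observation, action) pairs, a policy is fit to them, and the policy then generalizes to scenes it has never observed exactly.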
Through its camera, the robot observes a human pushing a container toward the robot's left arm without touching a box of tissues. When the robot is shown the same objects arranged differently, it can identify the right one and even push the container toward its left arm using its right arm. The robot thus becomes flexible and more human-like. This principle is crucial and points toward a new generation of robots that will live with us in the near future. Nobody wants to program their robot to use every object in the house. The UC Berkeley team believes that, thanks to their work, the average person will be able to instruct such a machine easily. Learning is the most practical route; it has far greater potential than joystick control.
Researchers at MIT are conducting similar investigations and are working to teach robots household chores such as making coffee. They produced a video in which a humanoid robot picks up a mug and operates a coffee machine. They also aim to broaden their scope and learn from YouTube videos of people performing everyday tasks.