UC Science Today

Robots a step closer to autonomous behavior



Scientists have developed a new deep learning technique that enables robots to learn motor tasks from scratch. Study leader Pieter Abbeel of the University of California, Berkeley, says the robot learns through a reward system and is given no prior knowledge of its surroundings or of how to complete the task, such as screwing a cap onto a bottle.
"If it's doing well, it gets a high reward. If it's doing poorly, it gets a low reward. And over time it figures out what behavior results in high rewards and what behavior results in low rewards. Let's say we want it to screw a cap onto a bottle. We would give it a high reward for bringing the cap very close to the top of the bottle, even higher reward when it's kind of placing it nicely onto the top of the bottle. And then, even higher if it's screwing the cap onto the bottle, and really low if it holds the cap very far away from the bottle."
Abbeel says it’s the first time a robot has learned a new task across the entire visual motor path, and the learning takes about three hours of trial and error.
"We want robots to learn from their own trial and error, all the way from pixels coming into the camera to motor torques, so the entire visual motor path."