New Research Helps Robots See in 3-D

12:04pm Jul 24, 2017
When fed 3-D models of household items in bird's-eye view (left), a new algorithm is able to guess what the objects are and what their overall 3-D shapes should be. This image shows the guess in the center, and the actual 3-D model on the right. Photo courtesy of Duke University.

Researchers at Duke University have come up with an algorithm that helps robots see the world a little more like a human does.

Robots perceive the world through sensors such as cameras, which give them basic depth and shape information. That's good enough for industrial applications, but it falls short in less structured environments, like houses and offices.

To do better, a robot has to draw on what it knows about objects it has seen before in order to tell similar items apart.
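The article doesn't spell out the team's method, but the basic idea of recognizing a partially observed object by comparing it to known shapes can be illustrated with a toy sketch. Everything here (the grids, `KNOWN_OBJECTS`, `classify_partial_scan`) is invented for illustration; the actual Duke algorithm learns richer 3-D shape priors from many models rather than doing a simple nearest-neighbor match.

```python
# Toy illustration, not the Duke team's algorithm: match a partial scan
# against a library of known object shapes and return the best guess
# plus the full shape that guess implies.

def voxel_distance(a, b):
    """Count mismatched cells between two flattened occupancy grids."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 3x3 top-down occupancy grids standing in for full 3-D models.
KNOWN_OBJECTS = {
    "mug":    (1, 1, 1,  1, 0, 1,  1, 1, 1),   # ring-like footprint
    "bowl":   (0, 1, 0,  1, 1, 1,  0, 1, 0),   # round footprint
    "laptop": (1, 1, 1,  1, 1, 1,  1, 1, 1),   # solid rectangle
}

def classify_partial_scan(scan):
    """Guess which known object an incomplete scan matches best."""
    best = min(KNOWN_OBJECTS,
               key=lambda name: voxel_distance(scan, KNOWN_OBJECTS[name]))
    # Return the label and the stored full shape as the "completed" guess.
    return best, KNOWN_OBJECTS[best]

# A scan of a mug whose bottom row the sensor never saw:
label, completed = classify_partial_scan((1, 1, 1,  1, 0, 1,  0, 0, 0))
```

Here the half-seen mug is still matched to the mug model, and the stored model serves as the guess for the object's overall shape, mirroring the guess-the-shape behavior described in the image caption above.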

The algorithm was developed by Ben Burchfiel's team at Duke. He says it has many practical applications.

“We really want, eventually, systems like this to be in people’s houses doing things. We always use the example of 'we want a robot that can make you tea,' or 'we want a robot that can do the dishes.' So we’re really focused on having robots understand the shape of objects around them.”

Results of the research show that robots know what they're looking at 75 percent of the time, 25 percent better than before Burchfiel's innovation. But he says there's still a ways to go before we can put a robot in a home and expect it to do well.
