
MIT Wants Your Household Robot to See Better

A new algorithm developed by MIT could enable household robots to better identify objects in cluttered environments.


Object recognition is the name of the game for household robots. If a robot can't identify the objects in your home, what good is it?

Researchers at the Massachusetts Institute of Technology (MIT) have developed a new vision system that is as accurate as, and 10 times faster than, current systems, making it far more practical for real-time use by household robots.

The team tested several scenarios in which 20 to 30 images showed household objects clustered together on a table. In several of the scenarios, the clusters included multiple instances of the same object, making the matching task more difficult.

In a paper appearing in a forthcoming issue of the International Journal of Robotics Research, the MIT researchers show that a system using an off-the-shelf algorithm to aggregate different perspectives can recognize four times as many objects as one that uses a single perspective, while reducing the number of misidentifications.

“If you just took the output of looking at it from one viewpoint, there’s a lot of stuff that might be missing, or it might be the angle of illumination or something blocking the object that causes a systematic error in the detector,” says Lawson Wong, a graduate student in electrical engineering and computer science and lead author on the new paper. “One way around that is just to move around and go to a different viewpoint.”

MIT News explains how the algorithm works:

In hopes of arriving at a more efficient algorithm, the MIT researchers adopted a different approach. Their algorithm doesn’t discard any of the hypotheses it generates across successive images, but it doesn’t attempt to canvass them all, either. Instead, it samples from them at random. Since there’s significant overlap between different hypotheses, an adequate number of samples will generally yield consensus on the correspondences between the objects in any two successive images.

To keep the required number of samples low, the researchers adopted a simplified technique for evaluating hypotheses. Suppose that the algorithm has identified three objects from one perspective and four from another. The most mathematically precise way to compare hypotheses would be to consider every possible set of matches between the two groups of objects: the set that matches objects 1, 2, and 3 in the first view to objects 1, 2, and 3 in the second; the set that matches objects 1, 2, and 3 in the first to objects 1, 2, and 4 in the second; the set that matches objects 1, 2, and 3 in the first view to objects 1, 3, and 4 in the second, and so on. In this case, if you include the possibilities that the detector has made an error and that some objects are occluded from some views, that approach would yield 304 different sets of matches.

Instead, the researchers’ algorithm considers each object in the first group separately and evaluates its likelihood of mapping onto an object in the second group. So object 1 in the first group could map onto objects 1, 2, 3, or 4 in the second, as could object 2, and so on. Again, with the possibilities of error and occlusion factored in, this approach requires only 20 comparisons.
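The factored, sampling-based strategy described above can be sketched in miniature. The toy Python sketch below is not MIT's implementation: the object names, the placeholder similarity function, and the sample counts are all invented for illustration. It contrasts exhaustive enumeration of joint matchings with independent per-object scoring, then draws random samples of hypotheses to reach a consensus correspondence. (The article's counts of 304 and 20 also include detector-error and occlusion cases, which this sketch omits.)

```python
import random
from collections import Counter
from itertools import permutations

first = ["mug", "bowl", "plate"]          # objects seen from viewpoint 1
second = ["mug", "bowl", "plate", "cup"]  # objects seen from viewpoint 2

# Exhaustive approach: every injective assignment of the three
# first-view objects to distinct second-view objects.
joint_hypotheses = list(permutations(second, len(first)))
print(len(joint_hypotheses))  # 24 joint matchings even for this tiny case

# Factored approach: score each first-view object against each
# second-view object independently -- only 3 x 4 = 12 comparisons.
def similarity(a, b):
    return 1.0 if a == b else 0.1  # placeholder for real detector features

pairwise = {(a, b): similarity(a, b) for a in first for b in second}
print(len(pairwise))  # 12

# Sampling for consensus: draw joint hypotheses at random, weight each
# by its factored score, and tally per-object votes.
def consensus(n_samples=200, seed=1):
    rng = random.Random(seed)
    votes = {a: Counter() for a in first}
    for _ in range(n_samples):
        h = rng.choice(joint_hypotheses)
        weight = 1.0
        for a, b in zip(first, h):
            weight *= pairwise[(a, b)]
        for a, b in zip(first, h):
            votes[a][b] += weight
    return {a: c.most_common(1)[0][0] for a, c in votes.items()}

print(consensus())  # each object should map to its namesake
```

Because there is heavy overlap between hypotheses, even a modest number of random samples concentrates the votes on the correct correspondences, which is the intuition behind the researchers' sampling step.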

Using RFID Tags for Better Vision

Researchers at the Georgia Institute of Technology are also helping robots better locate objects by placing ultra-high frequency RFID tags on objects around the house. One of their PR2 robots was recently outfitted with directionally sensitive antennae that engage the robot in a "hotter or colder" search. The robot receives a stronger RFID signal when its antennae are pointed toward a tag, and also when it is closer to the tag.
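The "hotter or colder" idea can be illustrated with a toy simulation. This sketch is not the Georgia Tech system: the signal model, step size, and stopping threshold are all invented for illustration. The simulated robot probes a few headings, moves toward the strongest reading, and stops once the signal passes a threshold.

```python
import math

TAG = (4.0, 3.0)  # tag position, known only to the simulator

def rssi(x, y):
    """Toy signal model: strength falls off smoothly with distance."""
    return 1.0 / (1.0 + math.hypot(TAG[0] - x, TAG[1] - y))

def search(x=0.0, y=0.0, step=0.5, threshold=0.6, max_steps=50):
    """Greedy 'hotter or colder' search over eight compass headings."""
    path = [(x, y)]
    for _ in range(max_steps):
        if rssi(x, y) >= threshold:
            break  # signal strong enough: the tag is close
        headings = [k * math.pi / 4 for k in range(8)]
        candidates = [(x + step * math.cos(a), y + step * math.sin(a))
                      for a in headings]
        x, y = max(candidates, key=lambda p: rssi(*p))  # go 'hotter'
        path.append((x, y))
    return path

path = search()
print(len(path), round(rssi(*path[-1]), 2))
```

A real robot would read actual antenna signal strengths rather than a distance formula, but the greedy climb toward stronger readings is the same basic behavior.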

“The robot can use its mobility and our special behaviors to get close to a tag and oriented toward it,” said study co-author Travis Deyle, a former Georgia Tech student who worked on the project in Charlie Kemp’s laboratory as part of his doctoral degree. “This could allow a robot to search for, grasp and deliver the right medication to the right person at the right time. RFID provides precise identification, so the risk of delivering the wrong medication is dramatically reduced. Creating a system that allows robots to accurately locate the correct tag is an important first step.”

The robot’s object-locating abilities were tested on several household objects, including a medication bottle, a hairbrush, a TV remote and a cell phone. The researchers say this RFID system could potentially help the robot identify billions of individual objects.

“With a little modification of the objects in your home, a robot could quickly take inventory of your possessions and navigate to an object of your choosing,” Kemp said. “Are you looking for something? The robot will show you where it is.”


About the Author

Steve Crowe is managing editor of Robotics Business Review and sister website Robotics Trends.

