New MIT robot can identify objects by sight and touch

For humans, it is easy to identify an object by touch, or to tell how something will feel simply by looking at it. For robots, however, this remains one of the biggest obstacles. Now a new robot developed at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is trying to do just that.

The team took a KUKA robotic arm and added a tactile sensor called GelSight, created by Ted Adelson's group at CSAIL. The information collected by GelSight was then fed to an artificial intelligence (AI) model so that it could learn the relationship between visual and tactile information.

To teach the AI how to identify objects by touch, the team recorded 12,000 videos of 200 objects, such as fabrics, tools and household items, being touched. The videos were broken down into still images, and the AI used this data set to connect touch data with visual data.
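The article does not describe how the recordings were turned into paired training examples; purely as an illustration, the sketch below shows one way still frames from a scene camera could be matched with GelSight frames captured at the same moments. The file layout, naming scheme and sampling step are assumptions for the example, not details from the article.

```python
# Illustrative sketch only: pair still frames from a touch video with
# GelSight tactile frames recorded at the same frame indices.
# Paths, file names and the sampling step are hypothetical.
import cv2  # pip install opencv-python
from pathlib import Path


def extract_pairs(scene_video: str, gelsight_video: str, out_dir: str, step: int = 10) -> None:
    """Save every `step`-th frame from the scene camera together with the
    GelSight frame at the same index, producing (visual, tactile) pairs."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    scene = cv2.VideoCapture(scene_video)
    touch = cv2.VideoCapture(gelsight_video)
    idx = 0
    while True:
        ok_scene, scene_frame = scene.read()
        ok_touch, touch_frame = touch.read()
        if not (ok_scene and ok_touch):
            break  # stop when either stream ends
        if idx % step == 0:
            cv2.imwrite(str(out / f"visual_{idx:06d}.png"), scene_frame)
            cv2.imwrite(str(out / f"tactile_{idx:06d}.png"), touch_frame)
        idx += 1
    scene.release()
    touch.release()


# Example: build pairs for one recorded interaction (hypothetical files)
# extract_pairs("videos/mug_scene.mp4", "videos/mug_gelsight.mp4", "dataset/mug")
```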

"Looking at the scene, our model can imagine the sensation of touching a flat surface or a sharp edge," says Yunzhu Li, a CSAIL doctoral student and the lead author of a new article on the system. "By blindly tapping around, our model can predict interaction with the environment purely from tactile feelings. Gathering these two senses could empower the robot and reduce the data we may need for tasks involving manipulating and grasping objects."

For now, it is still difficult for the robot to operate outside a controlled environment, but MIT is trying to expand the data set so that it can work in more diverse settings. "Methods like this have the potential to be very useful for robotics, where you often need to answer questions like 'Is this object hard or soft?' or 'If I lift this mug by the handle, how good will my grip be?'" says Andrew Owens, a postdoctoral researcher at the University of California, Berkeley.

"This is a very challenging problem, since the touches are very different, but this model showed great ability."
