
MIT’s new AI teaches its robot to see, feel and recognize objects

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new AI-driven system that enables a robot arm to see, feel and recognize objects by learning to link visual and tactile signals.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said CSAIL PhD student and lead author on the research Yunzhu Li, who wrote the paper alongside MIT professors Russ Tedrake and Antonio Torralba and MIT postdoc Jun-Yan Zhu. “By blindly touching around, our [AI] model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”

During testing at MIT’s CSAIL, the system recorded over 12,000 touch interactions using a single camera. Each of the 12,000 video clips was broken down into static frames to create “VisGel”, a dataset of over 3 million paired visual and tactile images. The system also uses generative adversarial networks (GANs) to better model the correspondence between vision and touch, something that proved a challenge in a similar MIT project back in 2016.
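To illustrate the idea behind a dataset like VisGel, here is a minimal, hypothetical sketch of how time-aligned visual/tactile pairs could be assembled from recorded touch clips. The `clip` structure, frame names, and `build_pairs` function are illustrative assumptions for this article, not the researchers' actual pipeline:

```python
def build_pairs(clips):
    """clips: list of dicts holding parallel 'visual' and 'tactile' frame lists.

    Returns a flat list of (visual_frame, tactile_frame) pairs, one per
    moment in time, which is the kind of supervision a vision-to-touch
    GAN could train on.
    """
    pairs = []
    for clip in clips:
        # zip stops at the shorter stream, so every pair stays time-aligned
        for visual_frame, tactile_frame in zip(clip["visual"], clip["tactile"]):
            pairs.append((visual_frame, tactile_frame))
    return pairs


# Two toy clips: strings stand in for real camera and touch-sensor frames.
clips = [
    {"visual": ["v0", "v1", "v2"], "tactile": ["t0", "t1", "t2"]},
    {"visual": ["v3", "v4"], "tactile": ["t3", "t4"]},
]
pairs = build_pairs(clips)
print(len(pairs))   # 5 time-aligned visual/tactile pairs
print(pairs[0])     # ('v0', 't0')
```

Scaled up to thousands of clips with real image frames, this pairing step is what turns raw touch recordings into the millions of paired examples a GAN needs.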

“This is the first method that can convincingly translate between visual and touch signals,” says Andrew Owens, a postdoctoral researcher at the University of California at Berkeley. “Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘is this object hard or soft?’ or ‘if I lift this mug by its handle, how good will my grip be?’ This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”
