Deep Multimodal Embedding

About

Obtaining a good common representation across different modalities is challenging for two main reasons. First, each modality might intrinsically have very different statistical properties – for example, most trajectory representations are inherently dense, while a bag-of-words representation of language is by nature sparse. This makes it challenging to apply algorithms designed for unimodal data, as one modality might overpower the others. Second, even with expert knowledge, it is extremely challenging to design joint features between such disparate modalities. Humans are able to map similar concepts from different sensory systems to the same concept using a common representation between modalities [14]. For example, we are able to correlate the appearance of a banana with its feel, or a language instruction with a real-world action. This ability to fuse information from different input modalities and map it to actions is extremely useful for a household robot.

We introduce an algorithm that learns to pull semantically similar environment/language pairs and their corresponding trajectories toward the same regions of a shared embedding space, and to push environment/language pairs away from irrelevant trajectories by an amount that depends on how irrelevant those trajectories are. Our algorithm also allows for efficient inference: given a new instruction and point-cloud, we only need to find the trajectory nearest to the projection of this pair in the learned embedding space using a fast nearest-neighbor algorithm [46].
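The sketch below illustrates this idea in PyTorch; it is not the authors' released code. The module names, feature dimensions, the margin-based ranking loss, and the per-example dissimilarity score are all illustrative assumptions that stand in for the specific architecture and loss used in the paper.

```python
# Minimal sketch of a shared multimodal embedding with a margin-based ranking
# loss; all names, dimensions, and the dissimilarity score are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionNet(nn.Module):
    """Maps one modality (e.g., point-cloud/language features or a trajectory
    representation) into the shared embedding space."""
    def __init__(self, in_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def ranking_loss(env_lang_emb, pos_traj_emb, neg_traj_emb, neg_dissimilarity):
    """Pull the environment/language embedding toward its matching trajectory
    and push it away from an irrelevant one, with a margin that grows with how
    irrelevant that trajectory is (a stand-in for the loss in the paper)."""
    d_pos = (env_lang_emb - pos_traj_emb).pow(2).sum(dim=-1)
    d_neg = (env_lang_emb - neg_traj_emb).pow(2).sum(dim=-1)
    margin = neg_dissimilarity  # larger when the negative is more irrelevant
    return F.relu(margin + d_pos - d_neg).mean()

# Toy usage with random features; real inputs would be point-cloud/language
# features and trajectory representations from the dataset.
env_lang_net = ProjectionNet(in_dim=200)
traj_net = ProjectionNet(in_dim=50)
x = torch.randn(8, 200)                  # environment + language features
t_pos, t_neg = torch.randn(8, 50), torch.randn(8, 50)
dissim = torch.rand(8)                   # e.g., a trajectory dissimilarity score
loss = ranking_loss(env_lang_net(x), traj_net(t_pos), traj_net(t_neg), dissim)
loss.backward()

# Inference: embed the new instruction/point-cloud pair and return the nearest
# trajectory from a pre-embedded library of candidate trajectories.
with torch.no_grad():
    library = traj_net(torch.randn(100, 50))
    query = env_lang_net(torch.randn(1, 200))
    nearest = torch.cdist(query, library).argmin(dim=-1)
```

Because all candidate trajectories can be embedded once offline, test-time inference reduces to a single forward pass for the instruction/point-cloud pair followed by a nearest-neighbor lookup in the shared space.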


* Please refer to the journal version of this work for an explanation of this figure.

Research



Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories
Jaeyong Sung, Ian Lenz, Ashutosh Saxena
In IEEE International Conference on Robotics and Automation (ICRA), 2017
[PDF] [arXiv] [Dataset]

Video

As the PR2 robot stands in front of an object it has never seen before, it is given a natural language instruction (from a manual) and a segmented point-cloud. Using our algorithm, the robot was even able to make a cup of latte.