Finding Beans in Burgers: Deep Semantic-Visual Embedding with Localization

Published in CVPR, 2018

Abstract

Several works have proposed to learn a two-path neural network that maps images and texts, respectively, into the same shared Euclidean space, where geometry captures useful semantic relationships. Such a multi-modal embedding can be trained and used for various tasks, notably image captioning. In the present work, we introduce a new architecture of this type, with a visual path that leverages recent space-aware pooling mechanisms. Combined with a textual path which is jointly trained from scratch, our semantic-visual embedding offers a versatile model. Once trained under the supervision of captioned images, it yields new state-of-the-art performance on cross-modal retrieval. It also allows the localization of new concepts from the embedding space into any input image, delivering state-of-the-art results on the visual grounding of phrases.
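
To make the two-path setup concrete, below is a minimal sketch in PyTorch (not the authors' released code): a convolutional visual path whose spatial feature map is pooled into a global vector, a recurrent textual path, and a bidirectional triplet ranking loss over in-batch negatives. The backbone (ResNet-18), the simple spatial max-pooling, the GRU text encoder, the embedding dimension and the margin are illustrative assumptions, not the paper's exact components.

# Minimal sketch (not the authors' implementation) of a two-path
# semantic-visual embedding: a visual path and a textual path projected
# into a shared space and trained with a triplet ranking loss.
# Backbone, pooling, text encoder and margin are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class VisualPath(nn.Module):
    """CNN feature map -> pooled, L2-normalized embedding."""
    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.project = nn.Conv2d(512, embed_dim, kernel_size=1)

    def forward(self, images):
        fmap = self.project(self.features(images))   # B x D x H x W
        pooled = fmap.flatten(2).max(dim=2).values   # simple spatial max-pooling
        return F.normalize(pooled, dim=1)


class TextualPath(nn.Module):
    """Word indices -> GRU -> L2-normalized embedding."""
    def __init__(self, vocab_size=10000, embed_dim=512, word_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.rnn = nn.GRU(word_dim, embed_dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))           # h: 1 x B x D
        return F.normalize(h.squeeze(0), dim=1)


def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge loss over in-batch negatives."""
    scores = img_emb @ txt_emb.t()                        # cosine similarities
    pos = scores.diag().unsqueeze(1)
    cost_txt = (margin + scores - pos).clamp(min=0)       # caption negatives
    cost_img = (margin + scores - pos.t()).clamp(min=0)   # image negatives
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_txt.masked_fill(mask, 0).sum() + cost_img.masked_fill(mask, 0).sum()


if __name__ == "__main__":
    imgs = torch.randn(4, 3, 224, 224)
    caps = torch.randint(0, 10000, (4, 12))
    v, t = VisualPath(), TextualPath()
    loss = triplet_ranking_loss(v(imgs), t(caps))
    loss.backward()
    print(loss.item())

Running the script builds both paths on random inputs and backpropagates the ranking loss. The paper's model differs in particular in its space-aware pooling of the visual feature map, which is what makes it possible to localize concepts from the embedding space back into the input image.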

Paper Code Poster Slides

Citation

If you find this work useful, please cite the associated paper:

M. Engilberge, L. Chevallier, P. Pérez, and M. Cord, “Finding Beans in Burgers: Deep Semantic-Visual Embedding with Localization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 3984–3993.

BibTeX:

@inproceedings{engilbergeFinding2018,
  title = {Finding Beans in Burgers: Deep Semantic-Visual Embedding with Localization},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  author = {Engilberge, Martin and Chevallier, Louis and Pérez, Patrick and Cord, Matthieu},
  year = {2018},
  pages = {3984--3993}
}