Learning Transformations To Reduce the Geometric Shift in Object Detection

Published in CVPR, 2023

Abstract

The performance of modern object detectors drops when the test distribution differs from the training one. Most methods addressing this focus on changes in object appearance caused by, e.g., different illumination conditions or gaps between synthetic and real images. Here, by contrast, we tackle geometric shifts that emerge from variations in the image capture process, or from constraints of the environment that alter the apparent geometry of the scene content itself. We introduce a self-training approach that learns a set of geometric transformations to minimize these shifts without leveraging any labeled data in the new domain, nor any information about the cameras. We evaluate our method on two different shifts, i.e., a change in a camera’s field of view (FoV) and a change in viewpoint. Our results demonstrate that learning geometric transformations helps detectors perform better in the target domains.

Paper | Supplementary | Code | Poster | Video
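
The abstract summarizes the approach at a high level: a set of geometric transformations is learned by self-training, using only unlabeled images from the target domain. As a rough illustration of that optimization pattern (not the authors' implementation), the sketch below learns a single differentiable affine warp applied to target images, updated with pseudo-labels produced by a frozen source-trained detector. The student/teacher detector interface (teacher.predict, student.detection_loss), the data loader, and the confidence threshold are assumptions made for illustration only.

# A minimal self-training sketch, assuming a PyTorch detector with the
# interface described above. It is not the paper's method; it only
# illustrates learning warp parameters from unlabeled target images.
import torch
import torch.nn.functional as F


class LearnableWarp(torch.nn.Module):
    """Differentiable 2D affine warp whose parameters are learned."""

    def __init__(self):
        super().__init__()
        # Start from the identity transform (2x3 affine matrix).
        self.theta = torch.nn.Parameter(torch.tensor([[1.0, 0.0, 0.0],
                                                      [0.0, 1.0, 0.0]]))

    def forward(self, images):
        n, c, h, w = images.shape
        theta = self.theta.unsqueeze(0).expand(n, -1, -1)
        grid = F.affine_grid(theta, (n, c, h, w), align_corners=False)
        return F.grid_sample(images, grid, align_corners=False)


def adapt(student, teacher, target_loader, steps=1000, lr=1e-3, score_thr=0.8):
    """Self-training loop on unlabeled target images (assumed interfaces)."""
    warp = LearnableWarp()
    optimizer = torch.optim.Adam(
        list(warp.parameters()) + list(student.parameters()), lr=lr)
    for step, images in zip(range(steps), target_loader):
        warped = warp(images)                    # geometrically corrected images
        with torch.no_grad():                    # frozen, source-trained teacher
            pseudo_labels = teacher.predict(warped, score_thr)  # assumed API
        loss = student.detection_loss(warped, pseudo_labels)    # assumed API
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return warp

The paper learns a richer set of transformations than a single affine warp; the sketch above only illustrates the overall self-training pattern described in the abstract.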

Citation

If you found this work useful, please cite the associated paper:

V. Vidit, M. Engilberge, and M. Salzmann, “Learning Transformations To Reduce the Geometric Shift in Object Detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

BibTeX:

@inproceedings{vidit2023learning,
  title={Learning Transformations To Reduce the Geometric Shift in Object Detection},
  author={Vidit, Vidit and Engilberge, Martin and Salzmann, Mathieu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}