Hierarchical Average Precision Training for Pertinent Image Retrieval
Published in European Conference on Computer Vision (ECCV 2022), 2022
Image Retrieval is commonly evaluated with Average Precision (AP) or Recall@k. Yet these metrics are limited to binary labels and do not take into account the severity of errors. This paper introduces a new hierarchical AP training method for pertinent image retrieval (HAPPIER). HAPPIER is based on a new H-AP metric, which leverages a concept hierarchy to refine AP by integrating error importance and to better evaluate rankings. To train deep models with H-AP, we carefully study the problem's structure and design a smooth lower-bound surrogate combined with a clustering loss that ensures consistent ordering. Extensive experiments on 6 datasets show that HAPPIER significantly outperforms state-of-the-art methods for hierarchical retrieval, while being on par with the latest approaches when evaluating fine-grained ranking performance. Finally, we show that HAPPIER leads to better organization of the embedding space and prevents the most severe failure cases of non-hierarchical methods. Our code is publicly available at https://github.com/elias-ramzi/HAPPIER.
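To illustrate the idea of grading errors by their severity, the snippet below is a minimal sketch (not the authors' implementation) of a graded-relevance AP in the spirit of H-AP. It assumes each retrieved item carries a relevance in [0, 1] derived from the deepest hierarchy level it shares with the query (1.0 for the same fine-grained class, 0.0 for an unrelated item); the function name and the example relevance values are illustrative only.

```python
def hierarchical_ap(relevances):
    """Graded-relevance AP sketch: reduces to standard AP when relevances are 0/1.

    `relevances` lists the relevance of each retrieved item, in ranked order.
    """
    total_relevance = sum(relevances)
    if total_relevance == 0:
        return 0.0

    h_ap = 0.0
    for k, rel_k in enumerate(relevances):          # k is a 0-based rank
        if rel_k == 0:
            continue
        # Hierarchical rank: the item's own relevance plus the relevance
        # (capped at rel_k) of every item ranked before it.
        h_rank = rel_k + sum(min(rel_j, rel_k) for rel_j in relevances[:k])
        h_ap += h_rank / (k + 1)                    # divide by the 1-based rank
    return h_ap / total_relevance


# A ranking that confuses two sibling classes (relevance 0.5) is penalised
# less than one that retrieves an unrelated item (relevance 0.0).
print(hierarchical_ap([1.0, 0.5, 0.0, 1.0]))   # mild, sibling-level error -> 0.85
print(hierarchical_ap([1.0, 0.0, 0.0, 1.0]))   # severe, unrelated error   -> 0.75
```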
Recommended citation: Ramzi, E., Audebert, N., Thome, N., Rambour, C., Bitot, X.: Hierarchical Average Precision Training for Pertinent Image Retrieval. In: European Conference on Computer Vision. Springer (2022). https://arxiv.org/abs/2207.04873