For convolutional neural network models that optimize an image embedding, this article proposes a method to highlight the regions of images that contribute most to pairwise similarity.
This work is a corollary to the visualization tools developed for classification networks, but applicable to problem domains better suited to similarity learning. The visualization shows how similarity networks that are fine-tuned learn to focus on different features. The approach generalizes to embedding networks that use different pooling strategies and provides a simple mechanism to support image similarity searches on objects or sub-regions of the query image. (publisher abstract modified)
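To make the core idea concrete, consider the simplest case of an embedding produced by global average pooling: the dot product of two pooled embeddings decomposes exactly into a sum of dot products between pairs of spatial locations in the two feature maps, so each location's summed contribution can be rendered as a heatmap. The NumPy sketch below illustrates that decomposition under this assumption; the function name, shapes, and normalization here are illustrative, not the authors' implementation.

```python
import numpy as np

def similarity_heatmaps(feats_a, feats_b):
    """Decompose the similarity of two average-pooled embeddings
    into per-location contributions (one heatmap per image).

    feats_a, feats_b: convolutional feature maps of shape (H, W, C),
    taken just before global average pooling.
    """
    ha, wa, c = feats_a.shape
    hb, wb, _ = feats_b.shape

    # Flatten each spatial grid to (H*W, C).
    fa = feats_a.reshape(-1, c)
    fb = feats_b.reshape(-1, c)

    # Dot product between every location in A and every location in B.
    # The pooled embeddings' dot product equals the sum of all entries
    # of this matrix, divided by the pooling normalization (Na * Nb).
    contrib = fa @ fb.T / (fa.shape[0] * fb.shape[0])

    # Each image's heatmap: how much each of its locations contributes
    # to the overall pairwise similarity.
    heat_a = contrib.sum(axis=1).reshape(ha, wa)
    heat_b = contrib.sum(axis=0).reshape(hb, wb)
    return heat_a, heat_b
```

In practice the heatmaps would be upsampled to the input resolution and overlaid on the images; per the abstract, the generalization to other pooling strategies changes how each location's contribution is computed.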
Similar Publications
- MeshMonk: Open-source large-scale intensive 3D phenotyping
- A Rapid and Accurate Method for Hemp Compliance Testing Using Liquid Chromatography Diode Array Detector With Optional Electrospray Ionization Time-of-Flight Mass Spectrometry
- Inferring bone attribution to species through micro-Computed Tomography: A comparison of third metapodials from Homo sapiens and Ursus americanus