For convolutional neural network models that optimize an image embedding, this article proposes a method for highlighting the regions of a pair of images that contribute most to their pairwise similarity.
This work is complementary to the visualization tools developed for classification networks, but is applicable to the problem domains better suited to similarity learning. The visualization shows how fine-tuned similarity networks learn to focus on different features. The approach generalizes to embedding networks that use different pooling strategies, and it provides a simple mechanism to support image-similarity searches on objects or sub-regions of the query image.
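For embedding networks that end in global average pooling, the pairwise similarity decomposes exactly: the dot product of two averaged feature maps equals the average, over spatial positions, of each position's feature vector dotted with the other image's pooled embedding. The sketch below illustrates one way such a heatmap can be computed; the ResNet-50 backbone and the helper names (`conv_features`, `similarity_heatmap`) are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch, assuming a torchvision ResNet-50 backbone whose
# embedding is the global average pool of its last conv feature map.
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.eval()
trunk = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop pool + fc

def conv_features(x: torch.Tensor) -> torch.Tensor:
    """Last conv feature map, shape (N, C, H, W), before average pooling."""
    with torch.no_grad():
        return trunk(x)

def similarity_heatmap(img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    """Per-location contributions of img_a to the pairwise similarity.

    With e = mean_{i,j} A[:, i, j], the dot product e_a . e_b equals the
    mean over positions (i, j) of A[:, i, j] . e_b, so each spatial
    position's share of the similarity can be read off directly.
    """
    fa = conv_features(img_a)            # (1, C, H, W)
    fb = conv_features(img_b)
    eb = fb.mean(dim=(2, 3))             # (1, C): pooled embedding of img_b
    h, w = fa.shape[2], fa.shape[3]
    # Dot each spatial feature of img_a with img_b's embedding; the
    # resulting map sums to exactly e_a . e_b.
    return torch.einsum('nchw,nc->nhw', fa, eb) / (h * w)

# Upsample the returned (1, H, W) map to image size for display, e.g. with
# torch.nn.functional.interpolate, and overlay it on img_a.
```

If the embedding is L2-normalized before comparison, as is common in similarity learning, the same map applies up to the two embedding norms, and swapping the roles of the images yields the heatmap for the second image. Other pooling strategies, such as max pooling, require attributing similarity only through the selected activations, which is presumably the kind of generalization the abstract refers to.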
Similar Publications
- Interactive analysis and visualization of situationally aware building evacuations
- Differential Sampling of Contact Surfaces of Footwear to Separate Fractions of Loosely, Moderately and Tightly Held Particles
- Quantitative Assessment of the Effects of Microbial Degradation of a Simple Hydrocarbon Mixture