Deep Scalable Supervised Quantization by Self-Organizing Map

Publisher: Association for Computing Machinery
Copyright: © 2019 ACM
ISSN: 1551-6857
eISSN: 1551-6865
DOI: 10.1145/3328995

Abstract

Approximate Nearest Neighbor (ANN) search is an important research topic in the multimedia and computer vision fields. In this article, we propose a new deep supervised quantization method based on the Self-Organizing Map (SOM) to address this problem. Our method integrates a Convolutional Neural Network and a Self-Organizing Map into a unified deep architecture. The overall training objective optimizes a supervised quantization loss as well as a classification loss. With the supervised quantization objective, we minimize the differences on the map between similar image pairs and maximize the differences on the map between dissimilar image pairs. Through this optimization, the deep architecture can simultaneously extract deep features and quantize them onto suitable nodes of the self-organizing map. To make the proposed deep supervised quantization method scalable to large datasets, instead of constructing a single large self-organizing map, we propose to divide the input space into several subspaces and construct a self-organizing map in each subspace. Together, the self-organizing maps in all the subspaces implicitly form a large self-organizing map, which costs less memory and training time than directly constructing a self-organizing map of equal size. Experiments on several public standard datasets demonstrate the superiority of our approach over existing ANN search methods. In addition, as a by-product, our deep architecture can be applied directly to visualization with little modification, and promising performance is demonstrated in the experiments.
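No code accompanies this abstract, so the following is only a minimal NumPy sketch of the scalability idea it describes: splitting the feature space into subspaces and training a small self-organizing map in each, so that the per-subspace maps together act as one large implicit codebook. All function names, grid sizes, and hyperparameters below are illustrative assumptions, and the paper's CNN feature extractor and supervised pairwise quantization loss are omitted.

```python
import numpy as np

def train_som(data, grid_h=8, grid_w=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map on `data` (n_samples x dim)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    # One weight vector per grid node.
    weights = rng.normal(scale=0.1, size=(grid_h * grid_w, dim))
    # Grid coordinates used by the neighborhood function.
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], dtype=float)
    total_steps = epochs * n
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            t = step / total_steps
            lr = lr0 * (1.0 - t)                 # linearly decayed learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3    # shrinking neighborhood radius
            # Best-matching unit: the node closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU on the grid.
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def quantize_by_subspaces(features, n_subspaces=4, grid_h=8, grid_w=8):
    """Split features into subspaces and quantize each with its own SOM."""
    n, dim = features.shape
    assert dim % n_subspaces == 0, "feature dim must be divisible by n_subspaces"
    sub_dim = dim // n_subspaces
    codes = np.empty((n, n_subspaces), dtype=np.int64)
    codebooks = []
    for s in range(n_subspaces):
        sub = features[:, s * sub_dim:(s + 1) * sub_dim]
        som = train_som(sub, grid_h=grid_h, grid_w=grid_w)
        codebooks.append(som)
        # Assign each sample to its nearest SOM node in this subspace.
        d = np.linalg.norm(sub[:, None, :] - som[None, :, :], axis=2)
        codes[:, s] = np.argmin(d, axis=1)
    return codes, codebooks

if __name__ == "__main__":
    feats = np.random.default_rng(1).normal(size=(500, 64)).astype(np.float32)
    codes, books = quantize_by_subspaces(feats)
    print(codes.shape)  # (500, 4): one node index per subspace
```

In the full method the sub-vectors would come from a jointly trained CNN and the node assignments would be shaped by the supervised quantization and classification losses; this sketch only illustrates why the subspace decomposition is cheaper: four 8×8 maps (256 nodes in total) implicitly index 64^4 combined codewords, far fewer parameters than a single map of that size.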

Journal

ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), Association for Computing Machinery

Published: Aug 20, 2019

Keywords: Approximate nearest neighbor search
