Multi-View Clustering via Deep Matrix Factorization

Proceedings of the AAAI Conference on Artificial Intelligence, Volume 31 (1) – Feb 13, 2017

Abstract

Multi-View Clustering (MVC) has garnered increasing attention recently, since many real-world datasets are composed of different representations, or views. The key is to exploit the complementary information among views to benefit the clustering problem. In this paper, we present a deep matrix factorization framework for MVC, in which semi-nonnegative matrix factorization is adopted to learn the hierarchical semantics of multi-view data in a layer-wise fashion. To maximize the mutual information from each view, we enforce the non-negative representations of all views in the final layer to be the same. Furthermore, to respect the intrinsic geometric structure of the data in each view, graph regularizers are introduced to couple the output representations of the deep structures. As a non-trivial contribution, we provide a solution based on an alternating minimization strategy, followed by a theoretical proof of convergence. Superior experimental results on three face benchmarks demonstrate the effectiveness of the proposed deep matrix factorization model.
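The abstract describes a layer-wise pipeline: per-view semi-nonnegative matrix factorization stacked into a deep factorization, with the final-layer non-negative representation shared across views and then clustered. The snippet below is a minimal sketch of that idea, not the authors' algorithm: it pre-trains each view greedily with standard semi-NMF multiplicative updates, approximates the shared final layer by simply averaging across views (in place of the paper's joint optimization with graph regularizers), and clusters the result with k-means. The function names `semi_nmf` and `deep_mf_mvc` are hypothetical.

```python
# Minimal sketch of layer-wise semi-NMF for multi-view clustering.
# Simplifying assumptions: greedy per-view pre-training, a view-average in
# place of the paper's jointly optimized shared layer, no graph regularizers.
import numpy as np
from sklearn.cluster import KMeans

def semi_nmf(X, k, n_iter=100, eps=1e-9):
    """Factorize X (d x n) ~= Z @ H with H >= 0; Z is unconstrained (semi-NMF)."""
    d, n = X.shape
    H = np.random.rand(k, n)                        # non-negative init
    for _ in range(n_iter):
        Z = X @ H.T @ np.linalg.pinv(H @ H.T)       # least-squares update for Z
        A, B = Z.T @ X, Z.T @ Z
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2   # positive/negative parts
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        # Multiplicative update keeps H non-negative
        H *= np.sqrt((Ap + Bn @ H) / (An + Bp @ H + eps))
    return Z, H

def deep_mf_mvc(views, layer_sizes, n_clusters):
    """views: list of (d_v x n) matrices; layer_sizes: e.g. [100, 50, n_clusters]."""
    finals = []
    for X in views:
        H = X
        for k in layer_sizes:                       # greedy layer-wise factorization
            _, H = semi_nmf(H, k)
        finals.append(H)
    H_shared = np.mean(finals, axis=0)              # crude surrogate for the shared final layer
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(H_shared.T)
```

For example, given two views X1 and X2 of the same n face images (each a features-by-samples matrix), `deep_mf_mvc([X1, X2], layer_sizes=[100, 50, 10], n_clusters=10)` would return one cluster label per sample.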

Publisher: CrossRef
ISSN: 2374-3468
DOI: 10.1609/aaai.v31i1.10867

Journal

Proceedings of the AAAI Conference on Artificial Intelligence (CrossRef)

Published: Feb 13, 2017