LCBM: A Multi-View Probabilistic Model for Multi-Label Classification.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 43 (8): 15 – Jul 2, 2021

Abstract

Multi-label classification is an important research topic in machine learning, for which exploiting label dependencies is an effective modeling principle. Recently, probabilistic models have shown great potential for discovering dependencies among labels. In this paper, motivated by the recent success of multi-view learning in improving generalization performance, we propose a novel multi-view probabilistic model named the latent conditional Bernoulli mixture (LCBM) for multi-label classification. LCBM is a generative model that takes features from different views as inputs; conditional on a latent subspace shared by the views, a Bernoulli mixture model is adopted to capture label dependencies. Within each mixture component the labels are only weakly correlated, which keeps computation tractable. The mean-field variational inference framework is used to carry out approximate posterior inference in the probabilistic model, where we propose a Gaussian mixture variational autoencoder (GMVAE) for effective posterior approximation. We further develop a scalable stochastic training algorithm that efficiently optimizes the model parameters and variational parameters, and derive an efficient prediction procedure based on greedy search. Experimental results on multiple benchmark datasets show that our approach outperforms other state-of-the-art methods under various metrics.
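
To make the modeling idea concrete, here is a minimal sketch of the label likelihood the abstract describes, assuming (as in standard conditional Bernoulli mixtures) that labels are conditionally independent within each mixture component; the symbols K, L, \pi_k, and \mu_{kl} below are illustrative and are not taken from the paper:

p(\mathbf{y} \mid \mathbf{z}) = \sum_{k=1}^{K} \pi_k(\mathbf{z}) \prod_{l=1}^{L} \mathrm{Bern}\big( y_l \mid \mu_{kl}(\mathbf{z}) \big)

Here \mathbf{z} denotes the latent representation shared across the views, \pi_k(\mathbf{z}) are the mixing weights, and \mu_{kl}(\mathbf{z}) is the probability of label l under component k. Under such a sketch each component factorizes over labels, so label dependencies arise only through the mixture, which is what would make posterior inference and the greedy-search prediction computationally convenient.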

ISSN: 0162-8828
eISSN: 1939-3539
DOI: 10.1109/TPAMI.2020.2974203
PMID: 32078533

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Published: Jul 2, 2021
