Attentive recurrent adversarial domain adaptation with Top-k pseudo-labeling for time series classification



Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ISSN
0924-669X
eISSN
1573-7497
DOI
10.1007/s10489-022-04176-x

Abstract

The key challenge of Unsupervised Domain Adaptation (UDA) for time series data is to learn domain-invariant representations by capturing complex temporal dependencies. Existing UDA methods for time series, such as R-DANN (Purushotham et al. 2017), VRADA (Purushotham et al. 2017) and CoDATS (Wilson et al. 2020), are designed to align the marginal distributions of the source and target domains, but they neglect the conditional distribution discrepancy between the two domains, which leads to misclassification in the target domain. Therefore, to learn domain-invariant representations that capture temporal dependencies and to reduce the conditional distribution discrepancy between the two domains, this paper proposes a novel Attentive Recurrent Adversarial Domain Adaptation method with Top-k time series pseudo-labeling, called ARADA-TK. In the experiments, the proposed method was compared with the state-of-the-art UDA methods R-DANN, VRADA and CoDATS. Experimental results on four benchmark datasets show that ARADA-TK achieves superior classification accuracy compared to the competing methods.
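
For intuition only, below is a minimal sketch (not the authors' implementation) of the general Top-k pseudo-labeling idea referred to in the abstract: given softmax predictions on unlabeled target-domain samples, keep only the k most confident predictions per class and use them as pseudo-labels for conditional alignment. The function name topk_pseudo_labels, the tensor probs and the parameter k are illustrative assumptions, not taken from the paper; PyTorch is assumed.

# Hedged sketch of generic Top-k pseudo-labeling (illustrative, not the paper's code).
# probs: (N, C) softmax outputs for N unlabeled target samples over C classes.
import torch

def topk_pseudo_labels(probs: torch.Tensor, k: int):
    """Return indices of the k most confidently predicted target samples per class,
    together with their pseudo-labels (the predicted class)."""
    confidences, predicted = probs.max(dim=1)  # per-sample confidence and argmax class
    selected_idx, selected_labels = [], []
    for c in range(probs.size(1)):
        class_idx = (predicted == c).nonzero(as_tuple=True)[0]
        if class_idx.numel() == 0:
            continue
        # Rank samples predicted as class c by confidence and keep at most k of them.
        top = confidences[class_idx].topk(min(k, class_idx.numel())).indices
        keep = class_idx[top]
        selected_idx.append(keep)
        selected_labels.append(torch.full((keep.numel(),), c, dtype=torch.long))
    return torch.cat(selected_idx), torch.cat(selected_labels)

# Toy usage: 8 unlabeled target samples, 3 classes, keep the top-2 per class.
if __name__ == "__main__":
    torch.manual_seed(0)
    probs = torch.randn(8, 3).softmax(dim=1)
    idx, pseudo = topk_pseudo_labels(probs, k=2)
    print(idx, pseudo)  # selected sample indices and their pseudo-labels

In methods of this family, the selected pseudo-labeled target samples are then typically fed back into training alongside the labeled source data so that class-conditional distributions, not just the marginal distributions, can be aligned.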

Journal

Applied Intelligence (Springer Journals)

Published: Jun 1, 2023

Keywords: Domain adaptation; Adversarial training; Time series classification; Attentive; Pseudo-labeling
