Individuality-Preserving Voice Conversion for Articulation Disorders Using Phoneme-Categorized Exemplars


ACM Transactions on Accessible Computing (TACCESS), Volume 6 (4) – May 11, 2015

Abstract

RYO AIHARA, Graduate School of System Informatics, Kobe University
TETSUYA TAKIGUCHI and YASUO ARIKI, Organization of Advanced Science and Technology, Kobe University

We present a voice conversion (VC) method for a person with an articulation disorder resulting from athetoid cerebral palsy. The movements of such speakers are limited by their athetoid symptoms, and their consonants are often unstable or unclear, which makes it difficult for them to communicate. Exemplar-based spectral conversion using Nonnegative Matrix Factorization (NMF) is applied to the voice of a speaker with an articulation disorder. In our conventional work, we used a combined dictionary constructed from the source speaker's vowels and the consonants of a target speaker without articulation disorders in order to preserve the speaker's individuality. However, this conventional exemplar-based approach must use all the training exemplars (frames), which can cause phoneme mismatches between input signals and selected exemplars. In order to reduce this mismatching of phoneme alignment, we propose a phoneme-categorized subdictionary and a dictionary selection method using NMF. The effectiveness of this method was confirmed by comparing it with a conventional Gaussian Mixture Model (GMM)-based method.
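The exemplar-based conversion described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function names, dictionary sizes, random toy data, and the Euclidean-cost multiplicative update are all assumptions. The core idea is that the source and target dictionaries hold parallel exemplar frames, activations are estimated against the source dictionary with the dictionary held fixed, and the same activations are then applied to the target dictionary to obtain converted spectra.

```python
import numpy as np

def estimate_activations(X, W, n_iter=200, eps=1e-12):
    """Estimate nonnegative activations H so that X ~ W @ H,
    holding the exemplar dictionary W fixed (multiplicative
    updates for the Euclidean cost)."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], X.shape[1]))
    for _ in range(n_iter):
        # Standard NMF multiplicative update, W fixed.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
    return H

def convert_spectra(X_src, W_src, W_tgt, n_iter=200):
    """Exemplar-based VC: activations estimated on the source
    dictionary are reused with the parallel target dictionary."""
    H = estimate_activations(X_src, W_src, n_iter)
    return W_tgt @ H

# Toy example with hypothetical dictionaries of 20 parallel exemplars.
rng = np.random.default_rng(1)
W_src = rng.random((64, 20))      # source exemplar frames (64 spectral bins)
W_tgt = rng.random((64, 20))      # parallel target exemplar frames
X = W_src @ rng.random((20, 5))   # 5 input frames to convert
Y = convert_spectra(X, W_src, W_tgt)
```

A phoneme-categorized subdictionary, as proposed in the paper, would replace the single `W_src`/`W_tgt` pair with one pair per phoneme category and select the appropriate pair per input segment before estimating activations.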

Publisher
Association for Computing Machinery
Copyright
Copyright © 2015 by ACM Inc.
ISSN
1936-7228
DOI
10.1145/2738048

Journal

ACM Transactions on Accessible Computing (TACCESS), Association for Computing Machinery

Published: May 11, 2015
