Individuality-Preserving Voice Conversion for Articulation Disorders Using Phoneme-Categorized Exemplars
RYO AIHARA, Graduate School of System Informatics, Kobe University
TETSUYA TAKIGUCHI and YASUO ARIKI, Organization of Advanced Science and Technology, Kobe University

Abstract

We present a voice conversion (VC) method for a person with an articulation disorder resulting from athetoid cerebral palsy. The movements of such speakers are limited by their athetoid symptoms, and their consonants are often unstable or unclear, which makes it difficult for them to communicate. Exemplar-based spectral conversion using Nonnegative Matrix Factorization (NMF) is applied to the voice of a speaker with an articulation disorder. In our conventional work, we used a combined dictionary constructed from the source speaker's vowels and the consonants of a target speaker without articulation disorders in order to preserve the source speaker's individuality. However, this conventional exemplar-based approach needs to use all the training exemplars (frames), which may cause phoneme mismatches between input signals and selected exemplars. In order to reduce these phoneme-alignment mismatches, we propose phoneme-categorized subdictionaries and a dictionary selection method using NMF. The effectiveness of this method was confirmed by comparing it with a conventional Gaussian Mixture Model (GMM)-based method.
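As a rough illustration of the exemplar-based conversion summarized above, the sketch below estimates nonnegative activations of input spectral frames against a source-speaker exemplar dictionary and reuses those activations to weight a parallel combined dictionary (source vowels plus target-speaker consonants). This is a minimal sketch in Python/NumPy, assuming column-wise nonnegative spectral features and squared-Euclidean multiplicative updates; the function names, dictionary variables, and iteration count are illustrative assumptions, not the authors' exact formulation, which further introduces phoneme-categorized subdictionaries and dictionary selection.

    import numpy as np

    def estimate_activations(X, D, n_iter=200, eps=1e-12):
        # Estimate nonnegative activations H such that X ~= D @ H,
        # keeping the exemplar dictionary D fixed.
        # X: (n_features, n_frames) nonnegative input spectra
        # D: (n_features, n_exemplars) source-speaker exemplar dictionary
        # Multiplicative updates for the squared Euclidean objective
        # (an assumption for this sketch; divergence and sparsity terms differ in practice).
        n_exemplars, n_frames = D.shape[1], X.shape[1]
        H = np.random.rand(n_exemplars, n_frames) + eps
        for _ in range(n_iter):
            H *= (D.T @ X) / (D.T @ D @ H + eps)
        return H

    def convert(X_src, D_src, D_combined):
        # Exemplar-based VC: activations computed against the source
        # dictionary are applied to the parallel combined dictionary,
        # whose exemplars are aligned one-to-one with D_src.
        H = estimate_activations(X_src, D_src)
        return D_combined @ H

In this view, preserving individuality comes from the construction of D_combined: exemplars corresponding to the source speaker's vowels are kept as-is, while consonant exemplars are replaced by aligned frames from the target speaker, so the converted spectra D_combined @ H inherit the source speaker's vowel characteristics.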