Non-permanent facial makeup is one of the most challenging problems for face recognition systems in security applications. In this paper, a new method is proposed for makeup-invariant face identification and verification. Face images from the virtual makeup (VMU) and YouTube makeup (YMU) datasets were subjected to Gabor filtering and the histogram of oriented gradients (HOG) method for feature extraction. The Gabor and HOG features were concatenated to form the final feature vectors, which were then reduced in a Fisher linear discriminant analysis (FLDA) subspace. The reduced features were classified using the city block distance (CBD), Euclidean distance (EUC), cosine similarity measure (CSM) and whitened cosine similarity measure (WCSM). The CSM achieved the best recognition rates of the four metrics. Performance evaluation produced identification and verification rates of 100% each for the VMU database, and of 72.52% and 79.47%, respectively, for the YMU database. The developed method outperformed several state-of-the-art methods evaluated for comparison.
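The abstract names four matching metrics applied to the FLDA-reduced feature vectors. The paper's exact parameters and implementation are not given here; the numpy sketch below (function names and the nearest-neighbour identification step are my own illustration, not the authors' code) shows how the four metrics and a CSM-based identification decision could look, assuming the Gabor+HOG features have already been extracted and projected into the FLDA subspace.

```python
import numpy as np

def city_block(a, b):
    # CBD: L1 (Manhattan) distance between two feature vectors
    return float(np.sum(np.abs(a - b)))

def euclidean(a, b):
    # EUC: L2 distance
    return float(np.linalg.norm(a - b))

def cosine_sim(a, b):
    # CSM: cosine of the angle between the two vectors (higher = more similar)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def whitened_cosine_sim(a, b, cov):
    # WCSM: cosine similarity after whitening both vectors with an
    # (assumed) covariance matrix estimated from the training features
    w = np.linalg.inv(np.linalg.cholesky(cov))
    return cosine_sim(w @ a, w @ b)

def identify(probe, gallery, labels):
    # Nearest-neighbour identification: assign the probe the label of the
    # gallery vector with the highest cosine similarity (the best metric here)
    sims = [cosine_sim(probe, g) for g in gallery]
    return labels[int(np.argmax(sims))]
```

With the identity matrix as covariance, WCSM reduces to plain CSM; in practice the covariance would be estimated from the reduced training features. Verification would threshold the chosen similarity score rather than take an argmax over a gallery.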
International Journal of Signal and Imaging Systems Engineering – Inderscience Publishers
Published: Jan 1, 2017