Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/13146
Full metadata record
DC Field                   Value                                                  Language
dc.contributor.author      Hasrul Mohd Nazid                                      -
dc.contributor.author      Hariharan Muthusamy                                    -
dc.contributor.author      Vikneswaran Vijean                                     -
dc.contributor.author      Sazali Yaacob                                          -
dc.date.accessioned        2016-05-19T04:24:22Z                                   -
dc.date.available          2016-05-19T04:24:22Z                                   -
dc.date.issued             2015                                                   -
dc.identifier.citation     Hasrul Mohd Nazid, Hariharan Muthusamy, Vikneswaran Vijean, Sazali Yaacob. 2015. “Improved Speaker-Independent Emotion Recognition from Speech Using Two-Stage Feature Reduction.” Journal of ICT 14: 57–76.  en_US
dc.identifier.issn         1675-414X                                              -
dc.identifier.uri          http://www.jict.uum.edu.my/index.php/component/jdownloads/download/4-ict-vol-14-2015/4-improved-speaker-independent-emotion-recognition-from-speech-using-two-stage-feature-reduction  -
dc.identifier.uri          http://ir.unikl.edu.my/jspui/handle/123456789/13146    -
dc.description             This article is indexed by SCOPUS. Sazali Yaacob (UniKL MSI)  en_US
dc.description.abstract    In recent years, researchers have focused on improving the accuracy of speech emotion recognition. Generally, high recognition accuracies have been obtained for two-class emotion recognition, but multi-class emotion recognition remains a challenging task. The main aim of this work is to propose a two-stage feature reduction using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to improve the accuracy of a speech emotion recognition (ER) system. Short-term speech features were extracted from the emotional speech signals. Experiments were carried out using four different supervised classifiers with two different emotional speech databases. The experimental results show that the proposed method provides improved accuracies of 87.48% for the speaker-dependent (SD) and gender-dependent (GD) ER experiments, 85.15% for the speaker-independent (SI) ER experiment, and 87.09% for the gender-independent (GI) experiment.  en_US
dc.publisher               Universiti Utara Malaysia Press                        en_US
dc.subject                 Emotional speech                                       en_US
dc.subject                 cepstral features                                      en_US
dc.subject                 feature reduction                                      en_US
dc.subject                 emotion recognition                                    en_US
dc.title                   Improved Speaker-Independent Emotion Recognition from Speech Using Two-Stage Feature Reduction  en_US
dc.type                    Article                                                en_US
Appears in Collections: Journal Articles
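
The abstract above describes a two-stage feature reduction, PCA followed by LDA, applied to short-term speech features before supervised classification. The sketch below is only a minimal illustration of that pipeline shape using scikit-learn; the feature dimensionality, number of emotion classes, component counts, placeholder data, and the SVM classifier are assumptions for demonstration, not the authors' configuration or results.

```python
# Minimal sketch of two-stage feature reduction (PCA then LDA) for speech
# emotion recognition, assuming scikit-learn and pre-extracted short-term
# features (e.g., per-utterance MFCC statistics). Shapes and parameters are
# illustrative only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 500 utterances, each a 120-dimensional feature vector,
# labelled with one of 7 hypothetical emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 120))
y = rng.integers(0, 7, size=500)

pipeline = Pipeline([
    ("scale", StandardScaler()),                           # normalise features
    ("pca", PCA(n_components=40)),                         # stage 1: unsupervised reduction
    ("lda", LinearDiscriminantAnalysis(n_components=6)),   # stage 2: supervised projection (at most classes - 1)
    ("clf", SVC(kernel="rbf")),                            # one possible supervised classifier
])

# Cross-validated accuracy; grouping folds by speaker instead would
# approximate the speaker-independent (SI) evaluation described above.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.3f}")
```
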



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.