Evaluating the Effectiveness of Parameter Tuning for Support Vector Machine on Voice Pathology Database
DOI: https://doi.org/10.11113/elektrika.v24n2.628

Keywords: Feature Extraction, Kernel, SVD, SVM, Voice Pathology

Abstract
This study evaluates the effectiveness of parameter tuning for a Support Vector Machine (SVM) in the classification of voice pathologies using the Saarbrucken Voice Database (SVD). A balanced dataset of 200 pathological and 200 non-pathological voice samples was used. Feature extraction was performed to derive key acoustic properties from the voice samples, and the SVM classifier was evaluated with four kernel functions: linear, polynomial, radial basis function (RBF), and sigmoid. A grid search was employed to tune the hyperparameters, focusing on the regularization parameter (C) and the kernel coefficient gamma (γ). The performance of each kernel was visualized using heatmaps of accuracy scores for different parameter combinations. Results showed that the linear kernel, with C = 0.1 and γ = 1, achieved the highest accuracy of 64%, making it the most effective kernel for this classification task, whereas the polynomial, RBF, and sigmoid kernels performed worse. These findings emphasize the importance of selecting an appropriate kernel and parameter values in SVM-based voice pathology detection.
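As a rough illustration of the tuning procedure described above, the sketch below runs a grid search over the four kernels and ranges of C and γ using scikit-learn. The feature matrix here is a random placeholder standing in for the acoustic features extracted from the SVD samples, and the specific value grids are assumptions for demonstration, not the paper's exact settings.

# Minimal sketch of an SVM grid search over kernel, C, and gamma.
# Assumes scikit-learn; X is a placeholder for the extracted acoustic
# features and y for the binary labels (1 = pathological, 0 = healthy).
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Placeholder data standing in for the 400-sample balanced SVD subset.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 13))     # e.g. 13 acoustic features per sample
y = np.repeat([0, 1], 200)         # 200 non-pathological, 200 pathological

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Kernels named in the study; the C and gamma grids are illustrative.
param_grid = {
    "svc__kernel": ["linear", "poly", "rbf", "sigmoid"],
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": [0.01, 0.1, 1, 10],
}

pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])
search = GridSearchCV(pipe, param_grid, scoring="accuracy", cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Test accuracy: %.2f" % search.score(X_test, y_test))

Accuracy heatmaps of the kind described in the abstract can then be produced per kernel by reshaping the mean_test_score entries of search.cv_results_ over the (C, γ) grid.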
License
Copyright of articles that appear in Elektrika belongs exclusively to Penerbit Universiti Teknologi Malaysia (Penerbit UTM Press). This copyright covers the rights to reproduce the article, including reprints, electronic reproductions, or any other reproductions of similar nature.













