Analysis of Sonic Effects of Music from a Comprehensive Dataset on Audio Features
DOI: https://doi.org/10.11113/elektrika.v20n1.233

Keywords: Audio features, Music, Acousticness, Speechiness, Instrumentalness, Danceability

Abstract
Music has long had a tremendous impact on human lives, and its ability to access and activate a wide range of human emotions is remarkable. Audio features provide information that sound engineers, music producers, and artists can use to refine their craft and engage music listeners across the globe. This paper presents an analysis of audio features retrieved through the Spotify Web API and Spotipy (a Python client for the Spotify Web API). The dataset was curated from the audio features of over 160,000 songs released between 1921 and 2020. For clarity, statistical descriptions and probability distribution functions of the audio features are reported, and the interrelationships and correlations among the various audio features are demonstrated. Overall, the dataset should find useful applications in classical and future music production.
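As a minimal illustration of the statistical-description and correlation steps mentioned in the abstract (a sketch, not the paper's actual pipeline), the snippet below computes per-feature summaries and a Pearson correlation matrix with pandas. The feature values here are made up for demonstration; in practice each row would be fetched from the Spotify Web API, e.g. via Spotipy's `audio_features` wrapper.

```python
import pandas as pd

# Toy sample of Spotify-style audio features; values are illustrative,
# NOT taken from the paper's 160,000-song dataset.
tracks = pd.DataFrame({
    "acousticness":     [0.92, 0.15, 0.40, 0.05, 0.77],
    "danceability":     [0.30, 0.80, 0.55, 0.90, 0.35],
    "instrumentalness": [0.85, 0.00, 0.10, 0.02, 0.60],
    "speechiness":      [0.04, 0.20, 0.08, 0.30, 0.05],
})

# Statistical description: count, mean, std, quartiles for each feature.
summary = tracks.describe()

# Pairwise Pearson correlation among the audio features.
corr = tracks.corr(method="pearson")

print(summary.loc["mean"])
print(corr.round(2))
```

On this toy sample, acousticness and danceability come out strongly negatively correlated, which mirrors the kind of interrelationship the paper reports across its full dataset.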
License
Copyright of articles that appear in Elektrika belongs exclusively to Penerbit Universiti Teknologi Malaysia (Penerbit UTM Press). This copyright covers the rights to reproduce the article, including reprints, electronic reproductions, or any other reproductions of similar nature.