Analysis of Sonic Effects of Music from a Comprehensive Dataset on Audio Features
Keywords: Audio features, Music, Acousticness, Speechiness, Instrumentalness, Danceability
Abstract: Music has long had a tremendous impact on human lives. Its ability to access and activate a wide range of human emotions is remarkable. To this end, audio features provide sound engineers, music producers, and artists with information they can use to refine their craft and appeal to music listeners across the globe. In this paper, an analysis of audio features retrieved from the Spotify Web API endpoint using Spotipy (a Python client for the Spotify Web API) is presented. The dataset was curated from the audio features of over 160,000 songs released between 1921 and 2020. For clarity, statistical descriptions and probability distribution functions of the audio features are reported, and the interrelationships and correlations among the various audio features are demonstrated. Overall, the dataset should find useful applications in classical and future music production.
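As an illustration of the kind of analysis the abstract describes, the sketch below computes per-feature descriptive statistics and a pairwise Pearson correlation matrix with pandas. The feature names follow the Spotify Web API's audio-features schema, but the values are invented for this example and do not come from the paper's dataset.

```python
import pandas as pd

# Hypothetical sample of Spotify-style audio features (values invented).
tracks = pd.DataFrame({
    "acousticness":     [0.91, 0.12, 0.45, 0.80, 0.05],
    "speechiness":      [0.04, 0.33, 0.06, 0.05, 0.10],
    "instrumentalness": [0.85, 0.00, 0.10, 0.70, 0.02],
    "danceability":     [0.30, 0.75, 0.60, 0.35, 0.88],
})

# Statistical description: count, mean, std, min/max, and quartiles per feature.
summary = tracks.describe()

# Pairwise Pearson correlation among the audio features.
corr = tracks.corr()

print(summary.loc["mean"])
print(corr.round(2))
```

In a full study, the same two calls (`describe` and `corr`) scale directly to a frame holding all 160,000-plus tracks; histograms of each column would give the empirical probability distributions the paper reports.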
Copyright of articles that appear in Elektrika belongs exclusively to Penerbit Universiti Teknologi Malaysia (Penerbit UTM Press). This copyright covers the rights to reproduce the article, including reprints, electronic reproductions, or any other reproductions of similar nature.