Classification of Iranian traditional musical modes (DASTGÄH) with artificial neural network

Authors

1 Biomechatronics and Cognitive Sciences Research Lab, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran

2 Biomechatronics and Cognitive Engineering Research Lab, School of Mechanical Engineering, Iran University of Science and Technology

Abstract

The concept of Iranian traditional musical modes, known as DASTGÄH, forms the basis of the Iranian traditional music system and defines seven DASTGÄHs. Distinguishing these modes is not an easy task and is commonly carried out by listeners experienced in this field. Applying artificial intelligence to such a classification therefore requires combining basic knowledge of traditional music with mathematical concepts and tools. In this paper, it is shown that the Iranian traditional musical modes (DASTGÄH) can be classified with acceptable error. The seven Iranian musical modes, SHÖR, HOMÄYÖN, SEGÄH, CHEHÄRGÄH, MÄHÖR, NAVÄ and RÄST-PANJGÄH, are studied for two musical instruments, the NEY and the violin, as well as for vocal songs. For the classification, a multilayer perceptron neural network trained with a supervised learning method is used. The inputs to the neural network are the twenty strongest peaks of the frequency spectrum of each musical piece in the three aforementioned categories. The results indicate that the trained neural networks can identify the DASTGÄH of test tracks with accuracies of about 65% for the NEY, 72% for the violin, and 56% for the vocal songs.
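For illustration, the feature extraction described above (the twenty strongest peaks of a track's frequency spectrum) can be sketched as follows. This is a minimal Python sketch, assuming the track is available as a mono or stereo WAV file and using a plain magnitude FFT with simple local-maximum peak picking; the function name, the peak-picking rule, and any preprocessing are assumptions for illustration, not the authors' reported implementation.

```python
# Minimal sketch: extract the 20 strongest spectral peak frequencies of a track.
# Assumptions (not from the paper): WAV input, plain magnitude FFT over the whole
# track, and simple local-maximum peak picking.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def top_spectral_peaks(wav_path, n_peaks=20):
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                       # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(samples))    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    idx, _ = find_peaks(spectrum)              # local maxima of the spectrum
    strongest = idx[np.argsort(spectrum[idx])[-n_peaks:]]
    return np.sort(freqs[strongest])           # peak frequencies, ascending
```

Each track then yields a fixed-length 20-dimensional feature vector that can be fed to the classifier.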

Highlights

  •  Classification of the Iranian traditional musical modes is performed.
  •  All seven DASTGÄHs have been studied for NEY, Violin and vocal song tracks.
  •  A multilayer perceptron ANN trained with a supervised learning method has been used (see the sketch after this list).
  •  Classification accuracy is found to be about 65% for NEY, 72% for violin and 56% for vocal song.
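The supervised multilayer perceptron classification step could look like the following sketch. The feature files, hidden-layer size, train/test split, and training settings are assumptions for illustration; the paper's actual network architecture and training configuration are not reproduced here.

```python
# Minimal sketch of a supervised MLP classifier over the 20-peak feature vectors,
# using scikit-learn. Layer size, split ratio, and file names are illustrative
# assumptions, not the configuration reported in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

DASTGAHS = ["SHÖR", "HOMÄYÖN", "SEGÄH", "CHEHÄRGÄH", "MÄHÖR", "NAVÄ", "RÄST-PANJGÄH"]

# X: one 20-dimensional peak-frequency vector per track (hypothetical file);
# y: integer DASTGÄH label 0..6 for each track (hypothetical file).
X = np.load("peak_features.npy")
y = np.load("dastgah_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

mlp = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```

Training one such network per category (NEY, violin, vocal song) and evaluating on held-out tracks corresponds to the accuracies reported above.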


