Music Playlist Generation using Facial Expression Analysis and Task Extraction
Abstract
In the day-to-day stressful environment of the IT industry, working professionals rarely find appropriate time to relax. To keep a person stress-free, various technical and non-technical stress-relieving methods are now being adopted. People working on computers can be categorized as administrators, programmers, and so on, each of whom requires different ways to unwind. A person's work pressure and vexation of any kind are reflected in their emotions, and facial expressions are the key to analyzing their current state of mind. In this paper, we discuss a user-intuitive smart music player. The player captures the facial expressions of a person working at the computer and identifies their current emotion; music is then played to help the user relax. The player also takes into account the foreground processes the person is executing on the computer. Since various kinds of music can boost one's enthusiasm, an ideal playlist of songs is created and played for the person based on the tasks being executed on the system and the emotions they currently carry. The person can browse and modify the playlist, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.
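The abstract describes a pipeline that combines a detected emotion with the user's foreground task to choose music. A minimal sketch of that selection step is shown below; all names, mappings, and process lists are illustrative assumptions, not the authors' actual implementation (the paper's face-detection and emotion-classification stages are omitted here).

```python
# Hypothetical sketch of the playlist-selection step: combine a detected
# emotion with the category of the user's foreground task to choose a
# music genre. Every mapping below is an illustrative assumption.

# Assumed mapping from (emotion, task category) to a genre.
GENRE_TABLE = {
    ("stressed", "programming"): "ambient",
    ("stressed", "administration"): "classical",
    ("sad", "programming"): "uplifting pop",
    ("neutral", "programming"): "instrumental",
}

def classify_task(process_name: str) -> str:
    """Very rough foreground-task categorisation by process name."""
    coding_tools = {"code.exe", "idea64.exe", "vim", "eclipse.exe"}
    admin_tools = {"mmc.exe", "taskmgr.exe", "services.exe"}
    name = process_name.lower()
    if name in coding_tools:
        return "programming"
    if name in admin_tools:
        return "administration"
    return "other"

def pick_genre(emotion: str, process_name: str) -> str:
    """Fall back to a relaxing default when no specific rule matches."""
    task = classify_task(process_name)
    return GENRE_TABLE.get((emotion, task), "soft instrumental")

print(pick_genre("stressed", "code.exe"))   # -> ambient
print(pick_genre("happy", "notepad.exe"))   # -> soft instrumental
```

In a full system, the emotion would come from a facial-expression classifier and the process name from the operating system's foreground-window API; the table lookup above only illustrates how the two signals might be fused.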
DOI: http://dx.doi.org/10.17951/ai.2016.16.2.1
Date of publication: 2017-12-22 09:38:04
Date of submission: 2017-12-18 13:57:14
Copyright (c) 2017 Arnaja Sen, Dhaval Popat, Hardik Shah, Priyanka Kuwor, Era Johri
This work is licensed under a Creative Commons Attribution 4.0 International License.