Digital media is now a significant part of daily life: people use it to stay motivated and to find content that matches their mood. Yet selecting music that suits a particular emotional state from the vast number of available options takes considerable effort, and today’s media players do not prioritize the user’s emotional state when making recommendations. Emotion reflects an individual’s behavior and state of mind, and digital media has the power to shift a mental state from negative to positive. The objective of this paper is to extract features from the human face, detect emotion, age, and gender, and suggest media according to the detected features. These attributes are interpreted from facial expressions captured through a webcam. We use a CNN classifier to build a neural network model, which is trained to detect mood, age, and gender from facial expressions using OpenCV. A system that generates a media playlist based on the detected emotion, age, and gender yields better recommendations.
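Since the abstract describes mapping detected emotion, age, and gender to a media playlist, the final recommendation step might be sketched as follows. This is a minimal illustration only: the emotion labels, age buckets, and playlist table are hypothetical placeholders, not the paper's actual categories, and the upstream CNN predictions are assumed to be available already.

```python
# Minimal sketch of the playlist-selection step, assuming the CNN has
# already produced an emotion label, an age estimate, and a gender label.
# All category names and playlists below are illustrative assumptions.

# Hypothetical playlist table keyed by (emotion, age group).
PLAYLISTS = {
    ("happy", "adult"): ["upbeat pop", "dance"],
    ("sad", "adult"): ["soft acoustic", "uplifting ballads"],
    ("angry", "adult"): ["calming instrumental"],
    ("neutral", "child"): ["nursery rhymes"],
}


def age_group(age: int) -> str:
    """Bucket a predicted age into a coarse group (thresholds are assumptions)."""
    if age < 13:
        return "child"
    if age < 60:
        return "adult"
    return "senior"


def recommend(emotion: str, age: int, gender: str) -> list[str]:
    """Pick a playlist from the detected attributes, with a default fallback.

    Gender could further refine the choice; it is accepted but unused in
    this simplified table-lookup sketch.
    """
    key = (emotion, age_group(age))
    return PLAYLISTS.get(key, ["general mix"])
```

For example, `recommend("sad", 25, "female")` returns the "sad adult" playlist, while any combination not covered by the table falls back to a general mix.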
This work is licensed under a Creative Commons Attribution 4.0 International License.