SMART VIRTUAL PAINTER
Abstract
The main goal of computer vision is to recognize objects of varying size, shape, and position. The major
challenges in computer vision are illumination changes and the viewpoint of the object; concerning these, multiple
studies on detecting and recognizing objects have reported high accuracy and precision on such tasks. Using
online object detection, the proposed work allows the user to track the movement of any colored object
of his or her choice. The user can also choose the colors to be displayed. When the
application runs, the camera is activated, enabling the user to draw in the air simply by waving the tracked
object. The drawing is simultaneously visible on a white window. The user
can choose any of the displayed colors to draw with and can clear the screen when needed.
The application is built with Python and OpenCV using computer vision techniques.
This work is licensed under a Creative Commons Attribution 4.0 International License.