Design and Analysis of Multipliers for DNN application using approximate 4:2 Compressors


Shubham Anjankar
Hemant Gillurkar
Pankaj Joshi
Pravin Dwaramwar

Abstract

The demand for deep learning applications on resource-constrained devices has boomed in recent years. Deep Neural Networks (DNNs), the leading method in these applications, are inherently error resilient. This allows Approximate Computing to be used for efficient computation, exploiting the accuracy-efficiency trade-off by replacing exact multipliers with approximate multipliers. In this paper we propose approximate compressors and compare them in different configurations of 8-bit integer Dadda multipliers in terms of error metrics and accuracy in real-life object classification applications. The approximate multipliers are designed using different compressors and used to perform multiplication in ResNet. We propose two approximate compressor designs, Design 1 and Design 2. The proposed 4:2 compressor designs produce more correct outputs and a lower Worst Case Relative Error (WCRE) in the range of 2-16. Our proposed 4:2 compressor Design 1 is utilized in the modified reduction circuitry of the Dadda multiplier and achieves an accuracy of 81.6% for the DNN application.
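For readers unfamiliar with the building block: an exact 4:2 compressor reduces five equal-weight input bits to one sum bit and two weight-2 outputs, so that x1+x2+x3+x4+cin = sum + 2(carry + cout). The sketch below uses a *hypothetical* approximation for illustration (it is not the paper's Design 1 or Design 2: it drops cout and saturates the bit count at 3) to show how error metrics such as error rate and Worst Case Relative Error can be measured by exhaustive simulation over all 32 input combinations.

```python
from itertools import product

def exact_4to2(x1, x2, x3, x4, cin):
    # Exact 4:2 compressor: x1+x2+x3+x4+cin == sum + 2*(carry + cout).
    s = x1 + x2 + x3 + x4 + cin
    high = s >> 1                                  # 0..2, split across carry/cout
    return s & 1, int(high >= 1), int(high >= 2)   # (sum, carry, cout)

def approx_4to2(x1, x2, x3, x4, cin):
    # Hypothetical approximation (NOT the paper's Design 1/Design 2):
    # drop cout entirely and saturate the count at 3, so only the rare
    # inputs with four or five ones are mis-compressed.
    s = min(x1 + x2 + x3 + x4 + cin, 3)
    return s & 1, (s >> 1) & 1, 0

errors, total, wcre = 0, 0, 0.0
for bits in product((0, 1), repeat=5):
    total += 1
    exact_val = sum(bits)                          # true count of ones
    s, c, co = approx_4to2(*bits)
    approx_val = s + 2 * (c + co)
    if approx_val != exact_val:
        errors += 1
        wcre = max(wcre, abs(approx_val - exact_val) / exact_val)

print(f"error rate = {errors}/{total}, WCRE = {wcre}")
```

For this illustrative design the exhaustive sweep reports 6 erroneous outputs out of 32 inputs and a WCRE of 0.4 (the input with five ones compresses to 3 instead of 5); the paper's proposed designs would be evaluated with the same kind of sweep.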


How to Cite
Anjankar, S., Hemant Gillurkar, Joshi, P., & Dwaramwar, P. (2022). Design and Analysis of Multipliers for DNN application using approximate 4:2 Compressors. International Journal of Next-Generation Computing, 13(5). https://doi.org/10.47164/ijngc.v13i5.918

References

  1. Aktas, K., Ignjatovic, V., Ilic, D., Marjanovic, M., and Anbarjafari, G. 2022. Deep convolutional neural networks for detection of abnormalities in chest X-rays trained on the very large dataset. Signal, Image and Video Processing. DOI: https://doi.org/10.1007/s11760-022-02309-w
  2. Ansari, M. S., Mrazek, V., Cockburn, B. F., Sekanina, L., Vasicek, Z., and Han, J. 2020. Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 28, 2(2), 317–328. DOI: https://doi.org/10.1109/TVLSI.2019.2940943
  3. Asakawa, E., Kaneko, N., Hasegawa, D., and Shirakawa, S. 2022. Evaluation of text-to-gesture generation model using convolutional neural network. Neural Networks 151,365–375. DOI: https://doi.org/10.1016/j.neunet.2022.03.041
  4. Brodley, C. E., Rebbapragada, U., Small, K., and Wallace, B. 2012. Challenges and Opportunities in Applied Machine Learning. AI Magazine 33, 1 (mar 15), 11–24. DOI: https://doi.org/10.1609/aimag.v33i1.2367
  5. ehw fit. 2020. Github - ehw-fit/tf-approximate: Approximate layers - TensorFlow extension. https://github.com/ehw-fit/tf-approximate. [Online; accessed 2022-10-19].
  6. Jongkind, D. 2017. Book Review: Going Deeper with New Testament Greek: Andreas J. Köstenberger, Benjamin L. Merkle, Robert L. Plummer, Going Deeper with New Testament Greek. The Expository Times 128, 5 (2), 252–252. DOI: https://doi.org/10.1177/0014524616680778k
  7. Klein, S. C., Kantic, J., and Blume, H. 2021. Fixed Point Analysis Workflow for efficient Design of Convolutional Neural Networks in Hearing Aids. Current Directions in Biomedical Engineering 7, 2 (oct 1), 787–790. DOI: https://doi.org/10.1515/cdbme-2021-2201
  8. Krizhevsky, A., Sutskever, I., and Hinton, G. E. 2017. Imagenet classification with deep convolutional neural networks. Communications of the ACM 60, 6 (may 24), 84–90. DOI: https://doi.org/10.1145/3065386
  9. Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11, 2278–2324. DOI: https://doi.org/10.1109/5.726791
  10. Manikantta Reddy, K., Vasantha, M., Nithin Kumar, Y., and Dwivedi, D. 2019. Design and analysis of multiplier using approximate 4-2 compressor. AEU - International Journal of Electronics and Communications 107, 89–97. DOI: https://doi.org/10.1016/j.aeue.2019.05.021
  11. Momeni, A., Han, J., Montuschi, P., and Lombardi, F. 2015. Design and Analysis of Approximate Compressors for Multiplication. IEEE Transactions on Computers 64, 4 (4), 984–994. DOI: https://doi.org/10.1109/TC.2014.2308214
  12. Mrazek, V., Sekanina, L., and Vasicek, Z. 2020. Using Libraries of Approximate Circuits in Design of Hardware Accelerators of Deep Neural Networks. 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS). DOI: https://doi.org/10.1109/AICAS48895.2020.9073837
  13. Shafiq, M. and Gu, Z. 2022. Deep Residual Learning for Image Recognition: A Survey. Applied Sciences 12, 18 (sep 7), 8972. DOI: https://doi.org/10.3390/app12188972
  14. Shobana, G., Chithiraimuthu, R., and Adhithyavel, A. 2022. Performance analysis and implementation of approximate multipliers on spartan 6 FPGA. International journal of health sciences, 10633–10652. DOI: https://doi.org/10.53730/ijhs.v6nS1.7546
  15. Sze, V., Chen, Y.-H., Yang, T.-J., and Emer, J. S. 2017. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. DOI: https://doi.org/10.1109/JPROC.2017.2761740
  16. Vaughan, O. 2022. Accelerating deep learning with precision. Nature Electronics 5, 7 (7), 411–411. DOI: https://doi.org/10.1038/s41928-022-00813-y