An Approach for generating best possible questions from the given text using Natural Language Processing


Neha Bhagwatkar
Kimaya Vaidya
Aditi Singh
Sneha Borikar
Hirkani Padwad

Abstract

The ability to ask pertinent questions is a crucial skill for every person. By automating the process of question formation, an automatic question generator reduces the time and effort required for manual question creation. Besides benefiting educational institutions such as schools and colleges, automated question generation can be applied in chatbots and automated tutoring systems. Question generation remains an active research area in NLP, with work under way in many languages to improve accuracy. The goal of an automatic question generator is to produce syntactically and semantically correct questions that are valid with respect to the given input. We adopted the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model for this task, along with Python packages including NLTK, spaCy, and PKE. To test our findings, we evaluated the validity and relevance of the generated questions through human judgment. The generated questions captured several of the peculiarities of English well enough to be readily understood by a human reader.
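
As a rough illustration of the kind of pipeline the abstract outlines, the minimal Python sketch below combines keyphrase extraction with PKE and sentence tokenization with NLTK to build simple fill-in-the-blank questions. It is only a sketch under stated assumptions: the function name generate_questions, the gap token, and the fill-in-the-blank step are illustrative placeholders, and the BERT-based generation stage of the actual system is not reproduced here. It also assumes PKE's spaCy language model (e.g. en_core_web_sm) and NLTK's punkt tokenizer data are installed.

```python
# Minimal sketch of a keyphrase-driven question generator.
# NOTE: illustrative only -- this stands in for, and does not reproduce,
# the BERT-based question generation step described in the paper.
import re

import pke
from nltk.tokenize import sent_tokenize


def generate_questions(text, n_keyphrases=5):
    # 1. Extract candidate answer phrases with PKE's MultipartiteRank.
    extractor = pke.unsupervised.MultipartiteRank()
    extractor.load_document(input=text, language='en')
    extractor.candidate_selection()
    extractor.candidate_weighting()
    keyphrases = [kp for kp, _ in extractor.get_n_best(n=n_keyphrases)]

    # 2. For each sentence, blank out the first keyphrase it contains,
    #    producing a (question, answer) pair.
    questions = []
    for sentence in sent_tokenize(text):
        for kp in keyphrases:
            pattern = re.compile(re.escape(kp), re.IGNORECASE)
            if pattern.search(sentence):
                questions.append((pattern.sub("_____", sentence), kp))
                break
    return questions


if __name__ == "__main__":
    sample = ("Natural Language Processing is a field of artificial "
              "intelligence that helps computers understand human language. "
              "Question generation is one of its applications.")
    for question, answer in generate_questions(sample):
        print(question, "->", answer)
```

In a fuller system, the blanking step would be replaced by a trained model (such as the BERT-based component used in the paper) that rewrites the sentence into an interrogative form, and the keyphrases would serve as the expected answers.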


How to Cite
Neha Bhagwatkar, Kimaya Vaidya, Aditi Singh, Sneha Borikar, & Hirkani Padwad. (2023). An Approach for generating best possible questions from the given text using Natural Language Processing. International Journal of Next-Generation Computing, 14(1). https://doi.org/10.47164/ijngc.v14i1.1044
