Feature Selection for Ranking using Heuristics based Learning to Rank using Machine Learning


Sushilkumar Chavhan
Dr. R. C. Dharmik

Abstract

Machine-learning-based ranking is applied in virtually every field, and ranking problems are commonly addressed with learning-to-rank (LTR)
techniques. In this work, we propose a heuristics-based LTR model for information retrieval. Several recent
algorithms tackle the problem of feature selection in ranking. The proposed model makes use of
simulated annealing and Principal Component Analysis for document retrieval with learning to rank. The
simulated annealing heuristic is used for feature selection, to test whether it improves the results, and the feature
extraction step helps to find a minimal subset of features that yields better results. The core idea of the proposed
framework is to apply k-fold cross-validation over the training queries both in the simulated annealing step and in the
chosen feature selection method to extract features, and then, using only the training queries together with the validation
and test queries, to build a learning model with LTR. Standard evaluation measures are used to verify the
significance of the improvement achieved by the proposed model. The performance of the proposed model is measured
on selected benchmark datasets, and the improvements in the results are compared against recent high-performing pairwise
algorithms.
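
The following is a minimal sketch of the simulated-annealing feature-selection loop with k-fold cross-validation over training queries described above. It assumes a LETOR-style dataset (one feature vector, graded relevance label, and query id per query-document pair); the function names (ndcg_at_k, cv_objective, sa_feature_selection), the ridge least-squares scorer standing in for the pairwise LTR algorithm, the NDCG@10 objective, and the cooling schedule are all illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: SA over binary feature masks, scored by k-fold
# cross-validated NDCG@10 on the training queries. The scorer is a placeholder
# for the actual pairwise LTR model used in the paper.
import numpy as np
from sklearn.model_selection import GroupKFold


def ndcg_at_k(scores, labels, k=10):
    """NDCG@k for one query from predicted scores and graded relevance labels."""
    order = np.argsort(-scores)[:k]
    gains = (2.0 ** labels[order] - 1) / np.log2(np.arange(2, len(order) + 2))
    ideal = np.sort(labels)[::-1][:k]
    ideal_gains = (2.0 ** ideal - 1) / np.log2(np.arange(2, len(ideal) + 2))
    return gains.sum() / ideal_gains.sum() if ideal_gains.sum() > 0 else 0.0


def cv_objective(mask, X, y, qids, n_folds=5):
    """Mean NDCG@10 over k query-level folds for one candidate feature subset."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    scores = []
    for train_idx, val_idx in GroupKFold(n_splits=n_folds).split(Xs, y, groups=qids):
        # Placeholder scorer: ridge-regularised least squares on relevance labels.
        w = np.linalg.solve(
            Xs[train_idx].T @ Xs[train_idx] + 1e-3 * np.eye(Xs.shape[1]),
            Xs[train_idx].T @ y[train_idx],
        )
        preds = Xs[val_idx] @ w
        for q in np.unique(qids[val_idx]):
            sel = qids[val_idx] == q
            scores.append(ndcg_at_k(preds[sel], y[val_idx][sel]))
    return float(np.mean(scores))


def sa_feature_selection(X, y, qids, n_iter=200, t0=0.1, cooling=0.98, seed=0):
    """Simulated annealing over binary feature masks; returns the best mask found."""
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, X.shape[1])
    cur_score = cv_objective(mask, X, y, qids)
    best_mask, best_score, temp = mask.copy(), cur_score, t0
    for _ in range(n_iter):
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= 1  # flip one feature in or out
        cand_score = cv_objective(cand, X, y, qids)
        # Accept better subsets always; worse ones with temperature-controlled probability.
        if cand_score > cur_score or rng.random() < np.exp((cand_score - cur_score) / temp):
            mask, cur_score = cand, cand_score
        if cur_score > best_score:
            best_mask, best_score = mask.copy(), cur_score
        temp *= cooling
    return best_mask, best_score
```

In such a setup, the mask returned by sa_feature_selection would select the reduced feature set on which the final LTR model is trained and then evaluated on the validation and test queries with the standard measures mentioned above.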


How to Cite
Sushilkumar Chavhan, & Dr. R. C. Dharmik. (2022). Feature Selection for Ranking using Heuristics based Learning to Rank using Machine Learning. International Journal of Next-Generation Computing, 13(5). https://doi.org/10.47164/ijngc.v13i5.958
