International Journal of Next-Generation Computing
http://ijngc.perpetualinnovation.net/index.php/ijngc

The International Journal of Next-Generation Computing (IJNGC) is a peer-reviewed journal aimed at providing a platform for researchers to showcase and disseminate high-quality research in the domain of next-generation computing. With the introduction of new computing paradigms such as cloud computing, IJNGC promises to be a high-quality and highly competitive dissemination forum for new ideas, technology focus, research results, and discussions in these areas.

Online ISSN: 0976-5034
Print ISSN: 2229-4678
Language: en-US
Editor-in-Chief: [email protected]
Technical support: [email protected]
Published: Tue, 17 Dec 2024 14:44:44 +0530

Agent Enhancement using Deep Reinforcement Learning Algorithms for Multiplayer game (Slither.io)
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1586

Developing a self-learning model for a game is challenging because the environment changes constantly, so the model must make decisions in real time based on the current situation. The agent has to learn the environment and take actions based on its inferences; each action earns a positive or negative reward, and the agent learns from that reward to behave better in the environment. This work trains an agent with deep reinforcement learning algorithms to play a multiplayer online game, Slither.io. We use an OpenAI Universe environment to collect raw image inputs from sample gameplay as training data. The agent learns the current state of the environment and the positions of the other players (snakes), then acts by choosing a direction of movement. To compare our model against existing systems and a random policy, we use deep Q-learning and actor-critic approaches such as Proximal Policy Optimisation (PPO) with reward shaping and a replay buffer. Among these algorithms, the PPO agent shows a significant improvement in score over a range of episodes; it learns quickly, and its reward progression is higher than that of the other techniques.

Authors: Rajalakshmi Sivanaiah, Abrit Pal Singh, Aviansh Gupta, Ayush Nanda
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530
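As a rough illustration of the actor-critic training loop described in the abstract above, the sketch below trains a PPO agent with Stable-Baselines3 on a standard Gymnasium task. CartPole-v1 stands in for the Slither.io / OpenAI Universe setup, which is not readily reproducible here; the paper's pixel inputs, reward shaping, and replay buffer are not reproduced, so treat this as a minimal sketch under those assumptions rather than the authors' pipeline.

```python
# pip install gymnasium stable-baselines3
import gymnasium as gym
from stable_baselines3 import PPO

# CartPole-v1 stands in for the Slither.io environment used in the paper.
env = gym.make("CartPole-v1")

# A pixel-based observation space (as in the paper) would use "CnnPolicy";
# CartPole's vector observations use "MlpPolicy".
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)

# Roll out one greedy episode with the trained policy.
obs, _ = env.reset()
episode_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(int(action))
    episode_reward += float(reward)
    done = terminated or truncated
print(f"episode reward: {episode_reward}")
```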
Secure Healthcare Monitoring and Attack Detection Framework using ELUS-BILSTM and STECAES
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1545

The patterns of delivering health-centric services have been transformed by advances and innovations in mobile and wireless communication technologies, including the Internet of Things (IoT). Because attacks are increasing rapidly, prevailing health monitoring systems do not give doctors an accurate alerting mechanism. This work therefore proposes a novel healthcare monitoring and attack detection system built on the Exponential Linear activation Unit-based Bidirectional Long Short-Term Memory (ELUS-BiLSTM) technique. The proposed methodology has three primary phases: attack detection, data security, and patient health monitoring. First, data are collected from the patient and features are extracted in the attack detection phase. The extracted features are then fed to the ELUS-BiLSTM classifier, which labels the data as attacked or non-attacked. Non-attacked data are encrypted with the Skew Tent Elliptic Curve Advanced Encryption Standard (STECAES), while attacked data are stored in a log file. Finally, the encrypted data are used to generate fuzzy rules, and an alert message is sent to the doctor. Experimental results show that the proposed model outperforms prevailing methodologies.

Authors: Y. Jani, P. Raajan
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530

An efficient optimized encryption and compression techniques to improve medical image security and transmission in the cloud
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1718

As digital technology for disease diagnosis and analysis has advanced, medical images are increasingly sent over the Internet, and cloud computing plays a major role in low-cost data storage and sharing. In the healthcare industry, data security and privacy are key concerns with cloud computing: healthcare professionals must ensure that patient data is safe from hackers, unauthorized access, and theft. Encryption and compression techniques are used to secure confidential data and to store it at scale. This paper provides enhanced optimized encryption and efficient hybrid compression strategies for the cloud environment. The proposed model involves optimal key generation, encryption, compression, decompression, and decryption. To transfer data with high-speed cloud retrieval, we first propose a Huffman-Fano hybrid entropy coding approach. The model then applies an Elliptic Curve Coding (ECC) based encryption technique to secure compressed medical image transmission. The shared secret keys are generated optimally by a dynamic group-based cooperative optimization algorithm that uses encryption quality measures; the combination is called the DGBCO-ECC model. On the receiving end, the image is decrypted and decompressed. Performance is validated using Mean Square Error, Standard Deviation of the Mean Error, Universal Image Quality Index, Structural Similarity Index, Entropy, Peak Signal-to-Noise Ratio, Compression Ratio, Data Rate Saving, and compression time. The experimental results show that the proposed model outperforms existing methods.

Authors: D. Jeni Jeba Seeli, K.K. Thanammal
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530
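To give a concrete feel for the entropy-coding stage mentioned in the compression paper above, here is a minimal plain-Huffman coder in Python. It is a generic sketch, not the authors' Huffman-Fano hybrid or the DGBCO-ECC encryption; the sample byte stream and the bit-count comparison are illustrative assumptions.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) for a byte stream."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: [frequency, unique tie-breaker, partial code table]
    heap = [[f, i, {sym: ""}] for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        lo[2] = {s: "0" + c for s, c in lo[2].items()}   # left branch
        hi[2] = {s: "1" + c for s, c in hi[2].items()}   # right branch
        counter += 1
        heapq.heappush(heap, [lo[0] + hi[0], counter, {**lo[2], **hi[2]}])
    return heap[0][2]

if __name__ == "__main__":
    sample = b"medical image pixel stream"        # stand-in for image bytes
    table = huffman_code(sample)
    encoded = "".join(table[b] for b in sample)
    print(f"original bits: {len(sample) * 8}, encoded bits: {len(encoded)}")
```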
Leveraging Deep Transfer Learning for Precision in Similar Color and Texture-Based Fruit Classification
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1592

India is among the world's largest producers of fruits such as bananas, papayas, mangoes, and guavas, and its agricultural production has consistently increased over the years. Even so, a massive gap remains between per capita demand and supply because of losses, including post-harvest losses. With adequate processing facilities, there is clear scope to reduce this post-harvest wastage. In recent years, research in cutting-edge technologies such as computer vision (CV), artificial intelligence, and image processing has played an important role in sorting and grading fruits. Fruits with similar colors and textures are especially difficult to identify. Deep learning networks can adapt to and recognize such complex patterns, particularly in visual tasks, and deep transfer learning helps achieve strong results quickly. This paper uses a deep transfer learning approach to classify fruits with similar color and texture, namely guava, avocado, lime, apple, pear, mango, and pomelo sweetie. The study introduces a novel model derived from integrating DenseNet, MobileNet, and EfficientNet architectures, and its performance is systematically assessed with different optimizers for a comprehensive evaluation of its efficacy. Simulation findings indicate that MobileNetV1, when paired with the Adam optimizer, surpasses the other models in training time, accuracy, and testing time.

Authors: Anita Bhatt, Maulin Joshi
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530
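A minimal transfer-learning sketch in Keras, assuming a frozen ImageNet-pretrained MobileNetV1 backbone with a small softmax head trained using the Adam optimizer, as in the best-performing configuration reported above. The dataset handles (train_ds, val_ds), image size, and head layers are assumptions for illustration, not the authors' exact pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # guava, avocado, lime, apple, pear, mango, pomelo sweetie
IMG_SIZE = (224, 224)

# ImageNet-pretrained MobileNetV1 backbone with its classifier head removed.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False     # freeze the backbone; train only the new head

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0),   # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be built with tf.keras.utils.image_dataset_from_directory
# on folders of fruit images resized to IMG_SIZE (integer labels assumed).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```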
A Smartphone based Automated Primary Screening of Oral Cancer based on Deep Learning
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1786

Oral cancer is becoming more common in low- and middle-income countries, and a lack of resources is one factor delaying its discovery in rural areas. Quickly obtaining information about any cancer is essential to stop the disease from spreading, so early identification is critical; this study addresses primary screening. Automated methods based on deep neural networks are used to capture the complex patterns involved in assessing oral cancer. The goal of this work is to develop an Android application that uses a deep neural network to categorize oral photographs into four groups: erythroplakia, leukoplakia, ulcer, and normal mouth. Convolutional neural networks and K-fold validation are used to create a customized Deep Oral Augmented Model (DOAM). Data augmentation techniques including shearing, scaling, rotation, and flipping are used to pre-process the images, and a convolutional neural network then extracts features from them. Optimal configurations of max-pooling layers, dropout, and activation functions yield the highest accuracies: using the "ELU" activation function with the RMSProp optimizer, the model achieves 96% validation accuracy, 96% precision, a 96% F1 score, and 68% testing accuracy. The model is then deployed via TensorFlow Lite in an Android application.

Authors: Rinkal Shah, Jyoti Pareek
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530

AeroNet: Efficient YOLOv7 for Tiny-Object Detection in UAV Imagery
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1789

Detecting multiple tiny objects from diverse perspectives using unmanned aerial vehicles (UAVs) and onboard edge devices is a significant challenge in computer vision. To address it, this study proposes AeroNet, a lightweight and efficient detection algorithm based on YOLOv7 (You Only Look Once version 7). The algorithm features the LHGNet (Lightweight High-Performance GhostNet) backbone, an advanced feature extraction network that integrates depth-wise separable convolution and channel shuffle modules. These modules enable deeper exploration of network features, promoting the fusion of local detail information and channel characteristics. The research also introduces the LGS (Lightweight Gradient-Sensitive) bottleneck and the LGSCSP (Lightweight Gradient-Sensitive Cross Stage Partial Network) fusion module in the neck to reduce computational complexity while maintaining accuracy. Structural modifications and adjusted feature map sizes further enhance detection accuracy. Evaluated on the SkyFusion dataset, the method demonstrated a 25.0% reduction in parameter count and a 12.8% increase in mAP (0.5) compared to YOLOv7. These results underscore the effectiveness of the proposed enhancements in improving detection accuracy and model efficiency.

Authors: Sushmita Sheeba Dsa
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530
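The AeroNet abstract above attributes much of its efficiency to depth-wise separable convolutions and channel-shuffle modules. The PyTorch sketch below shows generic versions of those two building blocks; it is illustrative only and does not reproduce the paper's LHGNet backbone or its LGS/LGSCSP modules, and the channel count, group count, and activation are assumptions.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (ShuffleNet-style mixing)."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class LightweightBlock(nn.Module):
    """Toy unit combining a separable conv, channel shuffle, and a residual link."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.conv = DepthwiseSeparableConv(channels, channels)
        self.groups = groups

    def forward(self, x):
        return channel_shuffle(self.conv(x), self.groups) + x

if __name__ == "__main__":
    block = LightweightBlock(64)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)   # torch.Size([1, 64, 80, 80])
```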
Comparative Analysis of Denoising Methods to Improve Image Quality for Medical Visual Question Answering
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1773

Medical Visual Question Answering (MedVQA) is a research field at the intersection of medical imaging and natural language processing that aims to make medical image data more interpretable and accessible. Image quality is paramount for accurate diagnostics and for subsequent MedVQA tasks. This research applies different denoising methods to the VQA-RAD medical VQA dataset and analyzes their results in order to enhance image quality, exploring the effectiveness of traditional and deep learning-based methods for reducing noise in medical images and thereby improving the accuracy and reliability of MedVQA. We applied traditional denoising filters (Gaussian, median, average, and bilateral) and a deep learning-based convolutional autoencoder (CAE) to the VQA-RAD dataset to compare how effectively each method improves image quality. Comprehensive experiments and evaluations demonstrate that, compared with the traditional filters, the convolutional autoencoder better enhances medical image quality, preserving essential diagnostic information while suppressing unwanted noise. The denoised images are then used as input to improve accuracy on MedVQA tasks. The results of this research will help optimize medical imaging pipelines, ultimately benefiting clinical decision-making and healthcare outcomes.

Authors: Rikita D. Parekh, Hiteishi M. Diwanji
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530

Modern Thyroid Cancer Diagnosis: A Review of AI-Powered Algorithms for Detection and Classification
http://ijngc.perpetualinnovation.net/index.php/ijngc/article/view/1768

Thyroid nodules are among the most common abnormalities of the thyroid gland; they are often harmless (benign), but in a few unfortunate instances they may be malignant. This review explores recent advancements in artificial intelligence (AI) applied to thyroid cancer detection and classification, with a focus on machine learning, deep learning, and image processing techniques. We provide a comprehensive evaluation of AI applications across key imaging modalities, namely Ultrasonography (USG), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Single-Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET/CT), as well as cytopathological analysis using Fine Needle Aspiration Biopsy (FNAB). By critically examining studies on AI-driven preoperative assessments, we highlight improvements in diagnostic accuracy, sensitivity, specificity, and efficiency. The review also identifies current limitations of AI applications, including technical challenges and unresolved issues that hinder widespread clinical adoption. Although significant progress has been achieved, the integration of AI in clinical settings remains limited, as AI-based outputs currently serve as supportive tools rather than definitive diagnostic evidence. We discuss the potential of AI to transform thyroid cancer diagnostics by enhancing reliability and accessibility, while addressing the need for further research toward a unified, robust, and clinically trustworthy AI framework for thyroid cancer diagnosis.

Authors: Kuntala Boruah, Lachit Dutta, Manash Kapil Pathak
Copyright (c) 2024 International Journal of Next-Generation Computing. License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0)
Published: Tue, 17 Dec 2024 00:00:00 +0530
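For reference, the classical filters compared in the MedVQA denoising study listed above (Gaussian, median, average, bilateral) can be reproduced with OpenCV in a few lines. The sketch below applies them to a synthetic noisy image and reports PSNR; the image, noise level, and kernel sizes are assumptions standing in for VQA-RAD scans, and the convolutional autoencoder is omitted.

```python
import cv2
import numpy as np

# Synthetic noisy image standing in for a VQA-RAD radiology scan.
rng = np.random.default_rng(0)
clean = np.full((256, 256), 128, dtype=np.uint8)
cv2.circle(clean, (128, 128), 60, 200, -1)          # bright circular "structure"
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255).astype(np.uint8)

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio between two 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

filters = {
    "gaussian":  cv2.GaussianBlur(noisy, (5, 5), 0),
    "median":    cv2.medianBlur(noisy, 5),
    "average":   cv2.blur(noisy, (5, 5)),
    "bilateral": cv2.bilateralFilter(noisy, 9, 75, 75),
}

print(f"    noisy: PSNR = {psnr(clean, noisy):.2f} dB")
for name, out in filters.items():
    print(f"{name:>9}: PSNR = {psnr(clean, out):.2f} dB")
```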