Advanced Convolutional Neural Network with SqueezeNet Optimization and Transfer Learning for MRI-Based Brain Tumor Segmentation and Classification

R. Subhan Tilak Basha
Dr. B. P. Santosh Kumar

Abstract

Multimodal imaging plays a crucial role in the accurate detection, segmentation, and classification of brain tumors by leveraging complementary information from multiple MRI sequences. Each modality provides distinct insights into tumor structure, location, and pathology. In this study, we propose a deep learning system for efficient and accurate brain tumor detection from multimodal MRI. The proposed methodology integrates a novel Improved Pyramid Convolutional Neural Network (I-PCNN) with an enhanced Pyramid Nonlocal U-Net (PN-UNET) architecture, leveraging both local and global contextual features for precise tumor segmentation. Additionally, the Improved Pyramid Histogram of Oriented Gradients (I-PHOG) technique is introduced for robust feature extraction, preserving essential texture and structural information from MRI modalities such as T1, T2, and FLAIR. Through comprehensive experiments and comparative analyses against several state-of-the-art models, the proposed system demonstrates superior performance in terms of accuracy, sensitivity, specificity, and Dice coefficient, and its behavior across training epochs validates its learning stability and scalability. Simulation results show that, for multimodal fusion with PN-UNET, the proposed I-PCNN achieved the highest classification accuracy of 95.4%, outperforming models such as DCNNBT (94.0%) and Ensemble Deep Learning (94.8%). For individual MRI modalities, the I-PCNN also maintained high accuracy (94.2% for T1, 93.6% for T2, and 92.9% for FLAIR), indicating robustness across varying input types. In the feature extraction phase, the proposed I-PHOG with PN-UNET yielded the best performance, with 95.4% accuracy, 93.2% sensitivity, 97.1% specificity, and a Dice coefficient of 0.91, while keeping feature extraction time to 1.02 seconds, a favorable balance between accuracy and efficiency. In classification, the proposed I-PCNN with PN-UNET again led with 95.4% accuracy, 93.2% sensitivity, 97.1% specificity, and a Dice score of 0.91, surpassing RanMerFormer (93.2% accuracy, 0.88 Dice) and traditional CNNs (92.5% accuracy, 0.87 Dice).
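
The abstract does not detail the internals of PN-UNET's nonlocal components. For orientation, the block below is a standard nonlocal (self-attention) module of the kind such an architecture would interleave with its convolutional stages to capture global context; it is an illustrative PyTorch sketch, not the paper's implementation, and the class name and channel reduction are assumptions.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Self-attention over all spatial positions: every pixel
    attends to every other pixel, giving the global context a
    plain convolution cannot see. Illustrative sketch only."""
    def __init__(self, channels):
        super().__init__()
        self.inter = channels // 2          # reduced embedding width
        self.theta = nn.Conv2d(channels, self.inter, 1)  # queries
        self.phi = nn.Conv2d(channels, self.inter, 1)    # keys
        self.g = nn.Conv2d(channels, self.inter, 1)      # values
        self.out = nn.Conv2d(self.inter, channels, 1)    # restore width

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k / self.inter ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, self.inter, h, w)
        return x + self.out(y)  # residual connection keeps training stable

# example: apply global attention to a 64-channel feature map
y = NonLocalBlock(64)(torch.randn(1, 64, 32, 32))
```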
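
The I-PHOG descriptor builds on the classical pyramid-HOG idea: compute gradient-orientation histograms over successively finer grid subdivisions of the image and concatenate them, so coarse structure and fine texture are both retained. The improvements behind the "I-" prefix are not specified in this abstract; the sketch below shows only the plain spatial-pyramid HOG baseline using scikit-image, with pyramid depth, patch resolution, and cell size chosen purely for illustration.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def phog_descriptor(image, levels=3, orientations=9):
    """Spatial-pyramid HOG for a 2-D grayscale MRI slice.
    Level l splits the image into a 2**l x 2**l grid and
    describes each cell independently before concatenation."""
    features = []
    for level in range(levels):
        cells = 2 ** level
        h, w = image.shape
        ch, cw = h // cells, w // cells
        for i in range(cells):
            for j in range(cells):
                patch = image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
                patch = resize(patch, (64, 64))  # fixed size per cell
                features.append(hog(patch,
                                    orientations=orientations,
                                    pixels_per_cell=(16, 16),
                                    cells_per_block=(2, 2)))
    return np.concatenate(features)
```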
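
The reported accuracy, sensitivity, specificity, and Dice coefficient follow their standard confusion-matrix definitions. For reference, a minimal NumPy sketch of computing them from binary tumor masks (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity, specificity, and Dice for binary
    masks (1 = tumor, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # tumor voxels correctly found
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)     # false alarms
    fn = np.sum(~pred & truth)     # missed tumor voxels
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)           # true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    dice = 2 * tp / (2 * tp + fp + fn)     # overlap between masks
    return accuracy, sensitivity, specificity, dice
```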
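
The title also names SqueezeNet optimization with transfer learning, which the abstract does not elaborate. A common way to set this up, shown here purely as an assumed baseline rather than the authors' method, is to reuse ImageNet-pretrained SqueezeNet features from torchvision and retrain only a new 1x1-convolution classifier head; the class count of 4 below is illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

def squeezenet_for_tumors(num_classes=4, freeze_features=True):
    """Adapt ImageNet-pretrained SqueezeNet to tumor classes:
    keep the convolutional features, retrain only a new head."""
    model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False  # keep pretrained filters fixed
    # SqueezeNet classifies with a 1x1 conv rather than a linear layer
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model

# MRI slices are grayscale; replicating to 3 channels matches the
# pretrained network's expected input
logits = squeezenet_for_tumors()(torch.randn(1, 3, 224, 224))
```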
