Neuro Image Processor For Tumour Detection (NIPTI) using Machine Learning
 
Birendra Kumar Saraswat1*, Shashikant Katiyar2, Ashish Kumar Sharma3
1,2,3 Computer Science & Engineering, Raj Kumar Goel Institute of Technology, Ghaziabad, UP, India
1 Email: saraswatbirendra@gmail.com
2 Email: shashikantkatiyar2002@gmail.com
3 Email: workforashish007@gmail.com
Abstract - Deep learning algorithms, particularly Convolutional Neural Networks (CNNs), have been extensively applied to image recognition and classification tasks, achieving significant success in medical image analysis. Radiologists face challenges in accurately diagnosing brain tumours due to the variety of tumour cells. Recently, computer-aided diagnostic methods using magnetic resonance imaging (MRI) have been developed to aid in brain cancer diagnosis. CNNs play a crucial role in medical image analysis, including the detection of brain cancers, helping physicians overcome the difficulties in identifying brain tumours, especially in the early stages of brain haemorrhage. The proposed model categorizes brain images into four classes: Normal, Glioma, Meningioma, and Pituitary. It achieves a recall of 95%, an accuracy of 95.44%, and an F1-score of 95.36%.
Keywords—Deep learning; convolutional neural networks; brain tumor; classification; magnetic resonance imaging
I. INTRODUCTION
A brain tumor forms due to the uncontrolled growth and proliferation of cells within the skull, putting pressure on the brain and posing significant health risks. These tumors account for 85% to 90% of all primary Central Nervous System (CNS) tumors. Radiologists frequently use medical imaging techniques to detect tumors, with MRI being the preferred method because of its detailed imaging capabilities. However, manual detection of brain cancers can be slow and error-prone, depending on the radiologist's experience. Tumor grading is particularly challenging because of variations in shape, size, and appearance, as well as similarities between different diseases. Developing a successful Computer-Aided Diagnosis (CAD) system requires accurate feature extraction, which is a complex process demanding specific domain knowledge. Medical diagnoses rely heavily on image data from various biomedical devices using techniques such as X-ray, CT, and MRI. MRI, which measures the magnetic field vectors generated after exciting hydrogen nuclei in the body's water molecules with strong magnetic fields and radio-frequency pulses, is preferable to CT because it does not involve ionizing radiation. MRI can effectively detect brain tumors, but manually inspecting MRI images is time-consuming and impractical for large datasets.
Deep Learning (DL), a subset of Machine Learning (ML), uses multi-layered artificial neural networks to model complex data patterns. In computer vision, image classification involves training models to identify and categorize objects in images. Convolutional Neural Networks (CNNs), designed specifically for image processing, are commonly used for brain tumor detection. CNNs use convolutional layers to identify local patterns in images, enabling them to classify objects. By training on large datasets of labeled brain images, CNNs can classify new images as containing a tumor or not, aiding early diagnosis and treatment.
Image processing methods, especially segmentation, are widely used for tumor detection. Segmentation divides an image into homogeneous regions so that shapes within these regions can be identified. MRI or CT scans are used to examine brain structures, with MRI being preferred since it does not use radiation. Tumors are composed of various biological tissues, so multiple MRI modalities are required for comprehensive information, and combining these data enhances tumor segmentation. The MRI features used for segmentation typically include three weighted images (T1, T2, and Proton Density (PD)) for each axial slice. Segmentation techniques have been particularly successful in identifying affected tissues in the early stages of development.
II. RELATED WORKS
Deep Learning (DL) and Artificial Intelligence (AI) are pivotal in MRI image processing, particularly for tasks such as segmentation, recognition, and categorization, and they are extensively used in the classification and detection of brain cancer. Numerous studies have explored the identification and segmentation of brain MRI images, and a comprehensive review of the international literature was conducted to evaluate the application of DL to identifying and categorizing brain tumors. One study introduced a novel CapsNet architecture designed to access neighboring tissues while maintaining focus on the core target. This modified CapsNet architecture for brain tumor classification incorporates coarse tumor boundaries as additional inputs, significantly enhancing its performance compared with other methods.
Another group employed a convolutional neural network (CNN) for multimodal brain tumor categorization aimed at early diagnosis. Their CNN model achieved an accuracy of 92.66%, classifying brain tumors into five categories: Normal, Glioma, Meningioma, Pituitary, and Metastatic. They used grid-search optimization to define critical hyperparameters automatically. The proposed CNN model was compared with other popular CNN models such as AlexNet, Inception-v3, ResNet-50, VGG-16, and GoogLeNet, producing satisfactory classification results on large publicly available clinical datasets. Remaining drawbacks include the need for manual identification of tumor locations and the still-unsatisfactory accuracy of current techniques, given the importance of MRI classification in the medical field.
Other authors proposed a method to enhance classification performance using three feature extraction techniques: the intensity histogram, the gray-level co-occurrence matrix (GLCM), and the bag-of-words (BoW) model. They demonstrated that using an enlarged tumor region as the region of interest (ROI) improves the accuracy of these models: the intensity histogram accuracy improved from 71.39% to 82.31%, GLCM from 78.18% to 84.75%, and BoW from 83.54% to 88.19%. With ring partitioning, accuracy improved further, showing the effectiveness of their strategy for classifying brain cancers in T1-weighted CE-MRI images.
A further survey provided an overview of the challenges in brain tumor segmentation and discussed how deep learning techniques have addressed them. It reviewed various deep learning models, including CNNs, recurrent neural networks (RNNs), and generative adversarial networks (GANs), highlighted the advantages and limitations of these methods, and suggested future research directions. These studies collectively underscore the significant advancements and ongoing challenges in applying deep learning to brain tumor classification and segmentation in MRI images.
The architecture of a CNN-based model is highly customizable, with specific hyperparameters playing a pivotal role in shaping its overall structure. These hyperparameters, including the number of convolutional layers, the activation functions, and the number of hidden units per layer, significantly influence the model's ability to extract meaningful features from input images.
Figure 1: Flowchart of a System for Detecting Brain Tumors.
Our chosen model comprises five convolutional layers with varying numbers of filters, doubling in depth from 32 filters in the initial layer to 512 filters in the final layer to extract diverse features. Five max-pooling layers are employed to condense the information from the preceding convolutional layers. A flattening layer transforms the feature maps into one dimension, followed by a Dense layer with 128 units and a final Dense layer with 4 units, reflecting the four output classes. The softmax function is used in the last layer, as is standard for multi-class models, and ReLU activation functions are applied throughout the remaining layers because of their effectiveness compared with other functions. Figure 2 illustrates the layout of our proposed CNN architecture.
Brain tumor detection poses a significant challenge in medical imaging, and CNNs serve as potent tools for addressing this problem. CNNs, a type of DL algorithm, excel at analyzing image data and can be trained to identify specific features indicative of brain tumors. By processing medical images and recognizing distinctive patterns, CNNs can locate areas of the brain potentially harboring tumors.
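To make the description above concrete, the following is a minimal TensorFlow/Keras sketch of such an architecture. The filter counts, pooling layers, dense units, and activations follow the text; the 3x3 kernels, "same" padding, and 224x224x3 input shape are assumptions, as they are not specified in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), num_classes=4):
    """Five conv/max-pool blocks with filters doubling from 32 to 512."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 256, 512):
        # Convolution extracts local patterns; 3x3 kernels are an assumed choice.
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        # Max pooling condenses the feature maps from the previous layer.
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())                      # flatten to one dimension
    model.add(layers.Dense(128, activation="relu"))  # 128-unit dense layer
    model.add(layers.Dense(num_classes, activation="softmax"))  # Normal, Glioma, Meningioma, Pituitary
    return model

model = build_model()
model.summary()
```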
III. METHODOLOGY
The methodology for brain tumor detection using machine learning involves several key steps: data collection, data preprocessing, model selection, training, evaluation, and testing. Here’s a detailed outline of the process.
3.1: Dataset Collection:
The dataset utilized in this study is sourced from the "Brain Tumor MRI Dataset" available on Kaggle. This comprehensive collection comprises magnetic resonance imaging (MRI) scans of the brain obtained from a diverse group of individuals. Included within the dataset are MRI scans from both healthy subjects and individuals diagnosed with brain tumors, all meticulously annotated by medical professionals with expertise in the field.
Figure 3: Sample images from the dataset.
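As an illustration only, the snippet below shows one way to load such a dataset with TensorFlow/Keras, assuming the Kaggle archive has been unpacked into one sub-folder per class; the directory names, image size, and batch size are assumptions rather than details taken from the paper.

```python
import tensorflow as tf

# Hypothetical folder layout: one sub-directory per class under Training/ and Testing/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri/Training",   # assumed path to the unpacked Kaggle data
    image_size=(224, 224),
    batch_size=32,
    label_mode="categorical",
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri/Testing",    # assumed path
    image_size=(224, 224),
    batch_size=32,
    label_mode="categorical",
)
print(train_ds.class_names)       # the four tumour/normal classes
```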
3.2: Pre-Processing:
Preprocessing is crucial to prepare the raw data for analysis. This step typically includes the operations listed below; a minimal Keras sketch follows the list.
Resizing: Adjusting the images to a uniform size to ensure consistency; a common choice is 224x224 pixels.
Normalization: Scaling pixel values to a standard range (e.g., 0 to 1) to enhance the model's performance.
Augmentation: Applying transformations such as rotation, flipping, and zooming to increase the diversity of the training dataset and improve model robustness.
Segmentation (optional): Extracting regions of interest (ROI) if the focus is on specific areas of the brain.
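The snippet below is a minimal sketch of these resizing, normalization, and augmentation steps using Keras preprocessing layers; the specific flip mode and rotation and zoom factors are assumed values, not parameters reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Resizing and normalization applied to every image.
preprocess = tf.keras.Sequential([
    layers.Resizing(224, 224),     # uniform size
    layers.Rescaling(1.0 / 255),   # scale pixel values to [0, 1]
])

# Augmentation applied only during training; the factors are assumed values.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),    # up to roughly +/-36 degrees
    layers.RandomZoom(0.1),
])

def prepare(ds, training=False):
    """Attach preprocessing (and, for training data, augmentation) to a tf.data pipeline."""
    ds = ds.map(lambda x, y: (preprocess(x), y))
    if training:
        ds = ds.map(lambda x, y: (augment(x, training=True), y))
    return ds.prefetch(tf.data.AUTOTUNE)
```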
3.3: Segmentation:
Segmentation is the process of separating the region of interest from the rest of the image by grouping pixels that share similar attributes. The main advantage is that, instead of processing the entire image, only the relevant segments need to be processed. The most common technique is to detect the edges of the particular region, while other approaches such as thresholding, clustering, and region growing rely on detecting similarities within a region. Colour-based k-means clustering is implemented here; a minimal sketch is shown below.
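The following is an illustrative sketch of colour-based k-means segmentation of a single MRI slice using OpenCV and scikit-learn; the number of clusters (k = 4) and the file names are assumptions, not values taken from the paper.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def kmeans_segment(image_path, k=4):
    """Cluster pixel colours with k-means and return the segmented image and label map."""
    img = cv2.imread(image_path)                    # BGR image
    pixels = img.reshape(-1, 3).astype(np.float32)  # one row per pixel
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    # Replace every pixel by its cluster centre to visualise the segments.
    segmented = km.cluster_centers_[km.labels_].reshape(img.shape)
    return segmented.astype(np.uint8), km.labels_.reshape(img.shape[:2])

segmented_img, label_map = kmeans_segment("sample_mri.jpg")  # hypothetical file name
cv2.imwrite("segmented_mri.png", segmented_img)
```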
3.4: Feature Extraction:
Feature extraction is the module used to classify and identify the type of abnormality by extracting features from an image; we employ the Keras API within TensorFlow to determine the condition associated with a brain MRI scan. Many image processing and computer vision applications, including object recognition, image retrieval, and scene analysis, depend on feature extraction, and the precise task at hand and the properties of the images under examination determine which feature extraction technique is used.
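The paper does not detail how the learned features are read out of the network, but one common Keras pattern is to build a sub-model that stops at the Flatten layer of the CNN sketched in Section II, as in the hypothetical snippet below; it assumes the model has already been trained and reuses `model`, `prepare`, and `test_ds` from the earlier sketches.

```python
import tensorflow as tf

# Sub-model that maps each image to the flattened convolutional feature vector.
flatten_layer = next(l for l in model.layers
                     if isinstance(l, tf.keras.layers.Flatten))
feature_extractor = tf.keras.Model(inputs=model.input, outputs=flatten_layer.output)

features = feature_extractor.predict(prepare(test_ds))  # one feature vector per image
print(features.shape)
```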
3.5: Classification:
This module classifies the disease present in an acquired image and, by utilizing the suggestion model, provides the user with an appropriate recommendation.
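A minimal sketch of how such a classifier could be compiled, trained, and used for prediction is given below; the optimizer, loss, and number of epochs are assumptions, and the snippet reuses the `model`, `prepare`, `train_ds`, and `test_ds` objects from the earlier sketches.

```python
# Optimizer, loss, and epoch count are assumed values, not settings from the paper.
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    prepare(train_ds, training=True),
    validation_data=prepare(test_ds),
    epochs=20,
)

# Map predicted probabilities back to class names for the user-facing suggestion.
class_names = train_ds.class_names
probs = model.predict(prepare(test_ds))
predicted = [class_names[i] for i in probs.argmax(axis=1)]
```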
IV. RESULTS AND DISCUSSION
The results and discussion section presents the findings of the research conducted on brain tumour detection using machine learning. On the four-class task (Normal, Glioma, Meningioma, and Pituitary), the proposed CNN achieved an accuracy of 95.44%, a recall of 95%, and an F1-score of 95.36%.
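For reference, metrics of this kind can be computed on the test split with scikit-learn as sketched below; this is an illustrative recipe rather than the authors' exact evaluation code, and it reuses `model`, `prepare`, and `test_ds` from the earlier sketches.

```python
from sklearn.metrics import classification_report

# Collect ground-truth and predicted class indices over the test split.
y_true, y_pred = [], []
for images, labels in prepare(test_ds):
    probs = model.predict(images, verbose=0)
    y_true.extend(labels.numpy().argmax(axis=1))
    y_pred.extend(probs.argmax(axis=1))

# classification_report prints per-class precision, recall, and F1-score,
# along with overall accuracy and averaged scores.
print(classification_report(y_true, y_pred, target_names=test_ds.class_names))
```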
Figure 4: Healthy and tumour MRI scans.
V. CONCLUSION
Pinpointing brain tumours remains a significant challenge due to their diverse appearances, sizes, shapes, and internal structures. While tumour segmentation techniques have shown promise in MRI analysis and detection, significant improvements are still needed for accurate segmentation and classification of tumour regions. Existing methods face limitations in identifying tumour substructures and in classifying healthy versus unhealthy brain images. This work also aims to comprehensively cover the latest advancements in this field, highlighting the limitations and challenges researchers encounter. Deep learning approaches have made substantial contributions, but a more generalizable technique is still needed: these methods excel when training and testing data share similar acquisition characteristics (such as intensity range and resolution), yet even slight variations significantly impact their robustness. Future research should focus on enhancing brain tumour detection accuracy using real-world patient data from various sources (different scanners). Fusing handcrafted features with deep learning features could improve classification results, and exploring lightweight methods such as quantum machine learning holds promise for boosting accuracy and efficiency. This could lead to significant time savings for radiologists and ultimately improve patient survival rates.
VI. REFERENCES
  1. F. J. Díaz-Pernas, M. Martínez-Zarzuela, M. Antón-Rodríguez, and D. González-Ortega, “A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network,” Healthcare, vol. 9, no. 2, p. 153, Feb. 2021, doi: 10.3390/healthcare9020153.
  2. P. Tiwari et al., “CNN Based Multiclass Brain Tumor Detection Using Medical Imaging,” Computational Intelligence and Neuroscience, vol. 2022, pp. 1–8, Jun. 2022, doi: 10.1155/2022/1830010.
  3. O. Terrada, B. Cherradi, A. Raihani, and O. Bouattane, “Atherosclerosis disease prediction using Supervised Machine Learning Techniques,” in 2020 1st International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, Apr. 2020, pp. 1–5. doi: 10.1109/IRASET48871.2020.9092082.
  4. D. Lamrani, B. Cherradi, O. E. Gannour, M. A. Bouqentar, and L. Bahatti, “Brain Tumor Detection using MRI Images and Convolutional Neural Network,” IJACSA, vol. 13, no. 7, 2022, doi: 10.14569/IJACSA.2022.0130755.
  5. S. Laghmati, B. Cherradi, A. Tmiri, O. Daanouni, and S. Hamida, “Classification of Patients with Breast Cancer using Neighbourhood Component Analysis and Supervised Machine Learning Techniques,” in 2020 3rd International Conference on Advanced Communication Technologies and Networking (CommNet), Marrakech, Morocco, Sep. 2020, pp. 1–6. doi: 10.1109/CommNet49926.2020.9199633.
  6. O. El Gannour et al., “Concatenation of Pre-Trained Convolutional Neural Networks for Enhanced COVID-19 Screening Using Transfer Learning Technique,” Electronics, vol. 11, no. 1, p. 103, Dec. 2021, doi: 10.3390/electronics11010103.
  7. H. Moujahid, B. Cherradi, and L. Bahatti, “Convolutional Neural Networks for Multimodal Brain MRI Images Segmentation: A Comparative Study,” in Smart Applications and Data Analysis, vol. 1207, M. Hamlich, L. Bellatreche, A. Mondal, and C. Ordonez, Eds. Cham: Springer International Publishing, 2020, pp. 329–338. doi: 10.1007/978-3-030-45183-7_25.
  8. H. Moujahid, B. Cherradi, M. Al-Sarem, and L. Bahatti, “Diagnosis of COVID-19 Disease Using Convolutional Neural Network Models Based Transfer Learning,” in Innovative Systems for Intelligent Health Informatics, vol. 72, F. Saeed, F. Mohammed, and A. Al-Nahari, Eds. Cham: Springer International Publishing, 2021, pp. 148–159. doi: 10.1007/978-3-030-70713-2_16.
  9. O. Terrada, A. Raihani, O. Bouattane, and B. Cherradi, “Fuzzy cardiovascular diagnosis system using clinical data,” in 2018 4th International Conference on Optimization and Applications (ICOA), Mohammedia, Apr. 2018, pp. 1–4. doi: 10.1109/ICOA.2018.8370549.
  10. S. Hamida, O. E. Gannour, B. Cherradi, H. Ouajji, and A. Raihani, “Optimization of Machine Learning Algorithms Hyper-Parameters for Improving the Prediction of Patients Infected with COVID-19,” in 2020 IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), Kenitra, Morocco, Dec. 2020, pp. 1–6. doi: 10.1109/ICECOCS50124.2020.9314373.
  11. O. Asmae, R. Abdelhadi, C. Bouchaib, S. Sara, and K. Tajeddine, “Parkinson’s Disease Identification using KNN and ANN Algorithms based on Voice Disorder,” in 2020 1st International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, Apr. 2020, pp. 1–6. doi: 10.1109/IRASET48871.2020.9092228.
  12. O. El Gannour, S. Hamida, B. Cherradi, A. Raihani, and H. Moujahid, “Performance Evaluation of Transfer Learning Technique for Automatic Detection of Patients with COVID-19 on X-Ray Images,” in 2020 IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), Kenitra, Morocco, Dec. 2020, pp. 1–6. doi: 10.1109/ICECOCS50124.2020.9314458.
  13. O. Daanouni, B. Cherradi, and A. Tmiri, “Predicting diabetes diseases using mixed data and supervised machine learning algorithms,” in Proceedings of the 4th International Conference on Smart City Applications, Casablanca Morocco, Oct. 2019, pp. 1–6. doi: 10.1145/3368756.3369072.
  14. O. Terrada, B. Cherradi, S. Hamida, A. Raihani, H. Moujahid, and O. Bouattane, “Prediction of Patients with Heart Disease using Artificial Neural Network and Adaptive Boosting techniques,” in 2020 3rd International Conference on Advanced Communication Technologies and Networking (CommNet), Marrakech, Morocco, Sep. 2020, pp. 1–6. doi: 10.1109/CommNet49926.2020.9199620.
  15. L. Hua, Y. Gu, X. Gu, J. Xue, and T. Ni, “A Novel Brain MRI Image Segmentation Method Using an Improved Multi-View Fuzzy c-Means Clustering Algorithm,” Front. Neurosci., vol. 15, p. 662674, Mar. 2021, doi: 10.3389/fnins.2021.662674.
  16. S. Hamida, B. Cherradi, O. Terrada, A. Raihani, H. Ouajji, and S. Laghmati, “A Novel Feature Extraction System for Cursive Word Vocabulary Recognition using Local Features Descriptors and Gabor Filter,” in 2020 3rd International Conference on Advanced Communication Technologies and Networking (CommNet), Marrakech, Morocco, Sep. 2020, pp. 1–7. doi: 10.1109/CommNet49926.2020.9199642.
  17. S. Hamida, B. Cherradi, and H. Ouajji, “Handwritten Arabic Words Recognition System Based on HOG and Gabor Filter Descriptors,” in 2020 1st International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, Apr. 2020, pp. 1–4. doi: 10.1109/IRASET48871.2020.9092067.