Contemporary Oncology/Współczesna Onkologia
Original paper

Brain tumour detection from magnetic resonance imaging using convolutional neural networks

Irene Rethemiotaki 1

1. School of Electrical and Computer Engineering, Technical University of Crete, Chania, Crete, Greece
Contemp Oncol (Pozn) 2023; 27 (4): 230–241
Online publish date: 2024/02/10

Introduction

A brain tumour is an abnormal mass of tissue inside the skull in which cells grow and multiply uncontrollably. Brain tumours are classified based on their speed of growth and the likelihood of them growing back after treatment. They are mainly divided into 2 overall categories: malignant and benign. Benign tumours are not cancerous; they grow slowly and are less likely to return after treatment. Malignant tumours, on the other hand, are made up of cancer cells; they can invade the surrounding tissue locally or spread to different parts of the body, a process called metastasis [1]. Most patients who develop brain metastases have a known primary cancer. Brain metastases or metastatic brain tumours are the most common intracranial neoplasm in adults and are a significant cause of morbidity and mortality in patients with advanced cancer. Most brain metastases originate from lung cancer (40–50%), breast cancer (15–25%), or melanoma (5–20%), but renal cell carcinoma, colon cancer, and gynaecological malignancies also make up a significant fraction [2]. While lung cancer accounts for most brain metastases, melanoma has the highest propensity to disseminate to the brain; 50% of patients with advanced melanoma eventually develop metastatic brain disease [3]. The most common brain tumours are meningioma, glioma, and pituitary tumours. Among these 3, meningioma is the most important: a primary, slow-growing brain tumour formed by the meninges – the membrane layers surrounding the brain and spinal cord [4]. On the other hand, gliomas account for 78% of malignant brain tumours and arise from the brain’s supporting cells, i.e. the glia. More specifically, gliomas result from mutations of glial cells that turn normal cells malignant; the most common type is the astrocytoma, a tumour of the brain or spinal cord [5]. The phenotypical makeup of gliomas can consist of astrocytomas, oligodendrogliomas, or ependymomas [6]. Another type of brain tumour is the pituitary tumour, which is caused by excessive growth of cells in the pituitary gland of the brain [7]. Most pituitary tumours are non-cancerous (benign). Benign pituitary gland tumours are also called pituitary adenomas or neuroendocrine tumours, according to the recently revised fifth edition of the World Health Organization guidelines [8]. Brain tumours can lead to death if not treated, so early diagnosis is of utmost importance [9].

According to Global Cancer Statistics 2020, 308.1 thousand new brain cancer cases were diagnosed worldwide in 2020, and 251.3 thousand cancer-related deaths occurred [10]. The impact of brain tumours is more significant in the United States than in other countries, with approximately 86.9 thousand cases of brain tumours diagnosed in 2019 alone [11]. Magnetic resonance imaging (MRI) scanning is commonly utilized by physicians for accurate diagnosis of brain tumours without surgery [12]. In addition to producing high-resolution pictures with great contrast, MRI also has the benefit of being a radiation-free technology. For this reason, it is the preferred noninvasive imaging technique for identifying many forms of brain malignancy [13].

Nowadays, machine learning is widely used for recognizing and classifying patterns in medical images. It provides the ability to automatically retrieve and extract knowledge from data and supports diagnostic accuracy. Therefore, machine learning, especially deep learning, is a useful tool for improving performance in various medical applications, including disease prediction and diagnosis, molecular and cellular structure recognition, tissue segmentation, and image classification [14–18]. In image recognition and classification, the most successful techniques used today are convolutional neural networks (CNNs) because they have many layers and high diagnostic accuracy, especially when the number of input images is large [18, 19]. Moreover, the use of such methods paves the way for accurate identification of tumours and for distinguishing them from other, similar diseases.

In this work, GoogleNet is proposed for the automatic classification of brain tumours. Eleven other CNN architectures were evaluated to determine whether there was a notable difference in performance between them and the proposed method.

Material and methods

The data used in this work consist of 3264 MRI images, of which 926 are glioma images, 937 are meningiomas, 901 are pituitary gland tumours, and 500 are healthy brains [20]. Figure 1 shows examples of images of the different tumour types. The MRI images in this dataset had different sizes, and because they form the input to the networks, they were resized to 100 × 100 pixels. Of the 3264 images, 80% (2611) were used as training data and 20% (653) as validation data.
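As an illustration, the resizing and the 80/20 split described above can be reproduced with Keras utilities; the directory name below and the use of tf.keras.utils.image_dataset_from_directory are assumptions for this sketch, not part of the published pipeline.

import tensorflow as tf

IMG_SIZE = (100, 100)          # images reduced to 100 x 100 pixels
BATCH_SIZE = 64                # batch size used during training
DATA_DIR = "brain_tumor_mri"   # hypothetical folder with one subfolder per class

# 80% of the images for training, 20% for validation
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)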

Convolutional neural networks are a means of processing complex data inspired by the function of human neurons and senses [21]. They are capable of “learning” and analysing complex data sets using a series of interconnected processors and computational pathways [22]. Figure 2 shows the CNN architecture. A CNN contains 3 types of layers: convolutional layers, alternating pooling layers, and fully connected layers. The output of the last pooling layer is flattened into a one-dimensional vector so that it can be forwarded to the fully connected layers. Classification of the data into classes was based on the Softmax activation function. Batch normalization and regularization were used to avoid overfitting, and the rectified linear unit (ReLU) was used as the activation function. To improve performance, Adam was used as the optimizer with a learning rate of 0.001. Training was run for 100 epochs with a batch size of 64; the duration of each epoch depended on the neural network used. The structure of each neural network is shown in Table 1.
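The following minimal Keras sketch illustrates the generic pipeline described above (convolution, pooling, flattening, fully connected layers with Softmax, ReLU activations, batch normalization, dropout regularization, and the Adam optimizer with a learning rate of 0.001); the number of layers and filter sizes are illustrative only and do not reproduce any specific architecture from Table 1.

from tensorflow.keras import layers, models, optimizers

num_classes = 4  # glioma, meningioma, pituitary tumour, healthy brain

model = models.Sequential([
    layers.Input(shape=(100, 100, 3)),
    layers.Rescaling(1.0 / 255),                       # scale pixel values to [0, 1]
    layers.Conv2D(32, 3, activation="relu"),           # convolutional layer with ReLU
    layers.BatchNormalization(),                       # batch normalization against overfitting
    layers.MaxPooling2D(),                             # alternating pooling layer
    layers.Conv2D(64, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),                                  # flatten last pooling output to 1-D
    layers.Dense(128, activation="relu"),              # fully connected layer
    layers.Dropout(0.5),                               # regularization
    layers.Dense(num_classes, activation="softmax"),   # class probabilities via Softmax
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),    # Adam with learning rate 0.001
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)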

Fig. 1

Different samples of magnetic resonance imaging: glioma, meningioma, pituitary tumour, and healthy brain

Fig. 2

Architecture of the convolutional neural network

Table 1

Structure of convolutional neural networks

Parameters | Convolutional layers | Pooling layers | Dense layers
MobileNetV2 | 53 | 13 | 1
GoogleNet | 22 | 2 | 1
Xception | 36 | 2 | 2
DenseNet-BC | 100 | 3 | 1
MobileNetV2 with meta pseudo labels | 53 | 13 | 1
ResNet-50 | 48 | 5 | 1
SqueezeNet | 11 | 4 | 2
ShuffleNet | 42 | 5 | 3
VGG-16 | 13 | 5 | 3
AlexNet | 5 | 3 | 0
Enet | 40 | 1 | 1
EfficientNetB0 | 54 | 4 | 2

The metrics used to evaluate the performance of each neural network in classifying MRI images into glioma, meningioma, pituitary tumour, and healthy brain categories included accuracy, precision, recall, F-measure, false positive rate (FPR), and true negative rate. The equations for these metrics are shown below:

Precision = TP / (TP + FP)         (1)
Recall = TP / (TP + FN)         (2)
F-measure = 2 × (precision × recall) / (precision + recall)         (3)
Accuracy = (TP + TN) / (TP + TN + FP + FN)         (4)
False positive rate = FP / (FP + TN)         (5)
True negative rate = TN / (TN + FP)         (6)

TP – true positive, TN – true negative, FP – false positive, FN – false negative

To evaluate classifiers and visualize their performance, receiver operating characteristic (ROC) and confusion matrix diagrams can be useful to describe the results. More specifically, the ROC curve is created by plotting the rate of true positives (TPR) against the FPR, where maximizing TPR and minimizing FPR are ideal outcomes. The confusion matrix allows us to see if there are confounding results or overlaps between classes. It is very important to reduce false positives and false negatives in the modelling process.
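For illustration, the per-class metrics, the confusion matrix, and one-vs-rest ROC curves could be computed with scikit-learn as sketched below; the variables y_true (ground-truth class labels) and y_prob (predicted class probabilities on the validation set) are assumed to exist and their names are hypothetical.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, auc
from sklearn.preprocessing import label_binarize

class_names = ["glioma", "meningioma", "pituitary", "no tumour"]
y_pred = np.argmax(y_prob, axis=1)          # predicted class per image

# precision, recall, and F-measure per class (as in Table 3)
print(classification_report(y_true, y_pred, target_names=class_names))

# confusion matrix (as in Figure 5): rows are true classes, columns are predictions
print(confusion_matrix(y_true, y_pred))

# one-vs-rest ROC curves and AUC per class (as in Figure 4)
y_true_bin = label_binarize(y_true, classes=list(range(len(class_names))))
for i, name in enumerate(class_names):
    fpr, tpr, _ = roc_curve(y_true_bin[:, i], y_prob[:, i])
    print(f"{name}: AUC = {auc(fpr, tpr):.3f}")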

The entire pipeline in this study was implemented in Keras with the TensorFlow backend. The neural networks were designed and trained in the Jupyter environment.
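A minimal training and evaluation call consistent with the settings reported above (100 epochs, batch size 64 supplied by the dataset pipeline) might look as follows, assuming the model and the train/validation datasets from the earlier sketches.

# train for 100 epochs; the batch size of 64 is set in the dataset pipeline
history = model.fit(train_ds, validation_data=val_ds, epochs=100)

# report validation accuracy after training
val_loss, val_acc = model.evaluate(val_ds)
print(f"Validation accuracy: {val_acc:.3f}")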

Results

Table 2 shows the results of the 12 convolutional neural networks with their accuracy values as well as the duration of each epoch per neural network. As can be seen, the highest validation accuracy, 97%, was achieved by GoogleNet. MobileNetV2 (96.4%) and Xception (94.5%) also achieve high accuracy. Figure 3 shows the training progress for each CNN, i.e. the accuracy during the training and validation process. The precision, recall, and F-measure for the 4 categories obtained from the 12 CNNs are summarized in Table 3. GoogleNet has the highest precision (97%), recall (95%), and F-measure (96%) for classifying gliomas. For the classification of meningiomas, GoogleNet has the highest precision (98%), ResNet-50 has the highest recall (98%), and GoogleNet has the highest F-measure (97%). For the classification of pituitary tumours, MobileNetV2 has the highest precision (99%), and DenseNet-BC has the highest recall (100%) and F-measure (99%). For healthy brain classification, ResNet-50 has the highest precision (98%), while MobileNetV2 has the highest recall (98%) and F-measure (97%).

Fig. 3

Training progress: accuracy score during training and validation process

Table 2

Accuracy score and duration time of each epoch per neural network

Parameters | Accuracy score | Duration/epoch (sec)
MobileNetV2 | 0.964 | 21–30
GoogleNet | 0.97 | 31–50
Xception | 0.944 | 93–199
DenseNet-BC | 0.94 | 75–107
MobileNetV2 with meta pseudo labels | 0.937 | 18–30
ResNet-50 | 0.916 | 72–140
SqueezeNet | 0.767 | 5–6
ShuffleNet | 0.821 | 43–112
VGG-16 | 0.791 | 244–1148
AlexNet | 0.763 | 15–20
Enet | 0.743 | 16–20
EfficientNetB0 | 0.70 | 20–57
Table 3

Precision, recall, and F-measure of the convolutional neural networks

Parameters | Precision | Recall | F1 score
MobileNetV2
Glioma | 0.97 | 0.94 | 0.95
Meningioma | 0.93 | 0.97 | 0.95
Pituitary | 0.99 | 0.98 | 0.99
No tumour | 0.97 | 0.98 | 0.97
GoogleNet
Glioma | 0.97 | 0.95 | 0.96
Meningioma | 0.98 | 0.97 | 0.97
Pituitary | 0.98 | 0.99 | 0.99
No tumour | 0.94 | 0.98 | 0.96
Xception
Glioma | 0.96 | 0.92 | 0.94
Meningioma | 0.94 | 0.93 | 0.93
Pituitary | 0.94 | 0.98 | 0.96
No tumour | 0.92 | 0.95 | 0.93
DenseNet-BC
Glioma | 0.92 | 0.91 | 0.92
Meningioma | 0.91 | 0.92 | 0.92
Pituitary | 0.98 | 1.00 | 0.99
No tumour | 0.95 | 0.92 | 0.93
MobileNetV2 with meta pseudo labels
Glioma | 0.90 | 0.94 | 0.92
Meningioma | 0.94 | 0.91 | 0.92
Pituitary | 0.97 | 0.99 | 0.98
No tumour | 0.95 | 0.89 | 0.92
ResNet-50
Glioma | 0.94 | 0.90 | 0.92
Meningioma | 0.82 | 0.98 | 0.89
Pituitary | 0.98 | 0.96 | 0.97
No tumour | 0.98 | 0.71 | 0.82
SqueezeNet
Glioma | 0.88 | 0.77 | 0.82
Meningioma | 0.81 | 0.89 | 0.85
Pituitary | 0.91 | 0.97 | 0.94
No tumour | 0.94 | 0.83 | 0.88
ShuffleNet
Glioma | 0.78 | 0.81 | 0.79
Meningioma | 0.76 | 0.76 | 0.76
Pituitary | 0.92 | 0.95 | 0.93
No tumour | 0.86 | 0.72 | 0.79
VGG-16
Glioma | 0.96 | 0.53 | 0.69
Meningioma | 0.69 | 0.88 | 0.77
Pituitary | 0.87 | 0.97 | 0.92
No tumour | 0.67 | 0.79 | 0.72
AlexNet
Glioma | 0.60 | 0.96 | 0.74
Meningioma | 0.93 | 0.34 | 0.49
Pituitary | 0.98 | 0.94 | 0.96
No tumour | 0.70 | 0.90 | 0.79
Enet
Glioma | 0.71 | 0.66 | 0.68
Meningioma | 0.66 | 0.74 | 0.70
Pituitary | 0.89 | 0.82 | 0.85
No tumour | 0.75 | 0.79 | 0.77
EfficientNetB0
Glioma | 0.71 | 0.50 | 0.59
Meningioma | 0.64 | 0.72 | 0.68
Pituitary | 0.81 | 0.89 | 0.85
No tumour | 0.64 | 0.69 | 0.66

Figure 4 A shows the ROC curves of the CNNs with validation accuracy of more than 90%, along with the area under the curve (AUC) for the 4 categories of glioma, meningioma, pituitary tumour, and healthy brain. Conversely, Figure 4 B shows the ROC curves of the CNNs with validation accuracy of up to 90%. From the ROC curves, it can be seen that the AUC values are 100% and 99% for MobileNetV2, indicating the consistency of the model. Figure 5 A shows the confusion matrices of the CNNs with an accuracy of more than 90%, together with the percentages of correct classification in the validation data, while Figure 5 B shows the confusion matrices of the CNNs with an accuracy of up to 90%. As can be seen from the diagonal of the matrix, GoogleNet achieves 97% validation accuracy, while MobileNetV2 achieves 96.4%. From Figure 5 A it can be seen that a total of 197, 157, and 177 MRI images are correctly classified for meningioma, glioma, and pituitary tumour, respectively, while only 19 MRI images are misclassified by the GoogleNet architecture.

Fig. 4

Receiver operating characteristic plots for the deep convolutional neural network models with validation accuracy over 90% (A)

Fig. 4. Cont.

Receiver operating characteristic plots for the deep convolutional neural network models with validation accuracy under 90% (B)

Fig. 5

Confusion matrices for the deep convolutional neural network models with validation accuracy of more than 90% (A)

Fig. 5. Cont.

Confusion matrices for the deep convolutional neural network models with validation accuracy of up to 90% (B)


Discussion

The main goal of this study was to develop 12 deep learning networks to classify MRI images into 4 classes (glioma, meningioma, pituitary tumour, and healthy brain) and to propose the network with the highest accuracy. Many researchers have tried to solve the same problem (multiclass classification of brain tumours) with different CNN models. The studies that addressed the same problem are listed in Table 4. Specifically, Gumaei et al. [23] used a hybrid feature extraction approach with a regularized extreme learning machine to classify brain tumour types. This method, based on a feedforward neural network (FNN), achieved an accuracy of 94.23%, which is quite low compared with similar studies. Sajjad et al. [24] and Swati et al. [25] both applied the pre-trained VGG19 model to a dataset of 3064 MRI images and achieved almost the same performance of 94.5% and 94.8%, respectively. Abiwinanda et al. [26] proposed a CNN model for brain tumour classification using only 700 MRI images and achieved a classification accuracy of 84.1%, which is also quite low compared with similar studies. Sultan et al. [27] proposed a CNN architecture composed of 16 layers and achieved an accuracy of 96.13%. Anaraki et al. [28] introduced a genetic algorithm combined with a CNN for brain tumour prediction. The genetic algorithm, however, does not always achieve good accuracy when combined with a CNN, resulting in an accuracy of only 94.2%. Badža et al. [29] performed brain tumour detection using a CNN with 2 convolutional layers and achieved 95.40% accuracy for augmented images. However, an accuracy of 95.40% is sub-optimal compared with the results obtained with the networks used in the present study.

Table 4

Comparative analysis of proposed work with previous works

Parameter | Accuracy score | Method
Gumaei et al. [23] | 94.23 | FNN
Sajjad et al. [24] | 94.50 | VGG-19
Swati et al. [25] | 94.80 | VGG-19
Abiwinanda et al. [26] | 84.10 | CNN
Sultan et al. [27] | 96.10 | CNN
Anaraki et al. [28] | 94.20 | CNN + GA
Badza et al. [29] | 96.56 | CNN
Gunasekara et al. [30] | 92.00 | CNN and AC
Masood et al. [31] | 95.90 | RCNN + ResNet-50
Díaz-Pernas et al. [32] | 97.00 | CNN
Irmak [33] | 92.60 | CNN
Saeedi et al. [34] | 93.44 | CNN + AE
Proposed CNN | 97.00 | GoogleNet (best of 12 CNNs)

[i] FNN – feedforward neural network, GA – genetic algorithm, AC – active contour, RCNN – region-based convolutional neural network, AE – autoencoder

In another study, Gunasekara et al. [30] segmented the tumour region using the active contour approach, which uses energy forces and constraints to extract critical pixels from an image for further processing and interpretation. However, this method has drawbacks, such as getting stuck in local minimum states, which limited its accuracy to 92%. Masood et al. [31] combined a Mask region-based convolutional neural network (Mask RCNN) with the ResNet-50 model for tumour region localization and achieved 95% classification accuracy. However, more sophisticated object detection models such as YOLO and Faster RCNN perform much better than Mask RCNN. Díaz-Pernas et al. [32] and Irmak [33] both used CNN models and achieved accuracies of 97% and 92.6%, respectively. However, both models have drawbacks because they are computationally expensive and do not provide a method for system validation. The most recent study on brain tumour detection is by Saeedi et al. [34], who trained a new 2D CNN and a convolutional autoencoder network, which achieved accuracies of 93.44% and 90.92%, respectively; these results are not optimal compared with those of the networks used in this study.

GoogleNet has been proven to be the best choice not only for brain tumour detection and classification, but also for the detection and classification of breast cancer [35], lung cancer [36, 37], colon cancer [38], cervical cancer [39], skin cancer [40], and laryngeal cancer [41]. The efficiency of GoogleNet has been demonstrated not only by comparing it with various CNNs but also by comparing it with various state-of-the-art approaches such as support vector machines, extreme learning machines, and particle swarm optimization methods [40].

Conclusions

This study represents a significant contribution to the application of artificial intelligence and machine learning in brain tumour detection from MRI images. Prediction of diseases and their progression, such as tumours, is a critical challenge in the care, treatment, and research of complex heterogeneous diseases. The significance of this study lies not only in achieving the highest accuracy (97%) to date in brain tumour classification, but also in being the only study to compare the performance of so many neural networks simultaneously.

Notes

Conflicts of interest: The author declares no conflict of interest.

References

1 

Fritz A, Percy C, Jack A, et al. International classification of diseases for oncology. 3rd ed. World Health Organization, Geneva, Switzerland 2000.

2 

Saha A, Ghosh SK, Roy C, Choudhury KB, Chakrabarty B, Sarkar R. Demographic and clinical profile of patients with brain metastases: a retrospective study. Asian J Neurosurg 2013; 8: 157-161.

3 

Sherman WJ, Romiti E, Michaelides L, et al. Systemic therapy for melanoma brain and leptomeningeal metastases. Curr Treat Options Oncol 2023; 24: 1962-1977.

4 

Louis DN, Perry A, Reifenberger G, et al. The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta Neuropathol 2016; 131: 803-820.

5 

Goodenberger ML, Jenkins RB. Genetics of adult glioma. Cancer Genet 2012; 205: 613-621.

6 

Chatterjee S, Nizamani FA, Nürnberger A, et al. Classification of brain tumours in MR images using deep spatiospatial models. Sci Rep 2022; 12: 1505.

7 

Khan SI, Rahman A, Debnath T, et al. Accurate brain tumor detection using deep convolutional neural network. Comp Structural Biotechnol J 2022; 20: 4733-4745.

8 

WHO Classification of Tumours Editorial Board. Central nervous system tumours [Internet]. Lyon (France), International Agency for Research on Cancer, 2021 (WHO classification of tumours series, 5th ed., vol. 6).

9 

Meng Y, Tang C, Yu J, et al. Exposure to lead increases the risk of meningioma and brain cancer: a meta-analysis. J Trace Elem Med Biol 2020; 60: 126474.

10 

Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2021; 71: 209-249.

11 

Hollon TC, Pandian B, Adapa AR, et al. Near real-time intraoperative brain tumor diagnosis using stimulated raman histology and deep neural networks. Nat Med 2020; 26: 52-58.

12 

Saeedi S, Rezayi S, Keshavarz H, et al. MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Med Inform Decis Mak 2023; 23: 16.

13 

Tandel GS, Biswas M, Kakde OG, et al. A review on a deep learning perspective in brain cancer classification. Cancers 2019; 11: 111.

14 

Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Ann Rev Biomed Eng 2017; 19: 221-248.

15 

Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60-88.

16 

Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10: 257-273.

17 

Hijazi S, Kumar R, Rowen C. Using convolutional neural networks for image recognition. San Jose Cadence Design Systems Inc 2015: 1-12.

18 

Pattanaik BB, Anitha K, Rathore S, Biswas P, Sethy PK, Behera SK. Brain tumor magnetic resonance images classification based machine learning paradigms. Contemp Oncol (Pozn) 2022; 26: 268-274.

19 

O’Shea K, Nash R. An introduction to convolutional neural networks. arXiv 2015; 1511.08458.

20 

Kaggle. Brain tumor classification (MRI). Available from: https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri/.

21 

Liu W, Wang Z, Liu X, et al. A survey of deep neural network architectures and their applications. Neurocomputing 2017; 234: 11-26.

22 

Zaušková A, Lyakina M, Tretyak V, et al. Application of artificial neural networks to cost factors stimulating innovation–the case of Slovakia. Ekonom Manaz Spectrum 2020; 14: 97-105.

23 

Gumaei A, Hassan MM, Hassan MR, et al. A Hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access 2019; 7: 36266-36273.

24 

Sajjad M, Khan S, Muhammad K, et al. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J Comput Sci 2019; 30: 174-182.

25 

Swati ZNK, Zhao Q, Kabir M, et al. Brain tumor classification for mr images using transfer learning and fine-tuning. Comput Med Imaging Graph 2019; 75: 34-46.

26 

Abiwinanda N, Hanif M, Hesaputra ST, et al. Brain tumor classification using convolutional neural network. In: World Congress on Medical Physics and Biomedical Engineering 2018. Springer, Singapore 2019, 183-189.

27 

Sultan HH, Salem NM, Al-Atabany W. Multi-classification of brain tumor images using deep neural network. IEEE Access 2019; 7: 69215-69225.

28 

Anaraki AK, Ayati M, Kazemi F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern Biomed Eng 2019; 39: 63-74.

29 

Badža MM, Barjaktarović MČ. Classification of brain tumors from MRI images using a convolutional neural network. Appl Sci 2020; 10: 1999.

30 

Gunasekara SR, Kaldera HN, Kaldera MB. A systematic approach for MRI brain tumor localization and segmentation using deep learning and active contouring. J Healthcare Eng 2021; Article ID 6695108.

31 

Masood M, Nazir T, Nawaz M, et al. A novel deep learning method for recognition and classification of brain tumors from MRI images. Diagnostics 2021; 11: 744.

32 

Díaz-Pernas FJ, Martínez-Zarzuela M, Antón-Rodríguez M, et al. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 2021; 9: 153.

33 

Irmak E. Multi-classification of brain tumor MRI images using deep convolutional neural network with fully optimized framework. Iran J Sci Technol Trans Electr Eng 2021; 45: 1015-1036.

34 

Saeedi S, Rezayi S, Keshavarz H, et al. MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Med Inform Decis Mak 2023; 23: 16.

35 

Chen SH, Wu YL, Pan CY, et al. Breast ultrasound image classification and physiological assessment based on GoogLeNet. J Radiat Res Appl Sci 2023; 16: 100628.

36 

Samundeeswari P, Gunasundari R. An efficient fully automated lung cancer classification model using googlenet classifier. J Circuits Systems Comp 2023; 32: 2350246.

37 

Nibid L, Greco C, Cordelli E, et al. Deep pathomics: a new image-based tool for predicting response to treatment in stage III non-small cell lung cancer. PLoS One 2023; 18: e0294259.

38 

Kumar VTRP, Arulselvi M, Sastry KBS. Colon cancer classification using GoogleNet. 10th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India 2023.

39 

Kalbhor MM, Shinde SV. Cervical cancer diagnosis using convolution neural network: feature learning and transfer learning approaches. Soft Comput 2023.

40 

Panneerselvam R, Balasubramaniam S. Multi-class skin cancer classification using a hybrid dynamic salp swarm algorithm and weighted extreme learning machines with transfer learning. Acta Informatica Pragensia 2023; 12: 141-159.

41 

You Z, Han B, Shi Z. Vocal cord leukoplakia classification using deep learning models in white light and narrow band imaging endoscopy images. Head Neck 2023; 45: 3129-3145.

Copyright: © 2024 Termedia Sp. z o. o. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.
 