eISSN: 1897-4309
ISSN: 1428-2526
Contemporary Oncology/Współczesna Onkologia
4/2022
vol. 26
 
Original paper

Brain tumor magnetic resonance image classification based on machine learning paradigms

Baby Barnali Pattanaik 1, Komma Anitha 2, Shanti Rathore 3, Preesat Biswas 4, Prabira Kumar Sethy 1, Santi Kumari Behera 5
  1. Department of Electronics, Sambalpur University, Burla, India
  2. ECE Department, PVP Siddhartha Institute of Technology, Vijayawada, India
  3. Department of ET and T, Dr. C. V. Raman University, Bilaspur, Chhattisgarh, India
  4. Department of ET and T, GEC Jagdalpur, Chhattisgarh, India
  5. Department of Computer Science and Engineering, VSSUT Burla, India
Contemp Oncol (Pozn) 2022; 26 (4): 268–274
Online publish date: 2023/01/30

Introduction

A brain tumour is a mass of abnormally growing cells in the brain or skull. Some brain tumours are benign, while others are malignant. Tumours can originate in the brain itself (primary) or spread to the brain from cancer elsewhere in the body (metastasis). Treatment options depend on the type of tumour, its size, and its location. The World Health Organization developed a classification and grading system to standardise how brain tumours are discussed, how treatments are planned, and how outcomes are predicted. Figure 1 shows how tumours are categorised by the type of cells they arise from or by their site of origin.

Fig. 1

Anatomical view of brain with location of tumour


This research aims to develop automated approaches that assist medical professionals in making diagnoses, reducing the likelihood of incorrect diagnoses and allowing complex cases to receive higher priority. In particular, the goal of this research is to automate the classification of different types of brain tumour from images of patients’ brains. When analysing brain scans, a radiologist must examine several image slices to reach a proper diagnosis, which is time consuming. We aim to identify the different types of brain tumour confidently, to increase the efficiency with which care is provided while leaving the most difficult diagnoses to medical specialists.

It is difficult to automatically detect different forms of brain tumours. Previous studies have developed specialised algorithms for the automated classification of brain tumours. Paul et al. suggested a deep learning system for classifying brain images containing several types of tumours, including meningioma, glioma, and pituitary [1]. In that study, 989 T1-weighted axial images were considered, and 91.43% accuracy was attained with 5-fold cross-validation. A convolutional neural network (CNN) was developed by Abiwinanda et al. to automatically classify the 3 most frequent forms of brain tumour, namely gliomas, meningiomas, and pituitary tumours, without any region-based pre-processing steps [2]. A total of 3064 T1-weighted contrast-enhanced magnetic resonance images (MRI), publicly available on Figshare [3], were used in that study. The experiment showed a training accuracy of 98.51% and a validation accuracy of 84.19%. Abd El Kader et al. presented a differential deep convolutional neural network (differential deep CNN) model to categorise different kinds of brain MRI, including abnormal and normal images [4]. To obtain more differential feature maps, differential operators are utilised in the extraction process.

The differential deep CNN model has several benefits: it can analyse the pixel directional pattern of images by applying contrast calculations, and it has a high capacity to categorise a large image database. The dataset utilised in that study consisted of 25,000 brain MRI scans, including both abnormal and normal images, and the model achieved an accuracy of 99.25%. Ali et al. introduced a technique called extreme learning machine local receptive fields for classifying tumorous and non-tumorous brain MRIs [5]. In that work, coronal views of 9 patients were acquired for training, while 7 patients were imaged for testing; information about the size of the dataset was not provided. The authors reported an accuracy of 97.18%. A fully automatic model for the segmentation and classification of brain tumours was presented by Díaz-Pernas et al. [6]; it uses a deep CNN with a multiscale approach. Meningiomas, gliomas, and pituitary tumours are the 3 tumour types classified in that work. The dataset contains 3064 images with sagittal, coronal, and axial views, and the reported accuracy was 97.3%. For 3-way (meningioma, glioma, and pituitary) classification of brain MRI, Saleh et al. analysed and compared the performance of 5 pre-trained models: Xception, ResNet50, InceptionV3, VGG16, and MobileNet [7]. The dataset contained 4480 images separated into training, validation, and testing (unseen images) sets. The training set contained 2880 images, including 520 of each form of brain tumour; the validation set comprised 800 images, with 200 of each tumour type; and the testing set contained 800 images, again with 200 of each type. The resulting F1-scores were 98.75% for Xception, 98.50% for ResNet50, 98.00% for InceptionV3, 97.50% for VGG16, and 97.25% for MobileNet. Glioma, pituitary, and meningioma are the 3 categories of brain images that can be classified using the approach proposed by Deepak et al. [8], which combines deep features with GoogLeNet. In that case, 3096 images were employed, and with 5-fold cross-validation an average accuracy of 98% was reached. Swati et al. improved VGG19 using a block-wise fine-tuning method [9]. Under 5-fold cross-validation on the T1-weighted images publicly available in the Figshare repository, the modified model attained an average accuracy of 94.82%. A large number of studies on classification and segmentation have been reported, both with traditional machine learning [10–13] and with the more recently adopted deep learning models [14–18]. Deep learning models have also achieved satisfactory results for brain tumour classification [19–22]. However, deep learning works well only with large amounts of data, training large and complex models can be expensive, and extensive hardware is needed to perform the underlying computations; hence, we chose machine learning rather than deep learning approaches.

The main objective of this study is to propose a medical diagnostic support system for brain tumour classification, i.e. to set up an automated system that can accurately classify the types of brain tumour from MRI using machine learning, and to show that, with feature engineering, accuracy comparable to deep learning can be achieved.

The primary benefit of utilising machine learning models is that they allow better interpretability of the classification model, because they are based on feature engineering. Feature extraction is a form of dimensionality reduction whose chief purpose is to capture the important characteristics of the raw data and represent them in a lower-dimensional space [23]. In this work we used 3 feature extraction methods: local binary patterns (LBP), the histogram of oriented gradients (HOG), and the grey-level co-occurrence matrix (GLCM). LBP is an effective descriptor for recognition and computer vision tasks [24]. It encodes the grey levels of an image by comparing each central pixel with its neighbours; the comparison results are read as a binary number whose decimal value replaces the central pixel value. HOG is an object descriptor that focuses on the structure or appearance of an object in an image; it provides distinguishing features even under lighting variation and background noise, which makes it an effective descriptor [24]. GLCM is one of several popular texture analysis techniques; it records how often specific grey levels occur in relation to other grey levels and measures the frequency of various sequences of grey-level values that appear in a region of interest [25]. The technique examines the association between adjacent pixels, one identified as the reference pixel and the other as its neighbour. Furthermore, we retained the most prominent discriminatory features of each extraction technique. Deep learning models, on the other hand, are black-box networks whose workings are extremely difficult to understand because of their complex design. Feature engineering is therefore essential in the medical and diagnostic field, because it lets doctors understand the importance and impact of each feature on the classification and identification of cancer types. The main aim of this research was to find the optimal performance in brain tumour classification on multiclass data across numerous training models, i.e. machine learning methods and their paradigms, and to carry out a comparative analysis among these models for brain tumour classification.
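
To make the three descriptors concrete, the following MATLAB sketch extracts LBP, HOG, and GLCM features from a single grey-scale MRI slice. It assumes the Image Processing and Computer Vision Toolboxes are available; the file name, resize resolution, HOG cell size, and GLCM offset are illustrative assumptions rather than the exact settings used in this study.

% Feature-descriptor sketch for one MRI slice (assumed, not the authors' exact settings)
I = imread('mri_slice.png');          % hypothetical file name
if size(I, 3) == 3
    I = rgb2gray(I);                  % ensure a single-channel grey-scale image
end
I = imresize(I, [128 128]);           % assumed working resolution

% LBP: each pixel is compared with its 8 neighbours; the default uniform
% encoding produces a 59-bin histogram (one value per uniform pattern).
lbpFeat = extractLBPFeatures(I);

% HOG: histograms of gradient orientations describing local structure;
% with 64 x 64 cells on a 128 x 128 image this yields a 36-element vector.
hogFeat = extractHOGFeatures(I, 'CellSize', [64 64]);

% GLCM: co-occurrence counts of grey levels between a reference pixel and its
% neighbour at the given offset; graycoprops returns 4 texture statistics
% (further Haralick-type statistics can be derived from the same matrix).
glcm = graycomatrix(I, 'Offset', [0 1], 'Symmetric', true);
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
glcmFeat = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity];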

The major contributions of this work are as follows:

  • for the first time, the concept of feature engineering is applied to the 4-class brain tumour classification problem;

  • three sets of features (GLCM, LBP, and HOG) are extracted from brain MRIs, and these features are then fed into various classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), naïve Bayes, tree, and ensemble classifiers;

  • all the classification methods are evaluated on single feature sets and on combined feature sets;

  • the combined feature set, i.e. GLCM + LBP + HOG, achieved 91.1% accuracy and 0.96 area under the curve (AUC) with fine KNN;

  • the proposed method gives very good performance even with a small dataset and is comparable to the deep learning approach.

Structure of the article: the Material and methods section describes the dataset and the approaches utilised; the experimental results are presented and discussed in the Result and Discussion sections; and the Conclusions section closes the article.

Material and methods

This section details the dataset and the adopted methodology.

About the dataset

The brain dataset investigated in this study was collected from the Figshare repository [3]. The dataset comprises T1-weighted MRI of no tumour and 3 different types of tumour: meningioma, glioma, and pituitary. The images have a resolution of 512 × 512 and were acquired in different views: axial (transverse plane), coronal (frontal plane), and sagittal (lateral plane). The class distribution consisted of 826, 822, 827, and 395 sample instances of glioma, meningioma, pituitary, and no tumour, respectively. Samples of the 3 tumour types and the no-tumour class are shown in Figure 2.
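
For the experiments below it is convenient to have the MRIs organised as one folder per class; the following MATLAB sketch assumes such a layout (the folder name 'dataset' and the export of the repository's images to ordinary image files are assumptions, not the repository's native format).

% Assumed on-disk layout: dataset/<class name>/<image files>
imds = imageDatastore('dataset', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

countEachLabel(imds)                  % per-class image counts, to compare with the figures above

% Show one image per class as a quick visual check (cf. Figure 2)
classNames = categories(imds.Labels);
figure;
for k = 1:numel(classNames)
    idx = find(imds.Labels == classNames{k}, 1);
    subplot(1, numel(classNames), k);
    imshow(readimage(imds, idx));
    title(classNames{k});
end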

Fig. 2

Samples of brain magnetic resonance imaging. 1st line: axial, 2nd line: coronal, 3rd line: sagittal; glioma (A), meningioma (B), pituitary (C), no tumour (D)


Methodology

Figure 3 provides an overview of the methodology used to classify brain tumours. A total of 2870 T1-weighted MRIs were utilised throughout all phases of this study. The features of the images were extracted before the machine learning system classified the images according to their class. The dataset was randomly split into 80% training data and 20% test data. The machine learning algorithms were trained using the image features of the training set, and the features of the test set were then used to evaluate the models’ performance. The models were run on an HP Pavilion (i5, 8 GB RAM, Windows 10) using the MATLAB 2021a platform.
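
The 80/20 hold-out split can be reproduced with a stratified partition. The sketch below assumes that a feature matrix X (one row of extracted features per image) and a categorical label vector Y have already been assembled; these variable names are placeholders.

% X: N-by-D feature matrix, Y: N-by-1 categorical labels (assembled beforehand)
rng(1);                                   % fix the random seed so the split is repeatable
cv = cvpartition(Y, 'HoldOut', 0.2);      % stratified 80% training / 20% test split
XTrain = X(training(cv), :);  YTrain = Y(training(cv));
XTest  = X(test(cv), :);      YTest  = Y(test(cv));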

Fig. 3

Overall workflow of the methodology for classification of brain tumour magnetic resonance imaging

GLCM – grey level co-occurrence matrix, HOG – histogram of oriented gradients, KNN – K-nearest neighbour, LBP – local binary patterns, MRI – magnetic resonance imaging, SVM – support vector machine


Features are measurable quantities that can be useful for predictive analysis and classification. The features contained in MRI are vital for disease diagnosis, and efficient feature extraction is crucial for improving diagnostic accuracy and cancer classification.

The extracted image features were fed into machine learning algorithms. SVM, KNN, tree, naïve Bayes, and ensemble classifiers were the machine learning techniques employed, and they were trained with the image features of the training set.
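
As a sketch of how these classifier families can be trained on the extracted features with MATLAB's Statistics and Machine Learning Toolbox, one representative model per family is fitted below; the hyperparameters are common defaults and are not claimed to match the Classification Learner presets reported in Tables 1 and 2.

% One representative model per classifier family (assumed settings)
svmMdl  = fitcecoc(XTrain, YTrain);                       % multiclass SVM via error-correcting output codes
knnMdl  = fitcknn(XTrain, YTrain, 'NumNeighbors', 1);     % fine KNN: single nearest neighbour
treeMdl = fitctree(XTrain, YTrain);                       % decision tree
nbMdl   = fitcnb(XTrain, YTrain);                         % naive Bayes
ensMdl  = fitcensemble(XTrain, YTrain, 'Method', 'Bag');  % bagged-trees ensemble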

In this paper we extracted 13 GLCM, 36 HOG, and 59 LBP features. To enhance the performance of the classification models, a feature fusion technique was introduced: the combinations GLCM + LBP, GLCM + HOG, HOG + LBP, and GLCM + HOG + LBP were fed into the classifiers and the performance was recorded.
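
Feature fusion here amounts to concatenating the per-image descriptor vectors. A minimal sketch, assuming the single-descriptor feature matrices glcmAll (N × 13), hogAll (N × 36), and lbpAll (N × 59) have been assembled with one row per image (these variable names are placeholders):

% Concatenate single descriptors into the fused feature sets
X_glcm_hog     = [glcmAll, hogAll];             % 13 + 36 = 49 features
X_glcm_lbp     = [glcmAll, lbpAll];             % 13 + 59 = 72 features
X_hog_lbp      = [hogAll, lbpAll];              % 36 + 59 = 95 features
X_glcm_hog_lbp = [glcmAll, hogAll, lbpAll];     % 13 + 36 + 59 = 108 features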

Result

This section presents the results obtained from the 5 classifiers investigated in this paper. The performance of the models was determined using the test data. To identify the most effective feature extraction strategy, we compared the performance of the models on single feature sets and on combined feature sets.

The performance of the 5 classifiers and their paradigms is recorded in Table 1 in terms of accuracy and AUC for the single feature sets GLCM, HOG, and LBP. Table 1 shows that the highest accuracy achieved with a single feature set is 85%, obtained by subspace KNN with the HOG features; the corresponding AUC is 0.97. With the GLCM features, subspace KNN achieved 60.8% accuracy and 0.81 AUC, and with the LBP features, 80.5% accuracy and 0.94 AUC. Among the remaining classifiers, the best results with a single feature set are 82.2% accuracy (fine KNN with HOG) and an AUC of 0.94 (fine Gaussian SVM with HOG).
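
Accuracy and AUC of the kind reported in Tables 1 and 2 can be computed for any trained model that returns per-class scores, as in the sketch below; the macro-averaged one-vs-rest AUC is an assumption about how the multiclass AUC is summarised, since the averaging scheme is not stated in the text.

% Evaluate a trained model (here the fine KNN) on the held-out test set
[yPred, scores] = predict(knnMdl, XTest);
accuracy = mean(yPred == YTest);                    % overall classification accuracy

% Macro-averaged one-vs-rest AUC (assumed multiclass summary); note that with
% a single neighbour the KNN scores are 0/1, so its ROC has one operating point.
classNames = knnMdl.ClassNames;
aucPerClass = zeros(numel(classNames), 1);
for k = 1:numel(classNames)
    [~, ~, ~, aucPerClass(k)] = perfcurve(double(YTest == classNames(k)), scores(:, k), 1);
end
meanAUC = mean(aucPerClass);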

Table 1

Brain tumour classification using magnetic resonance imaging based on a single feature set

Classifier | Paradigm | Accuracy: GLCM (%) | Accuracy: HOG (%) | Accuracy: LBP (%) | AUC: GLCM | AUC: HOG | AUC: LBP
SVM | Linear | 71.8 | 51.6 | 71.4 | 0.88 | 0.70 | 0.87
SVM | Quadratic | 76.0 | 69.5 | 78.9 | 0.91 | 0.85 | 0.92
SVM | Cubic | 79.3 | 76.3 | 76.3 | 0.93 | 0.89 | 0.92
SVM | Fine Gaussian | 77.7 | 73.2 | 64.1 | 0.90 | 0.94 | 0.87
SVM | Medium Gaussian | 72.8 | 71.6 | 76.5 | 0.89 | 0.87 | 0.91
SVM | Coarse Gaussian | 64.5 | 51.9 | 64.1 | 0.86 | 0.68 | 0.84
KNN | Fine KNN | 76.3 | 82.2 | 76.3 | 0.77 | 0.87 | 0.83
KNN | Medium KNN | 70.7 | 71.3 | 71.8 | 0.87 | 0.84 | 0.90
KNN | Coarse KNN | 63.1 | 55.9 | 63.1 | 0.84 | 0.75 | 0.86
KNN | Cosine KNN | 72.6 | 70.7 | 67.4 | 0.88 | 0.86 | 0.88
KNN | Cubic KNN | 71.1 | 70.6 | 71.1 | 0.87 | 0.83 | 0.90
KNN | Weighted KNN | 77.9 | 77.4 | 73.9 | 0.89 | 0.90 | 0.92
Naïve Bayes | – | 62.4 | 49.5 | 51.6 | 0.83 | 0.66 | 0.73
Tree | Fine tree | 67.9 | 53.3 | 67.1 | 0.83 | 0.75 | 0.83
Tree | Medium tree | 64.6 | 47.9 | 63.1 | 0.83 | 0.69 | 0.80
Tree | Coarse tree | 63.8 | 40.1 | 61.3 | 0.81 | 0.54 | 0.76
Ensemble | Boosted trees | 66.7 | 52.8 | 68.6 | 0.85 | 0.74 | 0.86
Ensemble | Bagged trees | 76.0 | 69.0 | 73.7 | 0.87 | 0.86 | 0.89
Ensemble | Subspace discriminant | 65.2 | 51.7 | 65.9 | 0.84 | 0.67 | 0.85
Ensemble | Subspace KNN | 60.8 | 85.0 | 80.5 | 0.81 | 0.97 | 0.94
Ensemble | RUSBoosted trees | 65.2 | 53.1 | 67.8 | 0.86 | 0.72 | 0.87

AUC – area under the curve, GLCM – grey level co-occurrence matrix, HOG – histogram of oriented gradients, KNN – K-nearest neighbour, LBP – local binary patterns, SVM – support vector machine

Discussion

In the second phase, the features extracted from the MRIs were combined, e.g. GLCM + HOG + LBP. The performance of the different combinations of feature sets is recorded in Table 2. Table 2 shows that fine KNN performed well for the classification of the 4 types of brain MRI: it achieved 88.7%, 80.3%, 88.9%, and 91.1% accuracy with the feature combinations GLCM + HOG, GLCM + LBP, HOG + LBP, and GLCM + HOG + LBP, respectively. The corresponding AUC values for fine KNN were 0.94, 0.85, 0.94, and 0.96. Overall, the highest performance recorded with the feature engineering approach (single or combined feature set) is 91.1% accuracy and 0.96 AUC, in the case of fine KNN. The confusion matrix and AUC curve are illustrated in Figure 4. Because the accuracy is more than 90% and the AUC more than 0.95, the proposed diagnostic system, i.e. fine KNN with the combined GLCM, HOG, and LBP features, is a very good diagnostic model.
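
Outputs in the style of Figure 4 (confusion matrix and per-class ROC curves for the best model) can be produced with standard MATLAB plotting functions, reusing the test-set predictions and scores from the evaluation sketch above:

% Confusion matrix of the fine KNN predictions (cf. Figure 4 A)
figure; confusionchart(YTest, yPred);

% One-vs-rest ROC curves per tumour class (cf. Figure 4 B)
figure; hold on;
classNames = knnMdl.ClassNames;
for k = 1:numel(classNames)
    [fpr, tpr] = perfcurve(double(YTest == classNames(k)), scores(:, k), 1);
    plot(fpr, tpr, 'DisplayName', char(classNames(k)));
end
plot([0 1], [0 1], 'k--', 'DisplayName', 'chance');
xlabel('False positive rate'); ylabel('True positive rate');
legend show;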

Fig. 4

Performance of fine K-nearest neighbour for brain tumour classification: (A) confusion matrix, (B) area under the curve

AUC – area under the curve, GLCM – grey level co-occurrence matrix, HOG – histogram of oriented gradients, KNN – K-nearest neighbour, LBP – local binary patterns, SVM – support vector machine

Table 2

Brain tumour classification using magnetic resonance imaging based on a combination of feature sets

Classifier | Paradigm | Accuracy: GLCM + HOG (%) | Accuracy: GLCM + LBP (%) | Accuracy: HOG + LBP (%) | Accuracy: GLCM + HOG + LBP (%) | AUC: GLCM + HOG | AUC: GLCM + LBP | AUC: HOG + LBP | AUC: GLCM + HOG + LBP
SVM | Linear | 72.8 | 77.4 | 73.3 | 79.1 | 0.91 | 0.92 | 0.92 | 0.93
SVM | Quadratic | 80.7 | 82.8 | 82.6 | 87.5 | 0.96 | 0.94 | 0.96 | 0.97
SVM | Cubic | 85.0 | 81.0 | 85.4 | 87.8 | 0.97 | 0.94 | 0.97 | 0.97
SVM | Fine Gaussian | 77.7 | 64.1 | 69.3 | 73.0 | 0.96 | 0.93 | 0.96 | 0.96
SVM | Medium Gaussian | 83.4 | 81.2 | 80.8 | 85.2 | 0.96 | 0.94 | 0.96 | 0.96
SVM | Coarse Gaussian | 69.7 | 68.5 | 68.3 | 71.4 | 0.90 | 0.88 | 0.90 | 0.89
KNN | Fine KNN | 88.7 | 80.3 | 88.9 | 91.1 | 0.94 | 0.85 | 0.94 | 0.96
KNN | Medium KNN | 76.5 | 80.0 | 79.8 | 81.4 | 0.95 | 0.94 | 0.95 | 0.96
KNN | Coarse KNN | 67.6 | 68.5 | 66.2 | 67.6 | 0.91 | 0.88 | 0.89 | 0.88
KNN | Cosine KNN | 76.0 | 75.8 | 75.4 | 79.4 | 0.94 | 0.92 | 0.93 | 0.94
KNN | Cubic KNN | 76.3 | 78.6 | 77.5 | 80.3 | 0.95 | 0.94 | 0.95 | 0.95
KNN | Weighted KNN | 82.8 | 82.2 | 83.4 | 86.4 | 0.97 | 0.95 | 0.97 | 0.97
Naïve Bayes | – | 66.7 | 56.1 | 57.5 | 56.8 | 0.86 | 0.77 | 0.78 | 0.77
Tree | Fine tree | 65.9 | 69.0 | 67.4 | 74.9 | 0.82 | 0.83 | 0.81 | 0.83
Tree | Medium tree | 62.2 | 64.6 | 63.4 | 67.9 | 0.84 | 0.82 | 0.82 | 0.82
Tree | Coarse tree | 60.1 | 65.7 | 62.2 | 64.6 | 0.79 | 0.81 | 0.77 | 0.79
Ensemble | Boosted trees | 68.8 | 72.3 | 70.2 | 72.8 | 0.87 | 0.86 | 0.90 | 0.88
Ensemble | Bagged trees | 79.3 | 75.6 | 75.6 | 81.9 | 0.92 | 0.91 | 0.90 | 0.94
Ensemble | Subspace discriminant | 69.5 | 73.7 | 71.1 | 76.0 | 0.89 | 0.90 | 0.90 | 0.90
Ensemble | Subspace KNN | 63.4 | 62.0 | 85.5 | 65.5 | 0.84 | 0.82 | 0.98 | 0.84
Ensemble | RUSBoosted trees | 66.9 | 70.7 | 71.3 | 69.0 | 0.86 | 0.87 | 0.90 | 0.86

Conclusions

Brain tumour classification is an active area of research for machine learning practitioners and medical practitioners, because the deep learning approach is a black-box method and medical practitioners cannot analyse which exact features of a brain MRI drive the classification. The feature engineering approach proposed in this article is competitive with the deep learning approach. The proposed method, i.e. fine KNN with the combined GLCM, HOG, and LBP features, achieved 91.1% accuracy and an AUC of 0.96. Furthermore, unlike deep learning, which requires a complex system, this model could be integrated into low-end devices. Finally, the performance of the classification models may be further improved by introducing optimisation techniques.

Notes

Conflicts of interest: The authors declare no conflict of interest.

References

1 

Paul JS, Plassard AJ, Landman BA, Fabbri D. Deep learning for brain tumor classification. Med Imaging 2017; 10137: 253-268.

2 

Abiwinanda N, Hanif M, Hesaputra ST, Handayani A, Mengko TR. Brain tumor classification using convolutional neural network. World Congress on Medical Physics and Biomedical Engineering 2018. Springer, Singapore 2019.

3 

Cheng J. Brain tumor dataset (version 5). Figshare. Available from: https://doi.org/10.6084/m9.figshare.1512427.v5.

4 

Abd El Kader I, Xu G, Shuai Z, Saminu S, Javaid I, Ahmad IS. Differential deep convolutional neural network model for brain tumor classification. Brain Sci 2021; 11: 352.

5 

Ali A, Hanbay D. Deep learning based brain tumor classification and detection system. Turk J Elect Engin Comp Sci 2018; 26: 2275-2286.

6 

Díaz-Pernas FJ, Martínez-Zarzuela M, Antón-Rodríguez M, González-Ortega D. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 2021; 9: 153.

7 

Saleh A, Sukaik R, Abu-Naser SS. Brain tumor classification using deep learning. 2020 International Conference on Assistive and Rehabilitation Technologies (iCareTech). IEEE, 2020.

8 

Deepak S, Ameer PM. Brain tumor classification using deep CNN features via transfer learning. Comp Biol Med 2019; 111: 103345.

9 

Swati ZNK, Zhao Q, Kabir M, et al. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput Med Imaging Graph 2019; 75: 34-46.

10 

Cheng J, Huang W, Cao S, et al. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS One 2015; 10: e0140381.

11 

Ismael MR, Abdel-Qader I. Brain tumor classification via statistical features and back-propagation neural network. 2018 IEEE international conference on electro/information technology (EIT) 2018, 0252-0257.

12 

Tahir B, Iqbal S, Usman Ghani Khan M, Saba T, et al. Feature enhancement framework for brain tumor segmentation and classification. Microsc Res Tech 2019; 82: 803-811.

13 

Havaei M, Jodoin PM, Larochelle H. Efficient interactive brain tumor segmentation as within-brain KNN classification. Proceedings of the 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; IEEE: Los Alamitos, CA, USA, 2014, 556-561.

14 

Paul JS, Plassard AJ, Landman BA, Fabbri D. Deep learning for brain tumor classification. Proc SPIE 2017; 10137: 1-16.

15 

Zhou Y, Li Z, Zhu H, et al. Holistic brain tumor screening and classification based on densenet and recurrent neural network. International MICCAI Brain lesion workshop. Springer, Berlin 2018, 208-217.

16 

Pashaei A, Sajedi H, Jazayeri N. Brain tumor classification via convolutional neural network and extreme learning machines. 2018 8th International conference on computer and knowledge engineering (ICCKE), 314-319.

17 

Abiwinanda N, Hanif M, Hesaputra ST, Handayani A, Mengko TR. Brain tumor classification using convolutional neural network. World Congress on Medical Physics and Biomedical Engineering 2018. Springer, Singapore 2019, 183-189.

18 

Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal 2017; 35: 18-31.

19 

Rajinikanth V, Kadry S, Nam Y. Convolutional-neural-network assisted segmentation and SVM classification of brain tumor in clinical MRI slices. Inform Technol Control 2021; 50: 342-356.

20 

Maqsood S, Damasevicius R, Shah FM. An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification. International Conference on Computational Science and Its Applications, Springer, Cham 2021, 105-118.

21 

Badjie B, Deniz Ülker E. A deep transfer learning based architecture for brain tumor classification using MR images. Inform Technol Control 2022; 51: 332-344.

22 

Maqsood S, Damaševičius R, Maskeliūnas R. Multi-modal brain tumor detection using deep neural network and multiclass SVM. Medicina 2022; 58: 1090.

23 

Saslow D, Boetes C, Burke W, et al. American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography. CA Cancer J Clin 2007; 57: 75-89.

24 

Skibbe H, Reisert M. Circular fourier-hog features for rotation invariant object detection in biomedical images. 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI) 2012, 450-453.

25 

Öztürk Ş, Akdemir B. Application of feature extraction and classification methods for histopathological image using GLCM, LBP, LBGLCM, GLRLM and SFTA. Proc Comp Sci 2018; 132: 40-46.

Copyright: © 2023 Termedia Sp. z o. o. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.
 