Comparison of Different Kernels in a Support Vector Machine to Classify Prostate Cancerous Tissues in T2-weighted Magnetic Resonance Imaging

  • Ahmad Shanei1,
  • Mahnaz Etehadtavakol1,
  • Mohammadreza Azizian1 and
  • Eddie Y.K. Ng2,* 
Exploratory Research and Hypothesis in Medicine   2023;8(1):25-35

doi: 10.14218/ERHM.2022.00013

Abstract

Background and objectives

A support vector machine (SVM) is one of the most powerful classifiers in machine learning and can be applied when a data set comprises two classes in a high-dimensional feature space. The objective of this study is to compare different kernels of SVMs for classifying prostate cancerous tissues.

Methods

In the present study, a novel algorithm was proposed to classify cancerous prostate tissues. Five of the 14 Haralick features were chosen as the most significant: contrast, correlation, homogeneity, energy, and entropy. From these, 17 features were calculated from each outlined region of interest (ROI) on the images. The dimensionality of the feature set was then reduced from 17 to 5 using the principal component analysis (PCA) technique, and the reduced features were given as inputs to an SVM algorithm for classification.

Results

The sensitivity of the SVM was 0.9565 with the radial basis function (RBF) kernel, while 0.9097 and 0.9028 were achieved with the Gaussian and linear kernels, respectively. The accuracies of the linear, RBF, and Gaussian kernels were 0.9028, 0.8405, and 0.8239, respectively.

Conclusions

The RBF kernel is preferable to the other examined kernels because it achieved the highest sensitivity and the second-highest accuracy.

Keywords

Prostate cancer, Classification, Support vector machine, Haralick features, Kernel

Introduction

Prostate cancer (PCa) is the most commonly diagnosed cancer and the second leading cause of cancer death among men in the USA.1–3 Approximately one in six men is diagnosed with PCa during his lifetime.4 Early detection of PCa can drastically increase patients’ survival and consequently decrease treatment costs.4 Therefore, in vivo imaging techniques play key roles in the identification and treatment of this cancer.5

Recent studies have shown that tumor volume is the most crucial predictor of cancer recurrence after radical prostatectomy.6–9 Tumor volume also predicts PCa progression, whether local recurrence or metastasis: patients who died from PCa had a significantly larger tumor volume than those who survived.10,11 Various biological markers, such as Gleason score, tumor stage, and surgical margin status, have been correlated with tumor volume. Recently, PCa imaging has improved significantly with the advent of multiparametric magnetic resonance imaging (MRI),4 which localizes PCa accurately and non-invasively compared with other imaging techniques, such as transrectal ultrasound.12 Multiparametric MRI can be applied for tumor detection, staging, evaluation of tumor spread, and treatment monitoring.13,14 Because multiparametric MRI can help identify low-grade tumors, it allows physicians to use an active monitoring program instead of choosing an invasive treatment.4 In addition, computer-aided detection may improve PCa diagnostic accuracy and help reduce variations in interpretation between physicians in a reproducible manner.5,13,15,16

Upgrading a computer-aided detection system based on multiparametric MRI and Gleason scores can help predict and identify patients for whom active monitoring is appropriate, and therefore supports appropriate treatment decisions.13–20 The assessment of tumor volume can be helpful in determining cancer risk before starting treatment and when choosing an appropriate treatment.14

Artificial intelligence (AI) is an area of computer science that focuses on the development of smart systems to accomplish tasks that currently need human intelligence. Among machine learning techniques, deep learning teaches computers to learn by example, something that human beings do inherently. AI is altering healthcare: in digital pathology, it helps researchers assess big data sets and deliver faster and more accurate diagnoses of PCa lesions. Applied to diagnostic imaging, AI has demonstrated excellent accuracy in detecting prostate lesions and in predicting patient outcomes for survival and treatment response.21 In a 2021 study, Chiu et al.22 investigated the value of machine learning in enhancing PCa diagnosis. They concluded that, using the same clinical parameters, machine learning approaches performed better than the European Randomized Study of Screening for Prostate Cancer risk calculator or prostate-specific antigen density in predicting clinically significant PCa and could avoid up to 50% of unnecessary biopsies.22 In 2021, Zhang et al.23 diagnosed prostate lesions in MRIs by combining machine learning methods and found that the accuracy of their method improved by approximately 20% compared with other methods.23 A comparison of related studies is provided in Table 1.5,24–26

Table 1

Literature table

Reference: Li et al.24
Approach and algorithm: ROIs were identified through radiological–pathological correlation. Eleven parameters were derived from the multiparametric MRI, and histogram analysis (mean, median, 10th percentile, skewness, and kurtosis) was performed for each parameter.
Results: The prediction model yielded an area under the receiver operating characteristic curve (AUC) of 0.99 (95% CI: 0.98–1.00) when trained in dataset A2 and 0.91 (95% CI: 0.85–0.95) for the validation in dataset B2. When the data sets were reversed, an AUC of 0.99 (95% CI: 0.99–1.00) was obtained when the model was trained in dataset B2 and 0.90 (95% CI: 0.85–0.95) for the validation in dataset A2.
Conclusion: SVM classification based on multiparametric MRI (mp-MRI)-derived image features obtains consistently accurate classification of the Gleason score (GS) of PCa in the central gland (CG).

Reference: Chang et al.25
Approach and algorithm: An active contour model was used to segment the prostate. 136 features were extracted from the dynamic MRIs after injection at different times and transformed into relative intensity change (RIC) curves. Ten discriminative features were selected by Fisher’s discrimination ratio (FDR) and sequential forward floating selection (SFFS). Finally, an SVM classified the segmented prostate into two categories.
Results: The accuracy of the proposed method was up to 94.7493%.
Conclusion: The best texture features and the combination of features using RIC can assist the urologist in classifying PCa.

Reference: Shah et al.26
Approach and algorithm: Cancer and normal regions were identified in the peripheral zone. Segmented regions on the mp-MRI were correlated to histopathology and used as training sets. A genetic algorithm (GA) was used to find the optimal values for a set of parameters, and finally a cancer probability map was generated.
Results: The nonoptimized system had an F-measure of 85% and a Kappa coefficient of 71%. After optimizing the SVM parameters using a GA, the decision support system (DSS) had an F-measure of 89% and a Kappa coefficient of 80%, i.e., a 4% increase in the F-measure and a 9% increase in the Kappa coefficient.
Conclusion: The DSS provides a cancer probability map for peripheral-zone prostate tumors based on endorectal mp-MRI, which can potentially aid radiologists in accurately localizing peripheral-zone PCas.

Reference: Artan et al.5
Approach and algorithm: A new segmentation method was developed by combining conditional random fields (CRF) with a cost-sensitive framework.
Results: Additional parameters were used to control class-related costs in the SVM formulation, which allowed them to increase overall segmentation accuracy. Three training schemes were used and their performances compared.
Conclusion: Multispectral MRI helped to increase the accuracy of PCa localization, and using a cost-sensitive SVM and the proposed cost-sensitive CRF can boost performance significantly compared with a standard SVM.

The objective of this study was to compare different kernels of support vector machines (SVMs) for classifying cancerous prostate tissues. The contributions of this study are that it: (1) compared different SVM kernels for the classification of prostate cancer; and (2) used our own dataset rather than data from public databases.

Methods

This study was performed in distinct steps. First, patients with cancerous tumors were selected, and the regions of interest (ROIs) on their images were contoured by an experienced radiologist. These ROIs were then outlined on the T2 WMR sequences using the freehand command in MATLAB. Next, the area-based features were extracted, and the number of extracted features was reduced using the PCA technique. Finally, the reduced features were used as inputs for the SVM classifier. Figure 1 shows the steps followed in this work.

Fig. 1  The flow chart of this study.

PCA, principal component analysis; ROI, region of interest; SVM, support vector machine; T2-W MRI, T2 weighted magnetic resonance imaging.

The study was performed following the Helsinki Declaration on ethical principles for medical research involving human subjects and was approved by the Institutional Committee for Ethics in Biomedical Research of the Isfahan University of Medical Sciences (approval ID: IR.MUI.MED.REC.1398.437). Informed consent was obtained from all participants included in the study.

Image preparation and data collection

The data set used in this study included patients with suspected PCa who underwent a series of biopsies from March 21, 2018, to September 22, 2019, at the Baradaran Pathology Center in Isfahan, Iran. Patients were excluded if they had previously been treated for PCa, including surgery, hormone therapy, radiation therapy, or other treatment modalities. All patients underwent MRI examination at least 3 weeks post-biopsy. MRI was performed on a 1.5 Tesla Magnetom Aera Siemens system with the following parameters: repetition time (TR) = 3,400 ms, echo time (TE) = 113 ms, field of view (FOV) = 220 mm, slice thickness = 3.5 mm, interslice gap = 0.65 mm, number of slices = 20–26, acquired matrix = 320 × 310.

Patients with implanted devices, such as pacemakers, clips, or artificial heart valves, or with other foreign bodies, were not included in our study because of the possible artifacts in their images. Figure 2 shows a flow chart of patients’ inclusion and exclusion.

Fig. 2  The flow chart of patients’ inclusion and exclusion.

In this study, the radiologist reviewed the patient’s pathology report as a baseline reference and identified the approximate tumor location. If the tumor was in the peripheral zone, it was identified in the apparent diffusion coefficient map and then located in the T2 WMR sequence by matching the main coordinates of the tumor. Table 2 shows the details of our data set according to Gleason scores.

Table 2

Details of data set according to Gleason scores

Gleason score                3 + 3    3 + 4    4 + 3    4 + 4    4 + 5    5 + 4    Total
Number of patients           7        13       15       1        8        4        48
Tumor volume (mL)
  Mean                       1.21     5.55     6.31     19.99    7.67     16.18    6.70
  Median                     0.94     3.67     6.10     19.99    4.37     13.22    3.66
  Standard deviation         0.80     5.50     4.73     0.00     7.93     14.61    7.34
  Minimum                    0.50     0.57     1.15     19.99    2.16     1.93     0.50
  Maximum                    2.86     16.59    14.54    19.99    25.95    36.35    36.35
  Range                      2.35     16.01    13.38    0.00     23.79    34.41    35.84
Age of patients (years)
  Mean                       58.71    69.15    64.40    69.00    65.50    63.50    65.06
  Median                     67.00    68.00    64.00    69.00    71.50    64.50    66.00

The signal-to-noise ratio (SNR) is a standard measure used to describe the performance of an MRI system. The most common approach to measuring SNR requires two separate ROIs from a single image: one in the tissue of interest and one in the image background, for example, in air outside the imaged object. These two regions yield the mean signal of the tissue ROI (Mean_GM) and the standard deviation of the background ROI (SD_air), respectively. According to the following equation, SNR = 26 was obtained for our system:

$$\mathrm{SNR} = \frac{\mathrm{Mean}_{\mathrm{GM}}}{\mathrm{SD}_{\mathrm{air}}}$$
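As an illustration only, the two-region SNR measurement described above can be scripted directly. The following is a minimal Python sketch (the analysis in this study was performed in MATLAB); the image and ROI coordinates are hypothetical placeholders.

```python
import numpy as np

def two_region_snr(image, tissue_roi, background_roi):
    """Estimate SNR as mean(tissue ROI) / std(background ROI).

    image: 2-D NumPy array of pixel intensities.
    tissue_roi, background_roi: (row_slice, col_slice) tuples.
    """
    tissue = image[tissue_roi]
    background = image[background_roi]
    return tissue.mean() / background.std()

# Hypothetical example: tissue ROI near the image centre, background ROI in air.
image = np.random.default_rng(0).normal(loc=100, scale=4, size=(320, 310))
snr = two_region_snr(image,
                     (slice(140, 180), slice(140, 180)),
                     (slice(0, 30), slice(0, 30)))
print(f"SNR = {snr:.1f}")
```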

MATLAB image processing

Preprocessing of MRIs

The purpose of preprocessing is to normalize the MRIs, that is, to adjust pixel intensities so that all the studied images share the same intensity range. This normalization was necessary because the DICOM images studied in this work had different intensity ranges, for instance, some 16-bit images with a range of 0–65,535 (2^16 levels) and some with a range of 0–1,023 (2^10 levels). Before applying any further processing, the images were normalized to the range 0–1 and then transferred to the interval 0–255 by multiplying by 255. Normalization of the images with wider ranges was accompanied by a loss of information, which caused them to fade and lose their diagnostic value. Therefore, 12 images were removed from our data set of 63 images.
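A minimal sketch of this normalization step in Python follows (the original processing was done in MATLAB); pydicom is assumed to be available for reading the DICOM files, and the rescaling simply maps each image’s own intensity range to 0–255 as described above.

```python
import numpy as np
import pydicom  # assumed available for reading the DICOM files

def normalize_to_uint8(dicom_path):
    """Map a DICOM image's intensity range to 0-255, as described in the text."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float64)
    lo, hi = pixels.min(), pixels.max()
    scaled = (pixels - lo) / (hi - lo)      # normalize to the range 0-1
    return (scaled * 255).astype(np.uint8)  # transfer to the interval 0-255
```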

ROI outlining

All MRIs were interpreted, and their ROIs were demarcated by a radiologist with 10 years of experience in MRI interpretation. The ROIs were then outlined on the images using the freehand command in MATLAB. The radiologist reviewed the patient’s pathology report as a baseline reference and identified the approximate tumor locations. If the tumor was in the peripheral zone, it was first detected on the apparent diffusion coefficient maps and then located in the T2 WMR sequence by matching the original coordinates of the tumor. In addition, after removing the tumor parts that the radiologist identified, noncancerous ROIs were identified on the remaining image. Figure 3 shows a studied image with an ROI outlined using the freehand command in MATLAB.

Fig. 3  Prostate MR image of a 70-year-old cancer patient with a Gleason score 9 (4 + 5) and stage 3 (Case 1): (a) outlined with a freehand ROI tool on T2 WMR sequence; and (b) cropped ROI (22 × 38 pixels).

MR, magnetic resonance; ROI, region of interest; T2 WMR, T2 weighted magnetic resonance.

If the tumor was detected in more than one slice, all the ROIs in the different slices were considered for feature extraction and tumor volume estimation. Of note, features were extracted from both cancerous and noncancerous areas; therefore, each selected image contained a cancer-related ROI and a noncancerous ROI. The number of cancer-related ROIs differed between patients. In four slices, two separate ROIs were outlined because the tumor regions were disconnected.

Feature extraction

The extraction of appropriate features plays an important role in this study. Previous studies have shown that choosing appropriate features for classification is more important than choosing the classifier itself.19 Feature extraction provides parameters that can be used to classify an area into two classes, for example, normal and abnormal.27 In total, 14 Haralick features have been designed based on a mathematical method; they capture image properties that are not detected by the human eye.28 Successful applications of this method have been demonstrated in various fields.29 Haralick feature extraction involves two steps: first, calculating the gray-level co-occurrence matrix, and second, calculating the features from that matrix. The gray-level co-occurrence matrix is constructed by comparing the values of neighboring pixels in four different directions. It is a square matrix whose size is determined by the number of gray levels. Usually, the angular directions used are 0°, 45°, 90°, and 135°. The neighborhood relationships between pixels needed to calculate the gray-level co-occurrence matrix are shown in Figure 4.

Fig. 4  Four angles (0°, 45°, 90°, and 135°) to calculate the gray-level co-occurrence matrix.

Wibmer et al.,29 in a study of T2-weighted and apparent diffusion coefficient images from 147 PCa patients undergoing MRI, found that five Haralick features (energy, entropy, correlation, homogeneity, and contrast) were more useful for diagnosing PCa than the rest.29 The formulas for these five features are provided in Table 3.

Table 3

Formulas of the five features identified by Wibmer et al.29

Feature        Formula
Contrast       $\sum_{i,j} |i-j|^{2}\, p(i,j)$
Correlation    $\sum_{i,j} \frac{(i-\mu_i)(j-\mu_j)\, p(i,j)}{\sigma_i \sigma_j}$
Energy         $\sum_{i,j} p(i,j)^{2}$
Homogeneity    $\sum_{i,j} \frac{p(i,j)}{1+|i-j|}$
Entropy        $-\sum_{i,j} p(i,j)\, \log_2 p(i,j)$

In this study, 17 features were extracted from each outlined ROI of our dataset. Hence, our feature set was a matrix with 202 rows and 17 columns, where the rows represent the observations and the columns represent the features. Table 4 presents the 17 features extracted from the outlined ROIs of four cases.

Table 4

Seventeen extracted features (contrast, correlation, energy, and homogeneity in the 0°, 45°, 90°, and 135° directions, plus entropy) for four cases

Case descriptions:
  Case 1: A 70-year-old cancer patient with a Gleason score 9 (4 + 5) and stage 3
  Case 2: A 76-year-old cancer patient with a Gleason score 7 (4 + 3) without radical prostatectomy
  Case 3: A 61-year-old cancer patient with a Gleason score 7 (4 + 3) without radical prostatectomy
  Case 4: A 49-year-old cancer patient with a Gleason score 7 (4 + 3) without radical prostatectomy

Feature extracted from ROI    Case 1      Case 2      Case 3      Case 4
Contrast 0°                   0.164619    0.176594    0.348214    0.231724
Contrast 45°                  0.302445    0.245690    0.393651    0.328571
Contrast 90°                  0.249373    0.186458    0.433333    0.250000
Contrast 135°                 0.235521    0.252155    0.596825    0.308571
Correlation 0°                0.822807    0.676695    0.661856    0.653724
Correlation 45°               0.674999    0.547509    0.615627    0.497307
Correlation 90°               0.731569    0.658805    0.573956    0.625746
Correlation 135°              0.747144    0.535608    0.417222    0.528215
Energy 0°                     0.342646    0.367774    0.295564    0.325641
Energy 45°                    0.296118    0.333209    0.252084    0.289212
Energy 90°                    0.318343    0.357819    0.260073    0.322591
Energy 135°                   0.313712    0.331743    0.218342    0.309306
Homogeneity 0°                0.935708    0.921456    0.891865    0.893333
Homogeneity 45°               0.893393    0.891523    0.845503    0.849048
Homogeneity 90°               0.910401    0.915104    0.843939    0.887821
Homogeneity 135°              0.907979    0.889727    0.802381    0.866667
Entropy                       0.326115    0.289835    0.373173    0.341325
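For illustration, a 17-element feature vector of this kind could be computed per ROI as in the following Python sketch using scikit-image (version 0.19 or later); the study’s own extraction was performed in MATLAB. The single, non-directional entropy value is computed here from the ROI intensity histogram, which is one plausible reading of the entropy reported in Table 4, and the distance of 1 pixel is an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def haralick_17(roi_uint8):
    """Return a 17-element feature vector for one ROI (uint8 image, 0-255)."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    glcm = graycomatrix(roi_uint8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    features = []
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        features.extend(graycoprops(glcm, prop)[0])  # one value per angle
    # Shannon entropy of the ROI intensity histogram (base 2)
    p = np.bincount(roi_uint8.ravel(), minlength=256) / roi_uint8.size
    p = p[p > 0]
    features.append(-np.sum(p * np.log2(p)))
    return np.array(features)  # shape (17,)
```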

Dimensionality reduction using PCA

PCA is a popular linear method for dimensionality reduction. It performs a linear mapping of the data to a lower-dimensional space such that the variance of the projected data is maximized. The steps involved in this technique are: (1) constructing the covariance matrix of the data; (2) computing the eigenvectors of that matrix; (3) choosing the eigenvectors corresponding to the largest eigenvalues; and (4) using the chosen eigenvectors to transform the data into the new space. The advantages of PCA are: (1) low sensitivity to noise; (2) reduced memory requirements and number of operations; (3) increased efficiency; and (4) no data redundancy.
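A minimal NumPy sketch of steps (1)–(4) is shown below; it assumes the 202 × 17 feature matrix has already been assembled, and the variable names are illustrative (the study’s own analysis used MATLAB).

```python
import numpy as np

def pca_reduce(X, n_components=5):
    """Steps (1)-(4) above: covariance, eigenvectors, selection, projection."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)            # (1) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # (2) eigen-decomposition
    order = np.argsort(eigvals)[::-1][:n_components]  # (3) largest eigenvalues
    components = eigvecs[:, order]
    return X_centered @ components                    # (4) project the data

# Hypothetical usage with a 202 x 17 feature matrix:
X = np.random.default_rng(1).normal(size=(202, 17))
X_reduced = pca_reduce(X, n_components=5)             # shape (202, 5)
```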

In this method, the features are transformed into a new set that is a linear combination of the original features. This new set of features is known as the principal components. They are ordered so that the first principal component describes as much of the variation in the original features as possible. The second principal component is orthogonal to the first; specifically, it captures the variance in the data that is not captured by the first principal component.

In this study, the dimensionality of the feature set was reduced from 17 to 5 using the PCA technique. The PCA algorithm identified five significant principal components that achieved the same classification accuracy as the full set of 17 features. The new reduced feature set is presented in Table 5.

Table 5

New reduced feature set of the data presented in Table 4, from dimension 17 to dimension 5

Feature    Case 1     Case 2     Case 3     Case 4
1          0.2585     0.1754     0.9891     0.3744
2          1.4314     1.1170     1.1419     1.0830
3          1.2456     1.3509     1.3328     1.2486
4          −0.2127    −0.1810    −0.2705    −0.1960
5          0.2326     0.1539     0.2885     0.2390

Classification with a machine learning algorithm

SVMs are supervised machine learning algorithms that can analyze data for classification.30–33 In the SVM algorithm, each observation is represented as a point in space and mapped so that the observations of the two classes are separated by a margin that is as wide as possible. A new test sample is mapped into the same space and assigned a class according to the side of the margin on which it lands. An SVM therefore classifies data by finding the best hyperplane that separates all data points of one class from those of the other class, where the best hyperplane is the one with the largest margin between the two classes. In this binary classification task, the dataset consisted of 202 observations, each described by a five-dimensional feature vector x_i and a target variable y_i that was either −1 or 1, depending on whether or not the observation belonged to the cancerous class. An SVM can also employ a hyperplane that separates many, but not all, data points. Using these data, the SVM learns the parameters of a hyperplane that separates the space into two parts, one for the observations of the cancerous class and the other for the noncancerous class. Among all possible hyperplanes that separate the two classes, the SVM learns the one that separates them the most, leaving as much margin as possible between each class and the hyperplane. SVMs can also take advantage of kernel functions, which return the scalar product between two vectors in a transformed space without needing to compute the coordinates of those vectors. This allows SVMs to work with data that are not linearly separable: the data are implicitly mapped to a higher-dimensional space in which they may be linearly separable, and the separating hyperplane is found there. Consequently, SVMs do not explicitly calculate the parameters of the hyperplane in that space; instead, they remember the observations needed to define it, called support vectors, and, when new input data arrive, they compute the scalar products between these support vectors and the input data. Of our 202 observations (101 cancerous and 101 noncancerous), 80% were allocated to training and the remaining 20% to testing.

This division was randomized; therefore, the classification was not biased. In the training step, the training data were given to the SVM classifiers together with their class labels to build an appropriate model. Then, in the test step, the class determined by the algorithm for each test observation was assigned based on the model built in the training step.30 Comparing the radial basis function (RBF) and linear kernels, the RBF kernel adds an extra hyperparameter to tune; however, in contrast to the linear kernel, it maps the data to a higher dimension, so the SVM can represent a nonlinear separation. First, the SVM was trained and the classifier was cross-validated; then, the trained machine was used to classify the test data. In addition to linear classification, SVMs can accomplish nonlinear classification using kernels, such as the Gaussian and RBF kernels, which map the data to high-dimensional spaces.24,30,34,35 In this study, to obtain satisfactory predictive accuracy, three different SVM kernel functions were employed and the parameters of the kernel functions were tuned.
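A minimal scikit-learn sketch of this kind of kernel comparison is given below, assuming the reduced 202 × 5 feature matrix X and labels y (+1 cancerous, −1 noncancerous) are available; the randomized 80/20 split and 10-fold cross-validation mirror the setup described above, while the hyperparameter grid is only illustrative. Note that scikit-learn’s "rbf" kernel is the Gaussian RBF; the separate Gaussian and RBF kernels reported in this study refer to the MATLAB implementation and its kernel-scale settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Hypothetical data standing in for the reduced 202 x 5 feature matrix.
rng = np.random.default_rng(2)
X = rng.normal(size=(202, 5))
y = np.repeat([1, -1], 101)                      # +1 cancerous, -1 noncancerous

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)   # randomized 80/20 split

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for kernel in ("linear", "rbf"):                 # "rbf" is the Gaussian RBF kernel
    param_grid = {"svc__C": [0.1, 1, 10]}
    if kernel == "rbf":
        param_grid["svc__gamma"] = ["scale", 0.1, 1]    # kernel-width tuning
    model = make_pipeline(MinMaxScaler(), SVC(kernel=kernel))
    search = GridSearchCV(model, param_grid, cv=cv, scoring="accuracy")
    search.fit(X_train, y_train)
    print(kernel, search.best_params_,
          f"test accuracy = {search.score(X_test, y_test):.3f}")
```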

Results

Several solutions were applied to improve the accuracy of the SVM classifier; these solutions are discussed in the Discussion section. The results obtained after applying them are given in Table 6. Accuracy, sensitivity, and specificity were calculated using the following equations:

$$\mathrm{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}$$
$$\mathrm{Sensitivity} = \frac{TP}{TP+FN}$$
$$\mathrm{Specificity} = \frac{TN}{TN+FP}$$
where TP is the cancerous area that is correctly classified by the classifier; TN is the noncancerous area that is correctly classified by the classifier; FP is the noncancerous area that the classifier has mistakenly identified as cancerous; and FN is the cancerous area that the classifier has mistakenly identified as noncancerous.
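These three measures follow directly from the confusion-matrix counts; a small Python sketch using scikit-learn’s confusion_matrix, with hypothetical label vectors, is shown below.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical example labels: 1 = cancerous, 0 = noncancerous.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}, "
      f"specificity={specificity:.3f}")
```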

Table 6

SVM classification results with Gaussian, RBF, and linear kernels for K fold = 5 and 10

                                  Cross-validation K fold = 10               Cross-validation K fold = 5
                                  SVM–Gaussian   SVM–RBF    SVM–Linear       SVM–Gaussian   SVM–RBF    SVM–Linear
Sensitivity                       0.9097         0.9565     0.9028           0.8746         0.8806     0.8699
Specificity                       0.8750         0.8054     0.8738           0.7545         0.7750     0.8653
Accuracy                          0.8239         0.8405     0.9028           0.8115         0.8272     0.8679
Standard deviation of accuracy    0.0636         0.0401     0.04150          0.07452        0.06328    0.08561

As shown in Table 6, for cross-validation with K fold = 10, the sensitivity of the SVM was 0.9565 with the RBF kernel, while 0.9097 and 0.9028 were achieved with the Gaussian and linear kernels, respectively. The accuracies of the linear, RBF, and Gaussian kernels were 0.9028, 0.8405, and 0.8239, respectively, for K fold = 10. All values for K fold = 10 were greater than the corresponding values for K fold = 5. A flow chart of the cross-validation process for K fold = 5 is shown in Figure 5.

Fig. 5  The flow chart of the cross-validation process for K fold = 5.

Discussion

In this study, machine learning was chosen because computer-aided detection and diagnosis achieved by machine learning algorithms can help physicians interpret medical imaging findings and reduce interpretation times. Recently, the availability of large datasets, accompanied by improvements in algorithms and advances in computing power, has created a great deal of interest in machine learning. Currently, machine learning algorithms are successfully used for medical image classification. Some popular machine learning approaches for classification tasks are SVMs, artificial neural networks (ANNs), and deep learning. The difference between ANNs and SVMs is mostly related to how nonlinear data are classified. SVMs employ nonlinear kernel mappings to make the data linearly separable; consequently, the kernel function is key, and in this study we compared different kernels. ANNs, in contrast, utilize multilayer connections and various activation functions to deal with nonlinear problems.

Byvatov et al.36 showed that SVMs outperformed ANN classifiers in overall prediction accuracy. SVMs also demonstrated classification accuracies superior to neural classifiers in many experiments in a study by Arora et al.37 Phangtriastu et al.38 achieved their highest accuracy of 94.43% using an SVM classifier with feature extraction algorithms.38 Hasan et al.39 compared SVM and convolutional neural network models for hyperspectral image classification and concluded that SVM classifiers showed better accuracy. Further studies that applied SVMs include the following. (1) Siqueira et al.31 in 2018 assessed the performance of SVM classification for stratifying the Gleason score of PCa in the central gland based on image features across multiparametric MRI. They used PCA, the successive projections algorithm, and a genetic algorithm (GA) followed by SVM, combined with Fourier transform mid-infrared spectroscopy, presented as complementary or alternative tools to the traditional methods for PCa screening and classification. They concluded that SVM classification based on magnetic resonance imaging achieved accurate classification of the Gleason score of PCa in the central gland and found GA–SVM to be the best classification approach, with higher sensitivity (100%) and specificity (80%), particularly in the early stages, which was better than traditional methods of diagnosis.31 In comparison, we achieved a sensitivity of 95.65% and a specificity of 80.54% with the RBF kernel, and a sensitivity of 90.97% and a specificity of 87.50% with the Gaussian kernel; moreover, our model is not limited to early stages and could be applied to different stages. (2) In 2021, Rustam et al.32 used two classification methods, random forest (RF) and SVM, to diagnose PCa. They found that the accuracy of the RF reached 97.30% with 80% of the data used for training and a running time of 0.06 s, while the SVM reached 83.33% with 90% of the data used for training and a running time of 0.05 s.32 Their dataset was from Al-Islam Bandung Hospital in Bandung, West Java, Indonesia, and they did not mention the kernel used for the SVM algorithm. In comparison, we achieved an accuracy of 90.28% with a linear kernel in <0.05 s. (3) Our results may also be compared with a recent study by Zhang et al.,23 who combined improved GrowCut and Zernike feature extraction with ensemble learning techniques, such as k-nearest neighbor (KNN), SVM, and multilayer perceptron (MLP) algorithms, for prostate cancer detection and lesion segmentation in MRI. The accuracy of prostate cancer detection with their proposed method was 80.97%, and the accuracies of the linear regression (LR), feed-forward neural network, SVM, Naïve Bayes, and RF methods were 80%, 77%, 72%, 78%, and 80%, respectively. We achieved greater accuracy with all three kernels for both cross-validations with K fold = 5 and 10. In some recent studies, deep learning algorithms were used for classification tasks.40–44 Because of the limited number of images available to us, deep learning algorithms were not chosen at this step of our research; we will collect more images and use deep learning algorithms in future work to extend this research.

Machine learning commonly begins with the algorithm computing the image features that are important for making the diagnosis of interest; the algorithm can then identify the best combination of these image features for classification. Feature selection is the procedure of selecting important features from the data so that the output of the model is accurate and meets the requirements. Statistical, filter-based feature selection methods evaluate the relationship between each input variable and the target variable using statistics and select those input variables that have the strongest relationship with the target variable.40 Feature extraction, in turn, is a process of transforming raw data into numerical features that can be processed while preserving the information in the original data set; it generates better results than applying machine learning directly to the raw data. It can be challenging for a machine learning practitioner to select an appropriate statistical measure for a dataset when performing filter-based feature selection.40
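Although this study used PCA (a feature transform) rather than filter-based selection, a minimal sketch of the statistical, filter-based selection described above could look like the following, with the ANOVA F-test as an assumed scoring statistic and a hypothetical 202 × 17 feature matrix.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical 202 x 17 feature matrix and binary labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(202, 17))
y = np.repeat([1, 0], 101)

selector = SelectKBest(score_func=f_classif, k=5)  # keep the 5 highest-scoring features
X_selected = selector.fit_transform(X, y)          # shape (202, 5)
print("selected feature indices:", selector.get_support(indices=True))
```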

In this study, the accuracy of the SVM classifier was determined by dividing the number of test cases whose predicted labels matched their actual labels by the total number of test cases. Some solutions to increase the accuracy of the SVM classifier are as follows:

  • Increase the amount of data;

  • Increase the number of selected features;

  • Transform features, for example, by rescaling variables from their original scale to a scale between zero and one, or by normalizing them;

  • Optimize within n-fold cross-validation;

  • Use different kernels.

For the first solution, there was no access to more data. For the second, previous studies indicate that when classification accuracy in a given feature space is not satisfactory and the features may overlap, it is necessary to define and extract new features. Moreover, previous studies have shown that entropy in T2 WMR images is one of the most important features for differentiating cancerous from noncancerous prostate tissue.34 It was therefore decided to add entropy to the features to improve the accuracy of the work, bringing the number of features to 17. The third solution was also applied, and all features were normalized. For cross-validation, the K fold was changed from 5 to 10. Cross-validation is a helpful tool when the size of the data set is limited: it splits the data set iteratively into a training set and a test set, so the fit of the model can still be assessed despite the limited data. The prediction errors from each of the test sets are then averaged to obtain the expected prediction error for the whole model.

In this study, feature reduction was chosen. Feature reduction, also known as dimensionality reduction, is the process of reducing the number of features, and thus the computational burden, without losing important information. Reducing the number of features means the number of variables is reduced, making the computer’s task easier and faster. In this study, PCA was chosen, which is a rotation of the data from one coordinate system to another: in the new coordinate system, the first dimension has the maximum possible variance, the second dimension has most of the remaining variance, and so on. PCA is therefore a feature transform rather than a feature selection method: feature selection merely selects or excludes given features without changing them, whereas dimensionality reduction transforms the features into a lower dimension.

Limitations and future directions

Because of the limited number of images in this study, deep learning algorithms were not chosen at this step of our research. More images will be collected, and deep learning algorithms will be used in future work to extend this research. In addition, collecting dynamic contrast-enhanced MRI and diffusion-weighted imaging in combination could potentially improve the accuracy in future work. For ROI segmentation, the work will also be improved by using another model for fully automatic segmentation; an active contour algorithm, such as the snake, will be considered. A snake model is an approach that can solve many segmentation problems.45,46 The model’s primary function is to determine and outline the target object for segmentation, and it requires some prior knowledge of the target object’s shape, especially for complicated subjects. Active contour models, often known as snakes, are energy-minimizing splines guided by various forces derived from the image.

In future work, two feature selection methods, sequential feature selection and symmetrical uncertainty, will be considered to compare their computational time and accuracy. This study could also be continued to estimate tumor volume by classifying cancerous tissues and separating them from noncancerous tissue in the relevant MRI slices. The tumor area could be calculated in each relevant slice, the tumor volume in each slice could be obtained by multiplying the tumor area in that slice by the slice thickness, and the total tumor volume could be estimated by summing the tumor volumes over all slices. This study will be continued by comparing and evaluating the correlation between tumor volume and the Gleason biopsy score in the following stages of the research.
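As a sketch of this proposed volume estimate, the following Python snippet assumes per-slice binary tumor masks are available; the pixel spacing is only approximated from the reported FOV and acquisition matrix, the slice thickness is taken from the Methods, and the handling of the interslice gap is left out, as in the description above.

```python
import numpy as np

def tumor_volume_ml(masks, pixel_spacing_mm=(0.69, 0.71), slice_thickness_mm=3.5):
    """Estimate tumor volume from per-slice binary masks.

    masks: iterable of 2-D boolean arrays, one per slice containing tumor.
    The per-slice area (pixel count x pixel area) is multiplied by the slice
    thickness and summed over slices, as described in the text.
    """
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    volume_mm3 = sum(np.count_nonzero(m) * pixel_area_mm2 * slice_thickness_mm
                     for m in masks)
    return volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical usage with two slices containing tumor:
masks = [np.zeros((320, 310), bool), np.zeros((320, 310), bool)]
masks[0][150:170, 140:165] = True
masks[1][152:168, 142:160] = True
print(f"estimated volume = {tumor_volume_ml(masks):.2f} mL")
```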

Conclusions

In this study, the feature set comprised 17 features extracted from the demarcated ROIs. The number of features was reduced from 17 to 5 by the PCA technique; the PCA algorithm identified five significant components that achieved the same classification accuracy as the full set of 17 features. The RBF, Gaussian, and linear kernels were used in the SVM classification, and the highest sensitivity and the second-highest accuracy were achieved using the RBF kernel.

Abbreviations

ANN: 

artificial neural network

PCA: 

principal component analysis

PCa: 

prostate cancer

ROI: 

region of interest

SNR: 

signal to noise ratio

T2 WMR: 

T2 weighted magnetic resonance

SVM: 

support vector machine

Declarations

Acknowledgement

We would like to thank the Baradaran Pathology Center in Isfahan, Iran for providing us with the data set.

Data sharing statement

The dataset used in support of the findings of this study is not available because we do not have permission from the Baradaran Pathology Center to share it.

Ethical statement

The study was performed following the Helsinki Declaration on ethical principles for medical research involving human subjects and was approved by the Institutional Committee for Ethics in Biomedical Research of the Isfahan University of Medical Sciences (approval ID: IR.MUI.MED.REC.1398.437). Written informed consent was obtained from all individual participants to be included in the study and to publish the accompanying images.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Conflict of interest

The authors have no conflicts of interest related to this publication.

Authors’ contributions

Study concept and design (MA, ME), acquisition of data (MA, AS), analysis and interpretation of data (MA, ME), drafting of the manuscript (ME, MA), critical revision of the manuscript for important intellectual content (ME, NEYK), administrative, technical, or material support (ME, NEYK), and study supervision (AS). All authors have made a significant contribution to this study and have approved the final manuscript.

References

  1. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2015. CA Cancer J Clin 2015;65(1):5-29 View Article PubMed/NCBI
  2. Nelson WG, Antonarakis ES, Carter HB, De Marzo AM, DeWeese TL. Abeloff's Clinical Oncology. 6th ed. Amsterdam: Elsevier; 2020, 1401-1432.e7 View Article
  3. Giles GG. International Encyclopedia of Public Health. Amsterdam: Academic Press; 2008, 323-331 View Article
  4. Liu L, Tian Z, Zhang Z, Fei B. Computer-aided Detection of Prostate Cancer with MRI: Technology and Applications. Acad Radiol 2016;23(8):1024-1046 View Article PubMed/NCBI
  5. Artan Y, Haider MA, Langer DL, van der Kwast TH, Evans AJ, Yang Y, et al. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields. IEEE Trans Image Process 2010;19(9):2444-2455 View Article PubMed/NCBI
  6. Palisaar RJ, Graefen M, Karakiewicz PI, Hammerer PG, Huland E, Haese A, et al. Assessment of clinical and pathologic characteristics predisposing to disease recurrence following radical prostatectomy in men with pathologically organ-confined prostate cancer. Eur Urol 2002;41(2):155-161 View Article PubMed/NCBI
  7. Bolla M, van Poppel H, Collette L, van Cangh P, Vekemans K, Da Pozzo L, et al. Postoperative radiotherapy after radical prostatectomy: a randomised controlled trial (EORTC trial 22911). Lancet 2005;366(9485):572-578 View Article PubMed/NCBI
  8. Stamey TA, McNeal JE, Yemoto CM, Sigal BM, Johnstone IM. Biological determinants of cancer progression in men with prostate cancer. JAMA 1999;281(15):1395-1400 View Article PubMed/NCBI
  9. Graefen M, Noldus J, Pichlmeier U, Haese A, Hammerer P, Fernandez S, et al. Early prostate-specific antigen relapse after radical retropubic prostatectomy: prediction on the basis of preoperative and postoperative tumor characteristics. Eur Urol 1999;36(1):21-30 View Article PubMed/NCBI
  10. Wolters T, Roobol MJ, van Leeuwen PJ, van den Bergh RC, Hoedemaeker RF, van Leenders GJ, et al. Should pathologists routinely report prostate tumour volume? The prognostic value of tumour volume in prostate cancer. Eur Urol 2010;57(5):821-829 View Article PubMed/NCBI
  11. Cornud F, Khoury G, Bouazza N, Beuvon F, Peyromaure M, Flam T, et al. Tumor target volume for focal therapy of prostate cancer-does multiparametric magnetic resonance imaging allow for a reliable estimation?. J Urol 2014;191(5):1272-1279 View Article PubMed/NCBI
  12. Ito H, Kamoi K, Yokoyama K, Yamada K, Nishimura T. Visualization of prostate cancer using dynamic contrast-enhanced MRI: comparison with transrectal power Doppler ultrasound. Br J Radiol 2003;76(909):617-624 View Article PubMed/NCBI
  13. Vos PC, Barentsz JO, Karssemeijer N, Huisman HJ. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis. Phys Med Biol 2012;57(6):1527-1542 View Article PubMed/NCBI
  14. Vis AN, Roemeling S, Kranse R, Schröder FH, van der Kwast TH. Should we replace the Gleason score with the amount of high-grade prostate cancer?. Eur Urol 2007;51(4):931-939 View Article PubMed/NCBI
  15. Hambrock T, Vos PC, Hulsbergen-van de Kaa CA, Barentsz JO, Huisman HJ. Prostate cancer: computer-aided diagnosis with multiparametric 3-T MR imaging—effect on observer performance. Radiology 2013;266(2):521-530 View Article PubMed/NCBI
  16. Gnep K, Fargeas A, Gutiérrez-Carvajal RE, Commandeur F, Mathieu R, Ospina JD, et al. Haralick textural features on T2 -weighted MRI are associated with biochemical recurrence following radiotherapy for peripheral zone prostate cancer. J Magn Reson Imaging 2017;45(1):103-117 View Article PubMed/NCBI
  17. Litjens G, Debats O, Barentsz J, Karssemeijer N, Huisman H. Computer-aided detection of prostate cancer in MRI. IEEE Trans Med Imaging 2014;33(5):1083-1092 View Article PubMed/NCBI
  18. Chinnu A. MRI brain tumor classification using SVM and histogram based image segmentation. International Journal of Computer Science and Information Technologies 2015;6(2):1505-1508
  19. Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE 1979;67(5):786-804 View Article
  20. Cuocolo R, Cipullo MB, Stanzione A, Romeo V, Green R, Cantoni V, et al. Machine learning for the identification of clinically significant prostate cancer on MRI: a meta-analysis. Eur Radiol 2020;30(12):6877-6887 View Article PubMed/NCBI
  21. Tătaru OS, Vartolomei MD, Rassweiler JJ, Virgil O, Lucarelli G, Porpiglia F, et al. Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives. Diagnostics (Basel) 2021;11(2):354 View Article PubMed/NCBI
  22. Chiu PK, Shen X, Wang G, Ho CL, Leung CH, Ng CF, et al. Enhancement of prostate cancer diagnosis by machine learning techniques: an algorithm development and validation study. Prostate Cancer Prostatic Dis 2021 View Article PubMed/NCBI
  23. Zhang L, Li L, Tang M, Huan Y, Zhang X, Zhe X. A new approach to diagnosing prostate cancer through magnetic resonance imaging. Alexandria Engineering Journal 2021;61(1):897-904 View Article
  24. Li J, Weng Z, Xu H, Zhang Z, Miao H, Chen W, et al. Support Vector Machines (SVM) classification of prostate cancer Gleason score in central gland using multiparametric magnetic resonance images: A cross-validated study. Eur J Radiol 2018;98:61-67 View Article PubMed/NCBI
  25. Chang CY, Hu HY, Tsai YS. Prostate cancer detection in dynamic MRIs. 2015 IEEE International Conference on Digital Signal Processing (DSP) 2015:1279-1282 View Article
  26. Shah V, Turkbey B, Mani H, Pang Y, Pohida T, Merino MJ, et al. Decision support system for localizing prostate cancer based on multiparametric magnetic resonance imaging. Med Phys 2012;39(7):4093-4103 View Article PubMed/NCBI
  27. Niaf E, Rouvière O, Mège-Lechevallier F, Bratan F, Lartizien C. Computer-aided diagnosis of prostate cancer in the peripheral zone using multiparametric MRI. Phys Med Biol 2012;57(12):3833-3851 View Article PubMed/NCBI
  28. de A. Lopes DF, Ramalho GLB, de Medeiros FNS, Costa RCS, Araújo RTS. Structural, Syntactic, and Statistical Pattern Recognition. SSPR /SPR 2006. Lecture Notes in Computer Science, vol 4109. Berlin, Heidelberg: Springer; 2006 View Article
  29. Wibmer A, Hricak H, Gondo T, Matsumoto K, Veeraraghavan H, Fehr D, et al. Haralick texture analysis of prostate MRI: utility for differentiating non-cancerous prostate from prostate cancer and differentiating prostate cancers with different Gleason scores. Eur Radiol 2015;25(10):2840-2850 View Article PubMed/NCBI
  30. Cuocolo R, Cipullo MB, Stanzione A, Ugga L, Romeo V, Radice L, et al. Machine learning applications in prostate cancer magnetic resonance imaging. Eur Radiol Exp 2019;3(1):35 View Article PubMed/NCBI
  31. Siqueira LFS, Morais CLM, Araújo Júnior RF, de Araújo AA, Lima KMG. SVM for FT-MIR prostate cancer classification: An alternative to the traditional methods. Journal of Chemometrics 2018;32(12):e3075 View Article
  32. Rustam Z, Angie N. Prostate Cancer Classification Using Random Forest and Support Vector Machines. Journal of Physics: Conference Series 2021;1752:012043 View Article
  33. Cervantes J, Garcia-Lamont F, Rodríguez-Mazahua L, Asdrubal Lopez A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020;408:189-215 View Article
  34. Citak-Er F, Vural M, Acar O, Esen T, Onay A, Ozturk-Isik E. Final Gleason score prediction using discriminant analysis and support vector machine based on preoperative multiparametric MR imaging of prostate cancer at 3T. Biomed Res Int 2014;2014:690787 View Article PubMed/NCBI
  35. Jayachandran A, Dhanasekaran R. Multi class brain tumor classification of MRI images using hybrid structure descriptor and fuzzy logic based RBF kernel SVM. Iranian Journal of Fuzzy Systems 2017;14(3):41-54 View Article
  36. Byvatov E, Fechner U, Sadowski J, Schneider G. Comparison of support vector machine and artificial neural network systems for drug/nondrug classification. J Chem Inf Comput Sci 2003;43(6):1882-1889 View Article PubMed/NCBI
  37. Arora S, Bhattacharjee D, Nasipuri M, Malik L, Kundu M, Basu DK. Performance Comparison of SVM and ANN forHandwritten Devnagari Character Recognition. IJCSI International Journal of Computer Science Issues 2010;7(3):18-26
  38. Phangtriastu MR, Harefa J, Tanoto DF. Comparison Between Neural Network and Support Vector Machine in Optical Character Recognition. Procedia Computer Science 2017;116:351-357 View Article
  39. Hasan H, Shafri HZM, Habshi M. A Comparison Between Support Vector Machine (SVM) and Convolutional Neural Network (CNN) Models For Hyperspectral Image Classification. IOP Conference Series: Earth and Environmental Science 2019;357:012035 View Article
  40. Khosravi P, Lysandrou M, Eljalby M, Li Q, Kazemi E, Zisimopoulos P, et al. A Deep Learning Approach to Diagnostic Classification of Prostate Cancer Using Pathology-Radiology Fusion. J Magn Reson Imaging 2021;54(2):462-471 View Article PubMed/NCBI
  41. Liu Y, An X. 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). ; 2017, 1-6 View Article
  42. Moallem G, Pore AA, Gangadhar A, Sari-Sarraf H, Vanapalli SA. Detection of live breast cancer cells in brightfield microscopy images containing white blood cells by image analysis and deep learning [Preprint]. bioRxiv 2021:467215 View Article
  43. Patel A, Singh SK, Khamparia A. Detection of Prostate Cancer Using Deep Learning Framework. IOP Conference Series: Materials Science and Engineering 2021;1022:012073 View Article
  44. Schelb P, Kohl S, Radtke JP, Wiesenfarth M, Kickingereder P, Bickelhaupt S, et al. Classification of Cancer at Prostate MRI: Deep Learning versus Clinical PI-RADS Assessment. Radiology 2019;293(3):607-617 View Article PubMed/NCBI
  45. Etehadtavakol M, Ng EYK, Kaabouch N. Automatic Segmentation of Thermal Images of Diabetic-at-Risk Feet Using the Snakes Algorithm. Infrared Physics & Technology 2017;86:66-76 View Article
  46. Tan JH, Ng EYK, Acharya U R. An efficient automated algorithm to detect ocular surface temperature on sequence of thermograms using snake and target tracing function. J Med Syst 2011;35(5):949-958 View Article PubMed/NCBI