Coronavirus disease 2019 (COVID-19) has caused a serious human pandemic. It is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is believed to have been transmitted to humans from wild animals, possibly sold in the Huanan Seafood Wholesale Market in China.1 The genome of SARS-CoV-2 has 85% similarity with bat coronavirus, and it is generally assumed that bats were the primary source of infection and that the virus was transmitted to humans through an unknown intermediary animal in Wuhan, China, in December 2019.1
Person-to-person transmission is efficient, with multiple clusters reported. Transmission of the virus happens mainly through respiratory droplets and close contact, as well as through aerosols when people are exposed to high aerosol concentrations in relatively closed environments for prolonged periods.2 The mean basic reproduction number (R0) for COVID-19 is estimated to be 3.28, which exceeds WHO estimates of 1.4–2.5.3 Clinically, patients with COVID-19 present with respiratory symptoms, with a very similar presentation to other respiratory virus infections.1
Effective screening of COVID-19 patients is a critical step in bringing infected individuals to immediate care and treatment, as well as quarantining them to reduce the transmission of the virus. So far, the gold-standard screening method used for detecting COVID-19 cases is reverse transcriptase–polymerase chain reaction (RT-PCR)4 testing, which can detect the nucleic acid of the virus from nasopharyngeal or oropharyngeal swabs.
However, it has been reported that RT-PCR can detect only 60–70% of true COVID-19 symptomatic patients and as few as 18–33% of asymptomatic cases. In other words, the test misses 30–40% of actual COVID-19 cases, which, in turn, facilitates the rapid transmission of the virus.5,6 Furthermore, molecular laboratory facilities, including the reagents and consumables needed to conduct RT-PCR screening for COVID-19, are scarce in resource-limited countries such as Ethiopia.
As an alternative screening method, chest radiological imaging, such as computed tomography (CT) and X-ray, has a vital role in the early diagnosis and treatment of COVID-19.7 Even when RT-PCR results are negative, signs of infection can be detected by examining the radiological images of patients.8,9 It is evident that combining clinical imaging features with laboratory results may help in the early detection of COVID-19.10–14 Characteristic changes in chest X-ray and CT images have been identified even before the appearance of COVID-19 symptoms.1
In comparison to CT, chest X-ray imaging is advantageous for screening14 COVID-19 amid the global pandemic, owing to its rapid triage, availability, accessibility, and portability, which make it a good complement to PCR testing; it may even exhibit higher sensitivity.6 However, chest radiographic analysis has limitations in detecting early-stage COVID-19 features, such as ground-glass opacities (GGOs), even when expert radiologists are available to interpret the images.7
To overcome this drawback, researchers are developing deep learning or machine learning models that can focus on features that are invisible to the human eye. The application of artificial intelligence (AI) for automatic detection in medicine is becoming an interesting tool for physicians.15 A study conducted in the USA on the application of artificial intelligence in medical image analysis found that a deep learning algorithm could classify clinically important abnormalities on chest radiographs at a performance level comparable to practicing radiologists.16 Earlier research has successfully used artificial intelligence to detect arrhythmia, breast cancer, pneumonia, brain disease, and skin cancer from medical images.17
Since the emergence of the COVID-19 pandemic, several artificial intelligence-mediated machine learning models have indicated that chest X-ray images can screen and classify COVID-19 patients with 80–98% sensitivity and 70–87% specificity.18–20 Hence, the application of computer-assisted diagnostic models can help radiologists to more quickly and accurately interpret chest X-ray images to screen and classify COVID-19 patients. The technology can also be helpful to mitigate the shortage of expert radiologists in remote areas.
Machine learning algorithms that are used for COVID-19 diagnosis can be categorized into two types, namely: supervised and unsupervised learning approaches.21 In this study, we applied the support vector machine (SVM) learning algorithm, which is one of the most widely used supervised machine learning approaches.
Therefore, this research was designed to develop a classical machine learning-based model using chest X-ray images for the automatic detection of COVID-19 and for distinguishing it from other pneumonia cases, with high sensitivity and specificity, and with a short computation time.
Materials and Methods
In this study, a total of 1100 chest X-ray images were randomly selected from three different open sources: the GitHub repository shared by Joseph Cohen22 and the Kaggle datasets23 shared by Bachir24 and Mooney.25 The chest X-ray images in the datasets were obtained from patients and had been interpreted and reported by expert radiologists. The labels generated were then validated in an independent test set, achieving a micro-F1 score of 0.93.26 It has been documented that the images are suitable for training supervised models concerning radiographs.26,27 The datasets contain chest X-ray images of confirmed COVID-19 cases, other pneumonia, and no-findings (normal). There are plenty of normal and other pneumonia X-ray images in these open sources. However, owing to the lack of COVID-19 X-ray images, we limited the number of images for other pneumonia and no-findings to avoid problems with unbalanced data. Our experimental dataset contains 300 X-ray images of confirmed COVID-19 patients, 400 images of other pneumonia patients, and 400 normal X-ray images.
Classical Machine Learning
Machine learning is one of the most important fields of artificial intelligence. It is the process of building algorithms that are able to learn from previous datasets and leverage that experience to make predictions on new, unseen data. In the case of image classification problems, applying classical machine learning involves extracting features from the images, aided by media filters.
Since the development of convolutional neural networks (CNNs), deep learning has become a desirable technique for most AI-related problems because of its superior performance. Despite the high performance of deep networks, there are still good reasons to use classical machine learning over deep learning: for example, classical machine learning performs better on small amounts of data with limited financial and computational resources, and it allows one to iterate more quickly and try out many different techniques in a shorter period of time.28 Other researchers found that a machine learning approach using the SVM algorithm gave the best prediction accuracy among all classifiers for COVID-19 diagnosis.29 Hence, in this study, we employed a classical machine learning (SVM) approach, which could perform best for our small-dataset classification problem.
Feature Extraction (Media Filter)
Feature extraction is a requirement during the application of classical machine learning for image classification problems. We used a media filter on the X-ray images to emphasize several supplementary features, which were included as additional numeric attributes.
In this study, the histogram of oriented gradients (HOG) feature was extracted from our chest X-ray images dataset. Most image processing techniques use the local geometric shapes within an image and then characterize them according to the distribution of edge directions; this is called the histogram of oriented gradients (HOG).30
We preferred HOG to other local shape descriptors because of its invariance to small deformations and its robustness in terms of outliers and noise.30 HOG is a feature descriptor for images that can be used in computer vision and machine learning. In this study, we used a newly improved HOG, the pyramid histogram of oriented gradients (PHOG), proposed by Bosch et al in 2007, which takes the spatial property of the local shape into account when representing an image.31
In our particular case, the images were resized to 64×128 pixels, the most common image dimension used in the HOG feature descriptor.30 Then, the images were divided into cells at several pyramid levels. The gradient magnitude g and orientation θ at each pixel are calculated as follows, where Gx and Gy are the horizontal and vertical gradients, respectively:

g = √(Gx² + Gy²) and θ = arctan(Gy/Gx)
Each gradient orientation is then quantized into K bins. The final PHOG descriptor for an image is the concatenation of all the HOG vectors at each pyramid resolution; this concatenation introduces the spatial information of the image. In each cell of every level, gradients over all the pixels are accumulated to form a local K-bin histogram. As a result, all of the cells at the different levels are combined to form a final PHOG vector of dimension K × Σ 4^l, summed over the pyramid levels l. In our experiment, the HOG descriptor was quantized into K=30 orientation bins in the range between 0° and 360°. With L=3 pyramid levels (l = 0, 1, 2) and K=30, a total of 30 × (1 + 4 + 16) = 630 features was obtained from the PHOG descriptor for training our SVM algorithm.
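As an illustration only, the PHOG computation described above can be sketched in Python with NumPy. Our implementation used Java with Weka; the function below is a simplified, hypothetical re-implementation, and the magnitude weighting of the histograms and the final L2 normalization are assumptions, not details taken from the study.

```python
import numpy as np

def phog(image, levels=3, bins=30):
    """Sketch of a pyramid histogram of oriented gradients (PHOG):
    a K-bin orientation histogram for every cell at pyramid levels
    l = 0..levels-1, where level l has 2^l x 2^l cells."""
    img = image.astype(float)
    # Gradients along rows (vertical, Gy) and columns (horizontal, Gx).
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0  # orientations in [0, 360)
    h, w = img.shape
    features = []
    for l in range(levels):
        n = 2 ** l  # grid size at this pyramid level
        for i in range(n):
            for j in range(n):
                cell_mag = mag[i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n]
                cell_ang = ang[i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n]
                # Magnitude-weighted K-bin orientation histogram per cell.
                hist, _ = np.histogram(cell_ang, bins=bins,
                                       range=(0.0, 360.0), weights=cell_mag)
                features.append(hist)
    vec = np.concatenate(features)  # (1 + 4 + 16) * 30 = 630 for L=3, K=30
    return vec / (np.linalg.norm(vec) + 1e-12)  # L2-normalize the descriptor
```

For a 64×128 input image with L=3 and K=30, the returned vector has the 630 dimensions used to train the SVM.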
Support Vector Machine (SVM)
The SVM algorithm is highly preferred owing to its significant accuracy while requiring less computational power. SVM can be used for both regression and classification tasks, but it is widely used in classification problems. The objective of SVM is to find a hyperplane in an N-dimensional space that distinctly classifies the data points. Our dimension N is equal to our number of features, which is 630.
The SVM algorithm not only is the most widely used machine learning method in COVID-19 diagnosis and outbreak prediction,21,32 but also has achieved the highest prediction accuracy, of 100%, among all classifiers.29 Hence, in this study, we employed a machine learning model trained by an SVM classifier, which was optimized by the sequential minimal optimization (SMO) algorithm, invented in 1998 by John Platt at Microsoft Research.33
In order to achieve a good result while avoiding overfitting, the following hyperparameters were tuned: batch size=100, tolerance parameter=0.001, epsilon=1.0E−12, a polynomial kernel (PolyKernel) with exponent E=1.0, C=250,007, number of calibration folds=−1, and a random seed of 1.
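For readers working outside the Weka ecosystem, a roughly comparable configuration can be sketched with scikit-learn's SVC. This mapping is an approximation, not the study's implementation: Weka-specific options such as the batch size, epsilon, and calibration folds have no direct scikit-learn counterparts and are omitted here.

```python
from sklearn.svm import SVC

# Approximate scikit-learn analogue of an SMO-trained SVM with a
# degree-1 polynomial kernel and a 0.001 tolerance, as described above.
clf = SVC(kernel="poly", degree=1, tol=1e-3)

# Tiny synthetic check that the configured classifier trains and predicts.
X = [[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]]
y = [0, 0, 1, 1]
clf.fit(X, y)
```

In practice, the feature matrix X would hold the 630-dimensional PHOG vectors and y the image class labels.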
The Java programming language, integrated with Weka's Java API,34 was used to implement the X-ray image classification system. The experiments were performed on an HP ProBook 450 G4 PC with an Intel® Core i7-7500 CPU @ 2.70 GHz (4 CPUs), 8192 MB of RAM, and an NVIDIA GeForce 930MX graphics card, running Windows 10 Pro 64-bit.
The experiment was conducted using two sets of X-ray images, namely: 1) X-ray images of three categories: no-findings, COVID-19, and pneumonia; and 2) X-ray images of two categories: COVID-19 and normal. Each class in the dataset was randomly split into two; 80% was designated for training and the remaining 20% was left for independent external testing using the holdout method.
First, we trained our model to detect and classify X-ray images in the three categories for the multi-class classification task, then the model was trained to detect and classify the two categories for the binary classification task. The performance of both binary and multi-level classification challenges was evaluated using a 10-fold cross-validation scheme during the training (Figure 1).
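The splitting and validation protocol above can be sketched as follows in Python with scikit-learn (used here purely for illustration in place of the Weka workbench; the feature vectors and labels below are random placeholders standing in for the PHOG features and image classes).

```python
import numpy as np
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1100, 630))        # placeholder 630-dim PHOG feature vectors
y = rng.integers(0, 3, size=1100)  # 0=no-finding, 1=COVID-19, 2=other pneumonia

# 80/20 stratified holdout split: 880 training, 220 external test images.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# 10-fold cross-validation on the training portion only.
clf = SVC(kernel="poly", degree=1)
scores = cross_val_score(
    clf, X_train, y_train,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=1))
```

The held-out 20% never participates in cross-validation, so it provides an independent estimate of generalization.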
Figure 1 Block diagram of the proposed model for distinguishing COVID-19 from other pneumonia and no-findings.
Structure of the Proposed Model
The proposed model commences by taking chest X-ray images as an input. Then, a feature extraction or media filter is applied to the images to obtain the essential attributes. Based on the attributes acquired from the media filter, the SVM is trained and validated by a 10-fold cross-validation technique. Finally, the model is tested by an external testing dataset. The overall architecture of the proposed COVID-19, other pneumonia, and no-finding classification system is depicted in Figure 1.
Performance Evaluation Metrics
The classification performance of the model was evaluated by the most widely used metrics, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, receiver operating characteristic (ROC) curve, area under the curve (AUC), kappa, and the Matthews correlation coefficient (MCC). The definition and interpretation of each metric are explained below.
Sensitivity (recall) is the ratio of true positives to total positives in the data; it measures how likely a test is to correctly detect a condition when it is present. Specificity is the ratio of true negatives to total negatives in the data and represents the true negative rate, while accuracy is the ratio of correct predictions to total predictions. Positive predictive value is the probability that subjects with a positive screening test truly have the disease, and negative predictive value is the probability that subjects with a negative screening test truly do not have the disease. The F1 score is an overall measure of a model's accuracy that combines precision and recall. The ROC curve is the plot that shows the trade-off between sensitivity and (1 − specificity) across a series of cut-off points. The AUC is an effective and combined measure of sensitivity and specificity that describes the inherent validity of diagnostic tests.35 The kappa statistic is a measure of how closely the instances classified by the machine learning classifier match the data labeled as ground truth, controlling for the accuracy of a random classifier as measured by the expected accuracy.
The statistical values for all metrics mentioned above are bounded between 0 and 1, where 1 represents perfect prediction, while 0 denotes total failure of the model to perform correctly.
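The count-based metrics defined above all follow directly from the four confusion-matrix cells. A minimal sketch (a hypothetical helper, not part of the study's Weka pipeline):

```python
def screening_metrics(tp, fp, tn, fn):
    """Derive the diagnostic metrics described above from the four
    confusion-matrix counts (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)               # recall, true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                       # precision
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "ppv": ppv, "npv": npv, "f1": f1}
```

For example, with 90 true positives, 20 false positives, 80 true negatives, and 10 false negatives, the helper yields sensitivity 0.90, specificity 0.80, and accuracy 0.85.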
Matthews Correlation Coefficient (MCC)
The MCC was re-proposed by Baldi et al36 as a standard performance metric for machine learning with a natural extension to the multi-class case.37 The coefficient considers true and false positives and negatives, and yields high scores only if the prediction gives good rates for all four of these categories.38 Its value is bounded to [−1, 1], where a value of 1 represents perfect prediction, 0 random guessing, and −1 total disagreement between prediction and observation.
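In the binary case, the MCC described above reduces to a single closed-form expression over the confusion-matrix counts, sketched here as an illustrative helper (not taken from the study's code):

```python
from math import sqrt

def mcc(tp, fp, tn, fn):
    """Binary Matthews correlation coefficient, bounded to [-1, 1]:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, return 0 (random-guessing level) when any
    # marginal count is zero and the denominator vanishes.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A perfect classifier (no false positives or negatives) scores 1, and a classifier that inverts every label scores −1.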
Since we used open-source data, a waiver was obtained from the IRB.
Model Speed to Complete the Classification Task
The performance of our classification model in distinguishing COVID-19 from other pneumonia and normal X-ray images was examined as follows. Applying the media filter took 350 seconds for all of our training dataset, or 0.39 seconds per image. Training the model and 10-fold cross-validation together took 3 seconds.
Multi-Class Classification Task
The proposed model’s 10-fold cross-validation results and independent testing results of the multi-class classification task are summarized in Figure 2.
Figure 2 Confusion matrix results of the multi-class classification task: (A) 10-fold cross-validation, (B) independent testing.
The multi-level classification model was able to distinguish COVID-19 patients with sensitivity of 97.92% (95% CI 95.21–99.32), specificity of 98.91% (95% CI 97.76–99.56), PPV of 97.12% (95% CI 94.14–98.60), NPV of 99.22% (95% CI 98.15–99.67), and AUC of 0.98 (95% CI 0.97–0.99) for the internal testing or cross-validation. For the independent external testing, the model showed sensitivity of 95% (95% CI 86.08–98.96), specificity of 98.13% (95% CI 94.62–99.61), PPV of 95% (95% CI 86.08–98.32), NPV of 98.13% (95% CI 94.56–99.37), and AUC of 0.97 (95% CI 0.93–0.98) for distinguishing COVID-19 from other pneumonia and no-findings (Table 1).
Table 1 Performance-Measuring Statistical Values for Distinguishing COVID-19 from Other Pneumonia and Normal Images
Binary-Class Classification Task
The results of confusion matrices for the binary classification problem in classifying/detecting COVID-19-positive and normal X-ray images are shown in Figure 3.
Figure 3 Confusion matrix results of the binary-class classification task: (A) 10-fold cross-validation, (B) independent testing.
The binary classification model was able to distinguish COVID-19 patients with sensitivity of 99.58% (95% CI 97.70–99.99), specificity of 99.69% (95% CI 98.27–99.99), PPV of 99.58% (95% CI 97.12–99.94), NPV of 99.69% (95% CI 97.83–99.96), and AUC of 0.99 (95% CI 0.98–1.0) for the internal testing or cross-validation. For the independent external testing, the model showed sensitivity of 98.33% (95% CI 91.03–99.63), specificity of 100% (95% CI 95.49–100), PPV of 100%, NPV of 98.77% (95% CI 91.97–99.82), and AUC of 0.99 (95% CI 0.96–1.0) for distinguishing COVID-19 from normal images (Table 2).
Table 2 Performance Measuring Statistical Values for Distinguishing COVID-19 from Normal Images
In both the multi-level and binary classification models, the ROC curve and the corresponding AUC value approached 1, indicating that the classifier distinguishes almost perfectly between the positive and negative class points (Figure 4).
The successful application of artificial intelligence will have several benefits for modern-day health care, such as higher diagnostic accuracy, faster turnaround, better outcomes for patients, and better quality of work life for radiologists.39
In this study, we examined the performance of classification models for the detection of COVID-19 based on an SVM model. Evidence indicates that the SVM algorithm is not only the most widely used machine learning method in COVID-19 diagnosis but has also achieved the highest prediction accuracy, of 100%, among all classifiers.21,32
In this research, it has been demonstrated that the application of machine learning (SVM) in artificial intelligence applied on chest X-ray images could automatically detect COVID-19 pneumonia with 99.29% accuracy for the binary classification task and 97.27% performance for the multi-level classification task.
Since the emergence of the COVID-19 pandemic, several artificial intelligence-mediated machine learning models have indicated that chest X-ray images can screen and classify COVID-19 patients with 80–98% sensitivity and 70–87% specificity.18–20,32,40 Evidence indicates that the accuracy of machine learning approaches ranges from 76% to more than 99% in the diagnosis of COVID-19.32 Sethy et al developed an algorithm that could detect COVID-19 using X-ray images based on deep features and SVM with 95.38% accuracy.28 In another study, Xu et al developed an early prediction model that could distinguish COVID-19 pneumonia from influenza-A viral pneumonia and healthy cases using pulmonary CT images with deep learning techniques, with an accuracy of 86.7%.41 Our model, which is based on the SVM algorithm, showed comparable, and even higher, accuracy in detecting COVID-19 pneumonia. Similarly to our findings, researchers using a machine learning approach with the SVM algorithm reported an overall accuracy of 97.33% for three-class classification (normal, pneumonia, and COVID-19) and 100% for the binary separation of COVID-19 from other pneumonia.42 Detection of the genetic material of SARS-CoV-2 using RT-PCR on nasopharyngeal and throat swab specimens is considered the gold standard in the diagnosis of COVID-19.6 However, RT-PCR has been reported to yield positive results in only 30–70% of cases.6,43 Conversely, similarly to our findings, computer-aided analysis of X-ray images has been reported to achieve sensitivity values of 98%.43
The application of machine learning methods is beneficial not only to distinguish COVID-19 cases from other pneumonia patients, but also to help doctors to follow and predict the prognosis and treatment outcomes of their patients.32 Hence, research should be conducted using machine learning or deep learning methods on prospectively collected X-ray images, clinical/laboratory data, and socio-demographic data from COVID-19 patients, to establish the application of artificial intelligence in predicting the prognosis and treatment outcomes of patients.
The application of computer-assisted diagnostic models to help radiologists to more quickly and accurately interpret chest X-ray images, to screen and classify COVID-19 patients, is highly required. The integration of this algorithm into the clinical system could help health institutions to advance patient care by reducing the time to diagnosis and increasing access to chest radiograph interpretation, as well as mitigating the shortage of expert radiologists in remote areas.
We are very grateful to the authors who deposited their X-ray images for free use.
All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
The authors declare no conflicts of interest for this work.
1. Chan JFW, Yuan S, Kok K-H. A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet. 2020;395(10223):514–523. doi:10.1016/S0140-6736(20)30154-9
2. Li Q, Guan X, Wu P, et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med. 2020;382(13):1199–1207. doi:10.1056/NEJMoa2001316
3. Liu Y, Gayle A, Wilder-Smith A, Rocklöv J. The reproductive number of COVID-19 is higher compared to SARS coronavirus. J Travel Med. 2020;27:taaa021.
4. Wang W, Xu Y, Gao R, et al. Detection of SARS-CoV-2 in different types of clinical specimens. JAMA. 2020;323(18):1843–1844.
5. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing for Coronavirus Disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32–E40. doi:10.1148/radiol.2020200642
6. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115–E117. doi:10.1148/radiol.2020200432
7. Zu ZY, Jiang MD, Xu PP, et al. Coronavirus disease 2019 (COVID-19): a perspective from China. Radiology. 2020;296:E15–E25.
8. Kanne JP, Little BP, Chung JH, Elicker BM, Ketai LH. Essentials for radiologists on COVID-19: an update—radiology scientific expert panel. Radiology. 2020;296:E41–E45. doi:10.1148/radiol.2020200527
9. Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J. Chest CT for typical Coronavirus Disease 2019 (COVID-19) pneumonia: relationship to negative RT-PCR testing. Radiology. 2020;296:E41–E45. doi:10.1148/radiol.2020200343
10. Kong W, Agarwal PP. Chest imaging appearance of COVID-19 infection. Radiology. 2020;2(1):e200028.
11. Lee EY, Ng MY, Khong PL. COVID-19 pneumonia: what has CT taught us? Lancet Infect Dis. 2020;20(4):384–385. doi:10.1016/S1473-3099(20)30134-1
12. Shi H, Han X, Jiang N, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis. 2020;20(4):425–434. doi:10.1016/S1473-3099(20)30086-4
13. Zhao W, Zhong Z, Xie X, Yu Q, Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: a multicenter study. AJR Am J Roentgenol. 2020;214(5):1072–1077. doi:10.2214/AJR.20.22976
14. Li Y, Xia L. Coronavirus Disease 2019 (COVID-19): role of chest CT in diagnosis and management. AJR Am J Roentgenol. 2020;214(6):1280–1286. doi:10.2214/AJR.20.22954
15. Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Rajendra Acharya U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med. 2020;121:103792. doi:10.1016/j.compbiomed.2020.103792
16. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
17. Rubin GD, Ryerson CJ, Haramati LB, et al. The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the fleischner society. Radiology. 2020;296(1):172–180. doi:10.1148/radiol.2020201365
18. Castiglioni I, Ippolito D, Interlenghi M, et al. Artificial intelligence applied on chest X-ray can aid in the diagnosis of COVID-19 infection: a first experience from Lombardy, Italy. European Radiology Experimental. 2021;5(7). Available from: https://eurradiolexp.springeropen.com/track/pdf/10.1186/s41747-020-00203-z.pdf.
19. Narin A, Kaya C, Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal Appl. 2021;24:1207–1220.
20. Zhang J, Xie Y, Li Y, Shen C, Xia Y. COVID-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv preprint arXiv:2003.12338; 2020.
21. Alyasseri ZAA, Al-Betar MA, Doush IA, et al. Review on COVID-19 diagnosis models based on machine learning and deep learning approaches. Expert Syst. 2021;e12759.
22. Cohen JP, Morrison P, Dao L. COVID-19 image data collection. arXiv preprint arXiv:2003.11597; 2020.
23. Kaggle. Kaggle's chest X-ray images (pneumonia) dataset; 2020. Available from: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia. Accessed May 12, 2020.
24. Bachir. COVID-19 X-ray images; 2020. Available from: https://www.kaggle.com/bachrr/covid-chest-Xray. Accessed May 27, 2021.
25. Mooney P. Kaggle, Kaggle’s Chest X-Ray Images (Pneumonia) dataset 2020. Available from: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia.
26. Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers R. ChestX-Ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017:3462–3471. doi:10.1109/CVPR.2017.369
27. Bustos A, Pertusa A, Salinas JM, de la Iglesia-vaya M. PadChest: a large chest x-ray image dataset with multi-label annotated reports. Med Image Anal. 2020;66:101797. doi:10.1016/j.media.2020.101797
28. Sethy P, Behera S, Ratha P, Biswas P. Detection of coronavirus Disease (COVID-19) based on deep features and support vector machine. Int J Math Eng Manag Sci. 2020;5(4):643–651. doi:10.33889/IJMEMS.2020.5.4.052
29. Iwendi C, Mahboob K, Khalid Z, Javed A, Rizwan M, Ghosh U. Classification of COVID-19 individuals using adaptive neuro-fuzzy inference system. Multimed Syst. 2021;15.
30. Dalal N, Triggs B. Histograms of oriented gradients for human detection. Paper presented at: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’05); 2005; San Diego, Calif, USA.
31. Bosch A, Zisserman A, Munoz X. Representing shape with a spatial pyramid kernel. Paper presented at: 6th ACM International Conference on Image and Video Retrieval (CIVR ’07); 2007; Amsterdam, the Netherlands.
32. Platt J. Sequential minimal optimization: a fast algorithm for training support vector machines technical report MSR TR-98-14; 1998. Available from: https://www.microsoft.com/en-us/research/uploads/prod/1998/04/sequential-minimal-optimization.pdf. Accessed on May 23, 2020.
33. Platt J. Sequential minimal optimization: a fast algorithm for training support vector machines technical report MSR TR-98-14; 1998.
34. Frank E, Hall M, Witten I. The WEKA Workbench. Online appendix for "Data Mining: Practical Machine Learning Tools and Techniques", 4th ed. Morgan Kaufmann; 2016.
35. Kumar R, Indrayan A. Receiver operating characteristic (ROC) curve for medical researchers. Indian Pediatr. 2011;48(4):277–287. doi:10.1007/s13312-011-0055-4
36. Baldi P, Brunak S, Chauvin Y, Andersen CA, Nielsen H. Assessing the accuracy of prediction algorithms for classification: an overview. Bioinformatics. 2000;16(5):412–424. doi:10.1093/bioinformatics/16.5.412
37. Gorodkin J. Comparing two K-category assignments by a K-category correlation coefficient. Comput Biol Chem. 2004;28(5–6):367–374. doi:10.1016/j.compbiolchem.2004.09.006
38. Chicco D, Jurman G. Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone. BMC Med Inform Decis Mak. 2020;20:16. doi:10.1186/s12911-020-1023-5
39. James H, Xiang L, Quanzheng L, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15:504–508. doi:10.1016/j.jacr.2017.12.026
40. Khan MA. An automated and fast system to identify COVID-19 from X-ray radiograph of the chest using image processing and machine learning. Int J Imaging Syst Technol. 2021;31(2):499–508.
41. Xu X, Jiang X, Ma C, et al. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334; 2020.
42. Novitasari DC, Hendradi R, Caraka R, et al. Detection of COVID-19 chest X-ray using support vector machine and convolutional neural network. Commun Math Biol Neurosci. 2020;2020:1–19.
43. Wang S, Kang B, Ma J, et al. A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). European Radiology 2021;31:6096–6104.