
A review of intelligent medical imaging diagnosis for the COVID-19 infection

Abstract

Due to the unavailability of specific vaccines or drugs to treat COVID-19 infection, the world has witnessed a rise in the human mortality rate. Currently, the real-time RT-PCR technique is widely accepted for detecting the presence of the virus, but it is time consuming and has a high rate of false positive/negative results. This has opened research avenues to identify substitute strategies to diagnose the infection. Related works in this direction have shown promising results when RT-PCR diagnosis is complemented with chest imaging. Integrating intelligence into, and automating, diagnostic systems can further improve the speed and efficiency of diagnosis, which is extremely essential in the present scenario. This paper reviews the use of CT scan, chest X-ray and lung ultrasound images for COVID-19 diagnosis, discusses the automation of chest image analysis using machine learning and deep learning models, and elucidates the achievements, challenges, and future directions in this domain.

1.Introduction

The novel coronavirus, or severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has caused a pandemic that has claimed numerous lives across the globe. Though initially an outbreak in China [1], it was soon declared a pandemic because the virus spread across the globe in no time and no specific vaccine or drug was available to treat the infection. It is believed that SARS-CoV-2 [2, 3, 4] first infected bats and later spread to humans. People become infected when they encounter virus-containing droplets expelled by an infected person while sneezing or coughing, which can also settle on surfaces in and around the human carrier of the virus. The incubation period of the virus ranges between 2 and 14 days, and the commonly identified symptoms of the infection are cough, fever, sore throat, fatigue, breathlessness, malaise and headache, among others [5]. People with good immunity are not affected much, and many turn out to be asymptomatic. People above the age of 60, those with underlying medical conditions, and children below the age of 6 are found to be vulnerable and at a higher risk of infection. In some cases the infection is mild, but in others it may lead to acute respiratory distress syndrome (ARDS), pneumonia and multiple organ dysfunction, resulting in the death of the patient. The mortality rates of infected people have risen astonishingly over the past few months in many countries, challenging their health infrastructure. Complete lockdowns were enforced in most countries, urging people to stay home to be safe. Though the lockdowns helped control the situation in most countries, they also had a huge adverse impact on their economies. This increased the need to identify infected people and isolate them, thus preventing further transmission of the disease and aiding the return of normalcy to our lives. In addition, timely and accurate diagnosis is the need of the hour and can save many lives. Currently, the sensitivity of the most used screening technique for coronavirus, the reverse-transcription polymerase chain reaction (RT-PCR), is relatively poor: a negative RT-PCR result does not exclude the possibility of the virus being present in the suspect. Therefore, it is of prime importance to find complementary or substitute methods that yield more accurate results. In this respect, medical imaging techniques [6] such as chest computed tomography (CT), lung ultrasound and chest X-ray appear to be better choices. However, analysis of medical images for COVID-19 classification increases the demand for skilled medical imaging professionals, which in turn escalates the pressure on them for a faster and accurate diagnosis owing to the increasing rate of COVID-19 cases. This stresses the importance of automating the COVID-19 diagnosis process to reduce the burden on medical professionals. As deep learning is already popular in the medical domain, employing it for COVID-19 diagnosis automation is highly advisable.

The remaining sections of the paper are organized as follows. Section 2 describes the research methodology followed for the literature review. Section 3 discusses the RT-PCR technique, its shortcomings, and the importance of medical images in the diagnosis of COVID-19. Section 4 compares viral pneumonia with bacterial pneumonia and identifies features differentiating COVID-19 pneumonia from other types of viral pneumonia. Section 5 focuses on the use of AI in COVID-19 diagnosis, the image analysis stages, machine learning and deep learning approaches to COVID-19 medical image analysis, and approaches used for addressing limited dataset problems. Section 6 highlights open research challenges and future directions, followed by the conclusion of the review.

2.Research methodology

This section gives an overview of the methodology followed in writing this review paper. The methodology is structured as follows:

  • Problem formulation: As the world battles the COVID-19 pandemic, coupling the RT-PCR testing method with medical imaging and automating the diagnosis process using deep learning can lead to time-efficient and accurate diagnosis of the disease.

  • The purpose of the review: The purpose is to gain up to date knowledge on the medical imaging application for the COVID-19 diagnosis, provide collective information about the related works, findings, and limitations, identify the open challenges and future scope which can be particularly useful to the researchers in this field.

  • Identifying sources of literature review:

    • Sources: COVID-19 articles on World Health Organization (WHO) websites, IEEE transaction papers, ScienceDirect journals, Springer proceedings, medical domain journals.

    • Domain: COVID-19 diagnosis, pneumonia diagnosis, medical imaging, machine learning, deep learning in COVID-19 diagnosis, automation of COVID-19 diagnosis.

    • Reference types: Review articles, research articles.

  • Analysis of findings: Analyse the related work by various researchers, draw comparisons between their works, and identify the limitations and drawbacks.

  • Identify the open challenges and future scope: Based on the analysis of other researchers' work, identify the research gaps and future directions in this domain of research.

Figure 1 provides an overview of the steps in the research methodology of literature review.

Figure 1.

Research methodology.


3.Coronavirus diagnosis

Figure 2 shows the taxonomy of various testing methods for COVID-19 diagnosis.

Figure 2.

Taxonomy of COVID-19 diagnosis methods.


3.1Lab tests for coronavirus diagnosis

The tests currently available [7, 8, 9] for diagnosing COVID-19 infection are viral (molecular) tests, antibody (serology) tests and antigen tests. Molecular tests are best suited if a person exhibits symptoms of coronavirus infection or has likely been exposed to someone with the virus, as they give positive results only if the person is currently infected. Antibody testing is suggested if a person has previously been infected by the virus or is suspected of having had COVID-19, as it detects the presence of antibodies to SARS-CoV-2. Antigen tests detect SARS-CoV-2 proteins in respiratory samples but have not yet received widespread acceptance. The sample collected for molecular tests is a nasopharyngeal, nasal or throat swab of the patient. In the case of an antibody (serology) test, a blood sample of the patient/suspect is used for testing.

3.2Medical imaging for COVID-19 diagnosis

Medical imaging [10, 11, 12, 13, 14] such as lung ultrasound, chest X-ray (CXR) and CT scan is important in recognizing lesions in the lungs and assessing their evolution, size, and density. Examination of CXR is quick, easy and time efficient, but its specificity and sensitivity for patients with mild symptoms are comparatively low, so it is not advised for initial-stage COVID-19 patients. Chest CT images can show nearly all abnormalities, including mild initial exudative lesions, and are hence useful in early-stage COVID-19 pneumonia diagnosis. Lung ultrasound [15] seems suitable for inspecting lung abnormalities in suspected or infected patients because it is flexible, portable, and convenient. Figure 3 shows the taxonomy of image-based diagnosis modalities, components, AI approaches and methods to address limited dataset issues.

Figure 3.

Taxonomy of image-based diagnosis of COVID-19.


3.3RT-PCR

Real-time reverse transcriptase-polymerase chain reaction (RT-PCR) [16, 17, 18] is based on nucleic acid detection. At present it is the widely accepted standard coronavirus detection test, as it is a simple and specific qualitative assessment method. One of the major drawbacks of this technique is the danger of producing false positive and false negative results. A negative COVID-19 test result does not guarantee the absence of the virus in the suspect; hence, patient treatment decisions must not depend solely on this test. Many RT-PCR testing kits [19] are currently available in the market, but none of them gives 100 percent accuracy. Hence, there is a need to complement RT-PCR with other methods of diagnosis for an effective approach towards handling the pandemic. Notably, blending real-time RT-PCR with medical image analysis is a promising direction for finding complementary testing methods for COVID-19 diagnosis.

3.4Chest X-rays in COVID-19 diagnosis

Chest X-rays [20] play an important role in detecting COVID-19 as they display pneumonia-like patterns which can aid in identifying the infection. The most common radiographic findings include ground-glass opacity and consolidation, with bilateral, peripheral and lower-zone distributions being predominant. Chest X-ray (CXR) [21] displays lower sensitivity than CT in the recognition of COVID-19 lung disease. In CXR, the pulmonary opacities can sometimes be blurry, making anomaly identification challenging. Multifocal air-space disease can be essential in identifying COVID-19 infection in the CXR report. According to the initial investigations of COVID-19, the air-space disease is found to be bilateral and mostly concentrated in the lower lung distribution. Unique features of COVID-19 include peripheral air-space opacities, and CXR can easily identify peripheral lung opacities that are patchy and multifocal. Even though CT is better at COVID-19 detection than CXR, chest X-rays remain a good choice because they are cheaper than CT scans.

3.5Chest CT scan images in COVID-19 diagnosis

The chest CT scan [22, 23, 24] images of COVID-19 suspects are evaluated for the presence of ground-glass opacity (GGO), consolidation, laterality between GGO and consolidation, presence of nodules, number of lobes affected, presence of pleural effusion, fibrosis, airway abnormalities, axial distribution of disease and the degree of involvement of each lung lobe. The most common early finding of COVID-19 on chest CT is considered to be GGO. Apart from GGO, bilateral shadow patches, consolidation, multiple lesions, pulmonary fibrosis, and crazy-paving patterns are most frequently seen in the CT scan reports of coronavirus patients. Based on the results of some studies [25] on current RT-PCR testing, a large fraction (81 percent) of patients with negative RT-PCR results but positive CT scans were eventually identified as COVID-19 cases. CT scan reports revealed pulmonary irregularities consistent with COVID-19 in patients with preliminary RT-PCR negative results. This conveys the message that the RT-PCR test is a time-consuming procedure that lacks sensitivity and stability. In such a situation, CT scan diagnosis can be considered a complementary boon in detecting the infection caused by the deadly virus.

Table 1

Viral pneumonia vs. bacterial pneumonia

Type | Causes | Symptoms | Predicting factors | Radiology image differentiation
Type: Viral

Causes:

  • Adenovirus

  • Influenza virus

  • Herpes simplex virus (HSV)

  • Coronavirus

  • Respiratory syncytial virus

  • Metapneumovirus

  • Rhinovirus

  • Hantavirus

Symptoms:

  • Headache

  • Dry cough

  • Fever

  • Throat infection

  • Muscular pain

  • Reduced appetite

  • Weakness

  • High grade fever

  • Difficulty in breathing

  • Mucus associated with cough

Predicting factors:

  • Radiology images showing Ground-glass opacity (GGO)

  • Rhinorrhoea multivariate

  • Lower serum creatinine

  • White blood cells having larger quantity of lymphocytes percentage

Radiology image differentiation:

  • Bilateral infiltrates

  • Interstitial infiltrates

  • Pneumonia-like syndrome with an unremarkable chest X-ray

  • Patchy distribution of interstitial infiltrates

Type: Bacterial

Causes:

  • Streptococcus pneumoniae

  • Streptococcus A, B

  • Hemophilus influenzae

  • Chlamydia

  • Legionella species

  • Mycoplasma pneumoniae

Symptoms:

  • Coughing and heaviness in breathing combined with chest pain or abdominal pain

  • High grade fever (up to 105 degrees)

  • Sweating

  • Heavy breathing

  • Extreme chills

  • Tiredness (fatigue)

  • Loss of appetite

  • Cough along with mucus

  • Confused mental state usually in older patients

Predicting factors:

  • Acute onset of symptoms

  • Age > 65

  • Comorbidity

  • Leukocytosis or leukopenia

  • Fever

  • Headache

  • Cervical painful lymph nodes

  • Diarrhoea

  • Rhinitis

Radiology image differentiation:

  • Lobar consolidation

  • Alveolar infiltrates

  • Pleural effusion

  • Nodular densities

Table 2

Medical imaging based COVID-19 unique features

Authors | Image type | COVID-19 pneumonia features
Yang et al. [10] | Chest X-ray

  • Various minor patchy shadows and observed interstitial variations in the lower portion of lungs

  • Many consolidations and patched distribution as disease progresses

  • Multifocal or diffuse in the lungs, displayed as “white lung” in critical case

Zheng et al. [11] | Chest X-ray

  • Abnormality in the chest and presence of GGO is observed in 56 percent and 24 percent of infected patients

  • Pneumothorax condition in 1% of COVID-19 patients

Sarkodie et al. [20] | Chest X-ray

  • Airspace opacities like consolidation and GGO

  • Bilateral, peripheral, and lower zone distribution predominant

Jacobi et al. [21] | Chest X-ray

  • Patchy, reticular, hazy, irregular, and widespread ground glass opacities

  • Peripheral lung opacities

Yang et al. [10] | Chest CT

  • Multifocal GGO linked with air filled bronchi

  • Presence of crazy-paving pattern

  • Lobar or segmental lesions

  • Partially confluent distribution located in Subpleural

Li et al. [58] | Chest CT

  • Presence of peripheral distribution

  • Lesion with size > 10 cm

  • Involvement of five lung lobes

  • Mediastinal and hilar lymph node enlargement

  • Pleural effusion is not seen in COVID-19 patients

Bai et al. [59] | Chest CT

  • Above average specificity and mid-level sensitivity

  • Presence of peripheral distribution

  • Presence of ground glass opacity

  • Existence of vascular thickening

Shi et al. [60] | Chest CT

  • Bilateral distribution

  • Subpleural effusion

  • Ground-glass opacities along with air filled bronchi

  • Not well-defined margins

  • Dominance is observed in the lower right lung lobe

Zhao et al. [61] | Chest CT

  • Presence of ground-glass opacities (GGO) (about 86.1%)

  • Combination of consolidation and GGO observed (64.4%)

  • Visible vascular expansion in the lung lesion (71.3%).

Hani et al. [62] | Chest CT

  • No trace of centrilobular nodules

  • Absence of mucoid impactions when there is no superinfection

  • Peripheral distribution

  • Fewer common lymphadenopathy and pleural effusion

  • Increase in size and number of GGO’s with multifocal consolidation

  • Septal thickening

  • Crazy-paving pattern

Salehi et al. [31] | Chest CT

  • Bilateral multilobe ground glass opacity (GGO)

  • Posterior/peripheral distribution seen in lower lung lobes or in the centre lung lobe

  • Consolidative opacities superimposed on GGO

Bonadia et al. [26] | Lung/thoracic ultrasound

  • Regular/irregular pleural line is observed along with artifacts that are vertical and non-confluent

  • Also, irregularity in pleural lines is observed along with multiple artifacts that are vertical and confluent and consolidations that are subpleural

  • Densely expanded areas of white lung associated with larger or normal consolidations

Xing et al. [27] | Lung/thoracic ultrasound

  • Abnormalities observed in consolidation patterns, B lines and pleural line

  • Predominant bilateral distribution seen in posterior part of the lungs

Volpicelli et al. [28] | Lung/thoracic ultrasound

  • Multiple forms of B Lines including coalescent and separate.

  • Pleural line that is fragmented or irregular

  • Smaller peripheral consolidations

Sofia et al. [29] | Lung/thoracic ultrasound

  • B lines shown as diffused and multifocal in nature

  • Patterns indicating a white lung condition

  • Consolidations in the lung

Table 2, continued

Authors | Image type | COVID-19 pneumonia features
Aggeli et al. [30] | Lung/thoracic ultrasound

  • Irregularly positioned multiple B lines

  • Smaller subpleural consolidations

Sultan et al. [33] | Lung/thoracic ultrasound

  • Predominant posterior and bilateral distribution

  • Multiple focal and/or diffuse B lines, with some areas displaying thickened subpleural interlobular septa

  • Irregularly shaped and thick pleural lines along with spread out discontinuities

  • Subpleural consolidations associated with localised and discrete pleural effusion

  • Inflammatory lung lesions represented as avascular in images from Colour Doppler

  • Alveolar consolidation, with either static or dynamic air bronchogram indicates progressive and severe case of the disease.

  • During recovery stage bilateral A-lines reappeared and aeration was restored

3.6Lung ultrasound image analysis for COVID-19 diagnosis

Chest CT scans are highly recommended as an alternative to RT-PCR testing for coronavirus because of their high sensitivity and their ability to detect COVID-19 traces even when RT-PCR gives false negative results. But the price and the huge size of CT scan machines make them unavailable outside hospital settings. This paves the way to finding a portable device that does not compromise on imaging quality. Lung/thoracic ultrasound [26, 27, 28, 29, 30, 31, 32] has been considered for detecting COVID-19 infection due to its portable nature. The abnormalities found on lung ultrasound primarily include pleural line changes, consolidation and B-lines, with bilateral involvement and a prevalent distribution in the posterior portion of the infected patient's lungs. The composition of consolidation regions and the various B-line densities varied in parallel with the severity of the infection. Diffuse bilateral interstitial pneumonia, displayed as patchy, asymmetrically distributed lesions in the periphery of the lungs, indicates the presence of coronavirus and can be effectively recognized under ultrasound analysis. Lung ultrasound images can also depict ground-glass opacity (GGO) alternating with crazy-paving patterns as well as consolidations. However, lung ultrasonography [33] fails to identify deep lesions in the lungs because the transmission of ultrasound waves is obstructed by the aerated lungs.

Nevertheless, lung ultrasound images can be considered an important tool for identifying and tracking the progression of lung lesion abnormalities indicating the presence of COVID-19 pneumonia, because of their cost-effective, flexible and radiation-free nature.

4.Coronavirus pneumonia

Pneumonia [34] is a medical condition caused by viruses, bacteria or fungi that involves inflammation of the lungs and blockage of the oxygen supply to the lungs, which can eventually lead to breathlessness and finally death. Viral pneumonia differs from bacterial pneumonia in terms of symptoms, treatment, and diagnosis. Viral pneumonia usually appears as an infection resulting from viruses such as coronavirus, adenovirus, influenza, parainfluenza, and respiratory syncytial virus (RSV). Antibiotics are effective in treating bacterial pneumonia, but this medication is ineffective against viral pneumonia. Fungal pneumonia usually develops when a spore enters the lungs and begins to multiply in the infected person; people with weak immune systems or chronic underlying health conditions are the most vulnerable to it. COVID-19 pneumonia [35] is caused by a virus belonging to the family Coronaviridae and cannot be treated with antibiotics. Unfortunately, even existing viral pneumonia vaccines are not effective against coronavirus. Since most of the physiological symptoms are common to other types of viral pneumonia, distinguishing COVID-19 from the other types has become a challenging task. Recent work has revealed that chest imaging can be extremely helpful in differentiating COVID-19 pneumonia from the others.

Table 1 [36, 37, 38, 39] describes the types of pneumonia, their symptoms, and the predictors for their diagnosis. Table 2 gives an overview of significant findings by researchers in distinguishing COVID-19 from other viral pneumonia based on medical imaging reports.

5.AI in medical imaging

5.1Medical image diagnosis workflow

Medical images of the lungs captured using imaging techniques such as X-ray, CT and ultrasonography have been considered complementary measures in diagnosing COVID-19 pneumonia infected patients. An imaging-based diagnosis workflow basically includes three stages, namely the scan preparation stage, the image acquisition stage and the disease diagnosis stage. In the preparation stage, the patient is assisted by a technician to prepare for the scan. During the image acquisition stage, imaging modality machines capture and acquire the X-ray or CT images, with the necessary image reconstruction. In the final stage, the acquired images are analyzed for diagnosis. Computer-aided image analysis [40] comprises segmentation, feature extraction and classification. However, analysis of medical images for COVID-19 classification involves a radiologist, and the demand for radiologists increases as COVID-19 infection grows at a rapid rate. This puts medical professionals at a higher risk of contracting the virus and escalates the pressure to perform diagnosis in considerably less time. Employing AI-powered [41, 42] contactless diagnosis systems is therefore very much needed to avoid severe risks to healthcare professionals, lessen their burden and accelerate the diagnosis process. In this section we focus on the automation of COVID-19 related image analysis and diagnosis.
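
To make the three-stage analysis concrete, the following minimal Python sketch (all function arguments are hypothetical placeholders, not components from any cited work) shows how segmentation, feature extraction and classification can be chained into a single diagnosis routine.

```python
import numpy as np

def diagnose(image: np.ndarray, segmenter, extractor, classifier) -> str:
    """Toy imaging-based diagnosis pipeline: segment -> extract features -> classify.

    `segmenter`, `extractor` and `classifier` are placeholder callables standing in
    for the stages described above; any concrete implementation (thresholding,
    U-Net, hand-crafted descriptors, SVM, CNN, ...) could be plugged in.
    """
    lung_mask = segmenter(image)             # stage 1: delineate lung region/lesions
    features = extractor(image, lung_mask)   # stage 2: shape/texture descriptors
    return classifier(features)              # stage 3: e.g. "COVID-19" vs. "normal"
```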

5.1.1Image segmentation

Image segmentation [43] is the method of dividing an image into several segments and detecting objects and margins in images. It delineates the regions of interest in the lung images, such as the lung lobes, lesions, infected areas and bronchopulmonary segments, for further assessment and quantification. The segmentation process can be manual, semi-automatic or fully automatic. Manual segmentation is time consuming and suitable only for a small dataset because it requires experts to detect regions of interest in the images and accurately annotate each pixel. In semi-automatic segmentation, automated algorithms are used for accurate segmentation with only some user interaction at certain levels [44]. There is no user interaction at any level in fully automatic segmentation techniques. Current methods used for segmentation [45] are thresholding-based, region-based, shape-based, neighboring anatomy-guided, machine learning and deep learning methods. Segmentation in COVID-19 cases is grouped into lung region-oriented segmentation and lung lesion-oriented segmentation. In lung region segmentation, the whole lung and the lung lobes are separated from other unnecessary background details in CT or X-ray images. The lung lesion-oriented method concentrates on separating lesions in the lung from the lung region. As the size of lesions may be extremely small and their patterns may vary, lung lesion segmentation is a challenging task. The projection of ribs onto soft tissues in a 2D X-ray image makes the segmentation of X-ray images particularly challenging. Segmentation is considered the most important prerequisite step in the COVID-19 image analysis process. The attention mechanism, which is supposed to be effective in localization tasks [41], can be adopted in dealing with X-ray images for COVID-19 diagnosis.
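
As a rough illustration of a thresholding-based approach, the sketch below builds a lung mask from a single grayscale CT slice with scikit-image (Otsu thresholding plus morphological clean-up). It is only a didactic baseline under simplifying assumptions; published COVID-19 pipelines typically rely on learned models such as U-Net.

```python
import numpy as np
from skimage import filters, measure, morphology

def segment_lungs(ct_slice: np.ndarray) -> np.ndarray:
    """Rough lung-region mask from one grayscale CT slice (illustrative only)."""
    # Lungs are air-filled and appear dark: keep pixels below an Otsu threshold.
    mask = ct_slice < filters.threshold_otsu(ct_slice)
    # Remove small speckles and fill holes left by vessels or lesions.
    mask = morphology.remove_small_objects(mask, min_size=500)
    mask = morphology.remove_small_holes(mask, area_threshold=500)
    # Keep the two largest connected components (left and right lung).
    labels = measure.label(mask)
    regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)
    lung_mask = np.zeros_like(mask)
    for region in regions[:2]:
        lung_mask[labels == region.label] = True
    return lung_mask
```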

5.1.2Feature extraction

Feature extraction [46] is the analysis of images to identify and extract the most prominent features representing the categories of different objects and images, and it is an essential part of image analysis. Shape descriptor features are calculated from an object's contour [47, 48]. The texture of an image is defined by the spatial association of the values of the pixels that make it up; variations in the local texture of the image produce variations in local spatial frequency [49]. Texture analysis identifies the texture primitives and extracts essential features from them to construct spatial or statistical distributions of the primitives based on the identified features. Parametric mapping usually identifies functionally dedicated responses; it is mainly used to characterize functional anatomy and variation related to a particular disease. Lately, researchers have worked towards employing machine learning and deep learning techniques for a better feature extraction process in COVID-19 diagnosis cases [50, 51, 52].
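
A minimal sketch of texture feature extraction, assuming an 8-bit image patch and scikit-image (version 0.19 or later for the graycomatrix/graycoprops names): it computes gray-level co-occurrence matrix (GLCM) descriptors of the kind that classical classifiers consume.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def texture_features(patch: np.ndarray) -> np.ndarray:
    """GLCM texture descriptors for a 2-D uint8 image patch (e.g. a lung region)."""
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over the chosen pixel distances and angles.
    return np.array([graycoprops(glcm, p).mean() for p in props])
```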

5.1.3Image classification

Classification of COVID-19 patients based on medical image diagnosis involves identifying the abnormalities that are related to coronavirus pneumonia. Image classification [53] is a supervised learning problem that involves categorizing the segmented input CT, X-ray or ultrasound images into various predefined disease classes, or sometimes performing binary classification of whether the disease is present or not. Segmentation and feature extraction [54] form the basic pre-processing steps before classification. After segmentation, in the feature extraction phase, shape- and texture-based features are extracted, which may then be passed to any classifier model used to classify the images. Imaging modalities are widely performed to provide evidence for radiologists owing to their quick acquisition. However, chest CT consists of numerous slices, so the duration of diagnosis might be longer. Also, COVID-19 pneumonia has indicators comparable to other types of viral pneumonia, which necessitates skilled and experienced radiologists for an accurate diagnosis and makes COVID-19 image diagnosis a crucial and challenging task. Thus, AI-supported diagnosis of medical images is extremely desirable.

5.2AI Approaches

5.2.1Machine learning approaches

Machine learning (ML) [55, 56] is the ability of computers to learn a task without manual programming, by learning instead from experience or historical data.

Machine learning is extremely helpful in medical practices [63] that depend on imaging, including radiology, radiation therapy and oncology. Machine learning approaches are applicable to image analysis components such as segmentation and classification to automate the image analysis process. The categories of machine learning are supervised learning (using a labelled dataset), unsupervised learning (unlabelled dataset) and reinforcement learning. Some of the supervised learning algorithms [64] are K-Nearest Neighbors [65], Logistic Regression [66], Decision Trees [67], Linear Regression [68], Support Vector Machines [69], Naïve Bayes [70] and Artificial Neural Networks [71]. As supervised learning methods require a labelled dataset, the labor-intensive and time-consuming data labelling process is considered their major drawback. On the other hand, an unsupervised learning algorithm takes unlabelled datasets as input and works towards finding similar patterns in the data, grouping instances with similar traits into groups or clusters. These include algorithms such as K-means clustering [72], hierarchical clustering [73], DBSCAN [74], Gaussian mixture modelling and ISODATA (iterative self-organizing data) [75, 76]. Automated image segmentation [77] splits the images based on visibly dissimilar regions. Most ML-based segmentation techniques are supervised and need well-annotated training data. Also, the huge variation in color, shape and texture across patient images poses additional challenges to automated segmentation algorithms [78]. Variations in the images are also caused by noise and inconsistency in the data acquisition process. These variations have limited the application of machine learning (ML) based approaches, as they lack global applicability for most cases. Besides, manual feature engineering techniques are time intensive and not easily adaptable to new information. Machine learning techniques like KNN, neural networks and SVM have been applied in the past for the classification of medical images [40, 79, 80]. The use of traditional machine learning methods for medical image classification is limited by the time-consuming feature extraction/selection process, which is highly variable from one application to another [81].
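
A minimal scikit-learn sketch of such a classical feature-based pipeline is given below; the feature matrix is synthetic stand-in data (in practice it would hold hand-crafted shape/texture descriptors per image), and the classifier choices and parameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: in practice X would contain per-image shape/texture descriptors
# and y the image labels (e.g. 0 = non-COVID pneumonia/normal, 1 = COVID-19).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)  # scale features, then fit classifier
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```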

5.2.2Deep learning approaches

Deep learning in healthcare [82, 83, 84, 85] is a promising technological advancement that may revolutionize AI in the health sector. Deep learning [86] methods employ automatic feature engineering and learn more complex and sophisticated patterns in the data than conventional machine learning techniques. This is an advantage in the field of medical image analysis, as manual feature determination might take a long time. Application of deep learning algorithms upgrades the efficiency, accuracy and quality of diagnosis and reduces its duration. The convolutional neural network (CNN) [87] is the most widely used deep learning model for image classification. The majority of deep learning models are applied to medical image types like CT and MRI for applications such as segmentation, classification and diagnosis [88]. The diagnostic performance of deep learning models [89] has proven to be equivalent to that of medical professionals. Deep neural networks [90] are artificial neural network [71] structures with many hidden layers and automatic feature extraction ability. The additional layers in a DNN allow modeling of complex data by composing representations from lower to upper layers. Research is in progress on several deep learning models, including the deep neural network, deep autoencoder, convolutional neural network, deep belief network, deep convolutional extreme learning machine, deep Boltzmann machine and recurrent neural network (RNN). In particular, convolutional neural networks (CNNs) have been widely accepted and applied for segmentation and classification [91] of natural images. This accomplishment is mainly due to automatic feature extraction combined with substantial advances in computational power. However, this automatic feature extraction in deep learning is heavily dependent on the availability of a huge training dataset. Recent years have seen a tremendous rise in deep learning applications owing to CPUs and GPUs with high computational power, which have greatly reduced training and execution time, and to the generation of huge volumes of big data [92]. Convolutional neural networks are also used in medical image analysis to augment the performance of computer-aided medical image analysis processes.

5.2.3Convolution neural network

A CNN [93] is a deep learning model for handling images and is intended to learn spatial hierarchies of features, from low level to high level, adaptively and automatically. CNNs [94] perform dimensionality reduction while preserving local image relations, which is significant in capturing feature relationships in images; this reduces the number of parameters to be computed, increasing the computational efficiency of CNN models. CNNs can accept and process both 2-D and 3-D images with minor changes. This is an added advantage for designing automated systems for hospitals, as medical images can be 2D or 3D: X-rays are 2D, while CT or MRI are 3D. CNN architectures [95] such as 2D U-Net, 3D U-Net and multichannel 2D U-Net are widely used in the medical image segmentation process because they do not rely on user-defined image features and can instead determine their own features.
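
The following PyTorch sketch shows a deliberately small CNN of this kind for binary classification of grayscale chest X-rays; the architecture and sizes are illustrative assumptions, not a model from the works reviewed here.

```python
import torch
import torch.nn as nn

class SmallCXRNet(nn.Module):
    """Minimal convolution/pooling/fully-connected stack for 1-channel X-ray images."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to 1x1 regardless of input size
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                       # (N, 64, 1, 1)
        return self.classifier(torch.flatten(x, 1))

# A batch of four 224x224 grayscale X-rays produces one logit per class.
logits = SmallCXRNet()(torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```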

Table 3

Deep learning and COVID-19 diagnosis

Author | Image type/application | Models used | Results | Limitations
Varshni et al. [98] | Chest X-ray/ feature extraction + classification | Various combinations of CNN based feature extractor with supervised classifier | DenseNet (feature extractor) + SVM (classifier) better classified the pneumonia affected images from the non-pneumonia images | Only frontal chest X-ray images were used in the study, but lateral view X-ray images were better for diagnosis
Kumar et al. [99] | Chest X-ray/ feature extraction + classification | Ensemble-based DeQueezeNet model including SqueezeNet1.0 and DenseNet121 | Accuracy of 96.15% in identification of COVID-19

  • Imbalanced dataset

  • limited dataset

Elaziz et al. [100] | Chest X-ray/ feature extraction + classification | Feature extraction using Manta ray foraging optimization (MRFO), which is based on differential evolution (DE); KNN for classification

  • Accuracy 0.9809

  • Recall 0.9891

  • Precision 0.9891

Limited dataset degrades performance
Mohammad et al. [101] | Chest X-ray/ feature extraction + classification

  • Feature selection by inceptionV3

  • 12 supervised classifiers used

SVM classifier has best performance | Limited dataset
Apostolopoulos et al. [50] | Chest X-ray/ feature extraction + classification | MobileNet V2 | Accuracy of 99.18%, sensitivity of 97.36% and specificity of 99.42% | Limited and imbalanced dataset
Wang et al. [102] | Chest CT/ segmentation + classification

  • Segmentation using 3D-U-net

  • Classification using 3D-resnet with prior attention mechanism

  • 93.3% of Accuracy

  • 87.6% of Sensitivity

  • 95.5% of Specificity

  • Fail to detect lung lesions in early stages of COVID-19 and misclassify the normal scans

  • Segmentation of lung regions are solely based on the information generated by 3D-UNet

  • Weight factor used is fixed

Singh et al. [103] | Chest CT/ segmentation + classification | Multi objective differential evolution (MODE) based CNN

  • Accuracy 1.9789%

  • F-measure 2.0928%

  • Sensitivity 1.8262%

  • Specificity 1.6827%

  • Kappa statistics 1.9276%

Limited dataset
Kang et al. [104] | Chest CT/ feature extraction + classification

  • Extraction of lung lobes, lesions and pulmonary segments is done using V-Net model

  • Latent representation of features is used

  • Multiview representation machine learning classifier (group of backward neural networks)

  • Accuracy 95.5%

  • Sensitivity 96.6%

  • Specificity 93.2%

  • Binary classification only (COVID-19 and CAP pneumonia)

  • Clinical characteristics can be integrated to the same framework for better diagnosis

Hasan et al. [52] | Chest CT/ feature extraction + classification | Q-deformed entropy + deep feature extractor + LSTM classifier | Accuracy 99.68% | Limited datasets
Roy et al. [105] | Lung ultrasound/ segmentation + classification | CNN + Reg-STN + SORD

  • Pixel wise segmentation achieved 96% accuracy and a binary Dice score of 0.75

  • Classification based on videos

  • 61% (F1 score)

  • 70% (precision)

  • 60% (recall)

  • Frame based classification

  • F1 score 65.9%

Need larger, heterogenous and balanced dataset

Table 3, continued

Author | Image type/application | Models used | Results | Limitations
Hu et al. [106] | Chest CT/ segmentation + classification

  • Weakly supervised multiscale learning model

  • U-net based segmentation network

  • CNN classifier

  • Accuracy 96.2%

  • Precision 97.3%

  • Sensitivity 94.5%

  • Specificity 95.3%

  • AUC 0.970

  • Not discriminative enough to separate community acquired pneumonia from COVID-19 pneumonia

  • Training was performed on all individual slices (images)

Fan et al. [107] | Chest CT/ segmentation + classification | CNN with implicit reverse attention and explicit edge-attention modules

  • Sensitivity 0.870

  • Specificity 0.974

  • Focuses on lung infection segmentation but in clinical procedure it is required to first classify COVID-19 patients and then perform segmentation

  • While dealing with infected slices the accuracy of the multiple class labelling framework reduces

Pathak et al. [108] | Chest CT/ segmentation + classification | Memetic adaptive differential evolution (MADE) with deep bidirectional long short-term memory network with mixture density network (DBM) model

  • Accuracy 1.7912%

  • AUC 1.5256%

  • F-measure 1.8372%

  • Sensitivity 1.9272%, specificity 0.4382%, recall 1.6382%, and precision 1.5256%, respectively

Limited dataset
Amyar et al. [109] | Chest CT/ segmentation + classification

  • 2D U-Net for image construction and infection segmentation

  • Fully connected CNN for classification

Accuracy 86%

  • Lack of annotated data

  • Heterogenous data

Farid et al. [51] | Chest CT/ feature extraction + classification | Hybrid feature extraction model + stacked hybrid classifier | Accuracy 96.07% | Limited datasets

Figure 4.

Category wise paper references.


Popularly used convolutional neural networks for detection and classification [96] are AlexNet, ResNet50 and GoogLeNet. The components [97] of a CNN are convolution layers, pooling layers and fully connected layers. The convolution layer is the fundamental part of the CNN and performs feature extraction, consisting of a linear convolution operation followed by a non-linear activation operation: the convolution results are passed through a nonlinear activation function. A down-sampling operation in the pooling layer provides invariance to small translations and distortions while decreasing the dimensionality of the feature maps and the number of learnable parameters.

The output of the final convolution or pooling layer is usually flattened into a 1D array of feature values and connected to fully connected layers, in which every input is connected to every output by a learnable weight. The number of output nodes in the final fully connected layer equals the number of desired classes, and every fully connected layer is followed by a nonlinear activation function. The activation function of the final layer [97] varies with the type of task: for binary classification the sigmoid function is used, for multiclass single-label classification the softmax is used, for multiclass multi-label classification the sigmoid is used, and for regression to continuous values the identity function is used. As discussed in the earlier sections, higher accuracy of COVID-19 diagnosis is achieved by combining the RT-PCR test with chest CT, chest X-ray or lung ultrasound images, and integrating automation into these methods using a deep learning framework provides faster diagnostic systems.
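
The output-head choices described above can be written down directly; this PyTorch sketch (layer sizes and class counts are arbitrary examples) shows one head per case.

```python
import torch
import torch.nn as nn

features = torch.randn(8, 64)   # activations of the last fully connected hidden layer

# Binary classification: a single logit squashed by a sigmoid.
binary_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

# Multiclass, single-label (e.g. normal / other pneumonia / COVID-19): softmax.
multiclass_head = nn.Sequential(nn.Linear(64, 3), nn.Softmax(dim=1))

# Multiclass, multi-label (several findings may co-occur): independent sigmoids.
multilabel_head = nn.Sequential(nn.Linear(64, 5), nn.Sigmoid())

# Regression to a continuous value (e.g. a severity score): identity (raw linear output).
regression_head = nn.Linear(64, 1)

print(binary_head(features).shape, multiclass_head(features).sum(dim=1))
```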

Table 3 summarizes related work by different authors on applying deep learning to medical image analysis for COVID-19 diagnosis, the results achieved and the limitations of each work.

5.3Addressing problem of limited dataset

Enormous amounts of good-quality training data are essential for deep learning [90] models to achieve higher accuracy. However, the unavailability of a balanced dataset is the major obstacle to successfully applying deep learning in medical imaging. Generating large amounts of annotated medical imaging data is an extremely daunting and time-consuming job, and annotation may not be possible at all in the absence of competent experts. Another quite common and key issue in the health sector is imbalanced data, because rare infections like COVID-19 are not clearly represented in the datasets. As discussed in the earlier sections, diagnosis of coronavirus is more effective if radiology imaging is combined with clinical lab tests. The ongoing research on using radiology images for COVID-19 diagnosis and applying deep learning models to enhance their performance is limited by the non-availability of data related to affected COVID-19 patients, which may ultimately lead to overfitting and degrade the performance of the model. As deep learning models need huge amounts of data to give accurate results, researchers have tried various methods such as transfer learning, data augmentation and generative adversarial networks (GANs) to handle the issue of limited and imbalanced datasets. Each of these techniques is discussed in the next sections.

Table 4

Limited dataset related work

Author | Image type | Technique used | Model | Results
Pereira et al. [118] | Chest X-ray | Resampling | Early fusion with combination of BSIF, LPQ and EQP features resampled using SMOTE + TL | F1 score (0.89) in COVID-19 detection
Apostolopoulos et al. [119] | Chest X-ray | Transfer learning

  • VGG19

  • MobileNet v2

  • VGG19 resulted in better accuracy whereas MobileNet v2 outperformed VGG19 in terms of specificity

  • Accuracy is 96.78%

  • Specificity is 96.46%

  • Sensitivity is 98.66%

Loey et al. [120] | Chest X-ray | GAN + transfer learning

  • GAN for synthetic image generation

  • AlexNet, GoogleNet, Resnet18 for classification

4 classes:

  • Alexnet: Accuracy 66.67%

  • Googlenet: Accuracy 80.56%

  • Resnet18: Accuracy 69.46%

3 classes:

  • Alexnet: Accuracy 85.19%

  • Googlenet: Accuracy 81.48%

  • Resnet18: Accuracy 81.48%

2 classes:

  • Alexnet: Accuracy 100%

  • Googlenet: Accuracy 100%

  • Resnet18: Accuracy 100%

Khalifa et al. [121] | Chest X-ray | GAN + transfer learning

  • GAN for pre-processing and generating new images

  • Resnet18 for classification

2 classes: Accuracy 99.00%
Waheed et al. [122] | Chest X-ray | GAN + CNN | Auxiliary classifier generative adversarial network (ACGAN) + CNN | Accuracy 95%
Hu et al. [123] | Chest CT | Classical data augmentation | ShuffleNet V2 | AUC 0.9689
Loey et al. [124] | Chest CT | Classical data augmentation + GAN + transfer learning | Data augmentation + conditional GAN (CGAN) for synthetic image generation + Resnet50 | Accuracy 81.4%
Nishio et al. [125] | Chest X-ray | Conventional data augmentation + transfer learning | Conventional data augmentation method and mix-up with layer freezing + VGG16

  • 3 classes

  • Accuracy 83.6%

  • Sensitivity 90%

Ucar and Korkmaz [126] | Chest X-ray | Multiscale offline data augmentation + transfer learning | Deep Bayes-SqueezeNet

  • 3 classes

  • Accuracy 98.3 %

Ahammed et al. [94] | Chest X-ray | Random under-sampling | CNN | Accuracy 94.03%
Abbas et al. [127] | Chest X-ray | Data augmentation + class decomposition + transfer learning | DeTraC deep ResNet18 | Accuracy of 92.5%, sensitivity of 65.01%, and specificity of 94.3%

Table 5

Research gaps and proposed solution

Identified research gap | Future direction

  • 1. The related work was not carried out on standard datasets, so the results obtained vary and cannot be accepted as standard results.

  • Aggregate COVID-19 related medical images from multiple sources to form standardized (benchmark) datasets, uploaded to a public repository accessible to researchers across the globe.

  • 2. Most researchers have worked on chest X-ray images, even though chest CT provides better diagnosis than X-ray, and ultrasound is preferable to both because it is radiation free, especially for pregnant patients. This drawback is due to the unavailability of datasets.

  • Focus on Dataset creation of CT scan and ultrasound scan images of COVID-19 infected persons.

  • Employ GAN networks to augment CT scan and ultrasound scan images.

  • 3. Most work is focused on supervised techniques, which are heavily dependent on annotated data, and the lack of annotated data affects model performance.

  • Use unsupervised or semi-supervised approaches to solve limited annotated dataset problems.

  • Meta Learning approaches such as few shots learning, and one-shot learning can be explored to address limited dataset problems.

  • 4. Deep learning models are uninterpretable, and this poses a major challenge in the medical field, where doctors must be able to explain their diagnosis.

  • Building explainable and interpretable deep learning models has shown tremendous scope for research in this direction.

5.3.1Transfer learning

Transfer learning [97, 110] is an effective method of taking a pretrained model, usually trained on a huge dataset such as ImageNet [111], and re-using it for a chosen task. The idea behind transfer learning is that knowledge acquired while solving one problem can be utilized to solve a different but related problem. This gives the advantage of applying learned generic features to several small-dataset task domains. Some of the publicly available pretrained models are ResNet, VGG, AlexNet, DenseNet and Inception. Fine tuning and fixed feature extraction are the two ways of using pretrained models. In the fixed feature extraction method, the fully connected layers are removed from a network pretrained on some huge dataset, while the convolution and pooling layers are retained as the convolutional base, which acts as a fixed feature extractor. On top of this fixed feature extractor, any conventional machine learning classifier or a new series of fully connected layers can be added. This simplifies training by limiting it to the added classifier on the dataset of the chosen task. Due to the dissimilarity between ImageNet images and medical images, this approach is seldom used for medical image diagnosis. On the other hand, the fine-tuning approach has been widely accepted in medical image diagnosis. In this method, in addition to replacing the fully connected layers, all the kernels in the convolution and pooling layers are also fine-tuned using backpropagation. In some situations, a few of the earlier layers can remain unchanged while the rest of the deeper layers are fine-tuned to suit the chosen task domain, because the earlier layers capture generic features and the higher layers are more specific to the domain and task. CNN-based classification models have proved to be good feature extractors, which is evident in the performance of most transfer learning approaches. To improve performance, the models can be re-trained with fresh labelled datasets, and the results, combined with other existing architectures, can further boost performance.
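
A hedged PyTorch/torchvision sketch of both strategies is shown below (assuming torchvision 0.13 or later for the weights API; the three-class chest X-ray task is an illustrative assumption, not taken from any cited study).

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 adapted to a hypothetical 3-class chest X-ray task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Fixed feature extraction: freeze the convolutional base ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace the final fully connected layer (the new layer is trainable by default).
model.fc = nn.Linear(model.fc.in_features, 3)

# Fine-tuning: additionally unfreeze the deepest residual block so that its kernels
# are updated by backpropagation while earlier, more generic layers stay frozen.
for param in model.layer4.parameters():
    param.requires_grad = True
```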

5.3.2Data augmentation

Data augmentation [112, 113] is a technique to address the problem of limited and imbalanced datasets by synthetically generating additional images. Data augmentation artificially increases the quantity of training data using oversampling or data-warping [114] techniques. In data warping, transformations are applied in data space, while in oversampling synthetic samples are generated in feature space. Oversampling augmentations such as feature-space augmentation, image mixing and generative adversarial networks (GANs) generate artificial instances that are included in the training set. The Synthetic Minority Over-sampling Technique (SMOTE) [114] was applied to solve class imbalance problems in handwritten digit recognition tasks and has also been used extensively on medical datasets with a significant minority class. In SMOTE, a fresh artificial sample is formed by picking a random point in feature space along the line segment joining a minority-class sample and one of its k nearest neighbours of the same class. Classical transformations [115] apply a combination of affine transformations to the training data, such as rotation, cropping, zooming and histogram-based methods. Though the classical data augmentation [116] approaches, based on a combination of color modification and affine image transformations, are easy, quick and effective, they are vulnerable to adversarial attacks and fail to create fresh visual structures in the images.
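
As a small illustration of the classical (affine plus intensity) transformations mentioned above, the following torchvision pipeline could be applied on the fly to training X-rays; the parameter values are arbitrary examples, not tuned settings from any cited study.

```python
from torchvision import transforms

# Classical augmentation for grayscale chest X-rays (applied to PIL images at load time).
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomRotation(degrees=10),                 # small random rotations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop + zoom
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # intensity variation
    transforms.ToTensor(),
])
```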

5.3.3General adversarial networks

The two components of a generative adversarial network (GAN) are the generator and the discriminator. The generator produces synthetic data from a random noise vector, while the discriminator distinguishes between original data and the generated artificial data. The input to the generator [117] is a fixed-length random vector from which it generates artificial samples in the chosen domain; the discriminator model accepts any sample, original or artificial, and predicts its class label. GANs are efficient at synthesizing images of a given domain from scratch, and combining them with other methods can yield desirable results. Generally, the input to a GAN is a random noise vector, but additional parameters can be added to the input signal to permit a variation or adaptation in the network output; conditional generative adversarial networks [113] are GANs that accept such an additional input. Several researchers [120, 121, 122] used generative adversarial networks (GANs) to augment the COVID-19 dataset and combined them with transfer learning models to construct better classifier models for detecting COVID-19 from radiology images. The experimental results claimed that the GAN improved the robustness of the model and overcame the overfitting problem. GANs along with fine-tuned deep transfer learning models solved the problem of limited and imbalanced datasets and improved the classifier models' accuracy to a great extent. However, GANs have a few limitations, such as the need for high computational power, the lack of a notion of perspective, problems with counting and trouble coordinating global structure [128]. Table 4 summarizes the related work by many researchers to address the limited dataset problem. Figure 4 graphically represents the category-wise distribution of the papers referred to in this review.
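
To make the generator/discriminator pairing concrete, here is a minimal fully connected GAN skeleton in PyTorch for flattened 64x64 grayscale images; the layer sizes are illustrative, the training loop is omitted, and published COVID-19 augmentation work typically uses convolutional or conditional variants instead.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_PIXELS = 100, 64 * 64   # noise size and flattened 64x64 grayscale image

# Generator: random noise vector -> synthetic image (pixel values in [-1, 1]).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: image -> probability that the sample is real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(16, LATENT_DIM)            # a batch of noise vectors
fake_images = generator(z)                 # 16 synthetic flattened images
realness = discriminator(fake_images)      # what the discriminator thinks of them
print(fake_images.shape, realness.shape)
```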

6.Open research challenges

The review highlights that using deep learning to analyze medical images for COVID-19 diagnosis is still in its infancy. Although several researchers have headed in this direction, there are still many issues that need attention, as listed in Table 5.

7.Conclusion

The increasing number of positive coronavirus cases and the alarming rise in mortality have put countries in a state of emergency to end this pandemic. One of the most effective ways to deal with it is to identify people infected with the virus so that they can be isolated, stopping community transmission. In this paper, we have reviewed various tests for coronavirus diagnosis and conclude that the RT-PCR test along with medical image analysis is an effective way of correctly diagnosing the disease. Automating medical image analysis using deep learning models not only reduces the burden and risk to medical professionals but also speeds up the diagnosis process. However, the performance of deep learning models is restricted by the unavailability of relevant datasets. Even though a few researchers have found solutions to address this issue, most of the work was carried out on CXR images. From the review it was concluded that RT-PCR combined with CT or ultrasound images is the best choice for COVID-19 diagnosis, because most fine details of the chest and lungs are captured better in CT and ultrasonography than in CXR; however, CXR is cheaper, portable and faster, and ultrasound has the upper hand in terms of zero exposure to radiation, which is a matter of concern in CT and CXR imaging modalities. This review summarizes the related work on applying deep learning models to coronavirus diagnosis and the challenges faced, and highlights future directions of research towards an accurate, efficient, and fast automatic COVID-19 diagnosis model, which is the need of the hour.

References

[1] 

Segars J, Katler Q, McQueen DB, Kotlyar A, Glenn T, Knight Z, et al. Prior and novel coronaviruses, coronavirus disease 2019 (COVID-19), and human reproduction: What is known? Fertility and Sterility. (2020) ; 113: (6): 1140-1149. doi: 10.1016/j.fertnstert.2020.04.025.

[2] 

Jin YH, Cai L, Cheng ZS, Cheng H, Deng T, Fan YP, et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version). Military Medical Research. (2020) ; 7: (4): 1-23. doi: 10.1186/s40779-020-0233-6.

[3] 

Coronavirus disease (COVID-19) [Internet]. World Health Organization. [cited 2020 Nov] Available from: https://www.who.int/health-topics/coronavirus.

[4] 

Coronavirus cause: Origin and how it spreads[Internet]. Medical News Today; 2020 [updated 2020 Jun 12; cited 2020 Nov]. Available from: https://www.medicalnewstoday.com/articles/coronavirus-causes.

[5] 

Singhal T. A review of coronavirus disease-2019 (COVID-19). Indian J Pediatr. (2020) ; 87: (4): 281-286. doi: 10.1007/s12098-020-03263-6.

[6] 

Diagnosis and treatment protocol for COVID-19 (trial version 7)[Internet]. National Health Commission of the People’s Republic of China; 2020[updated 2020; cited 2020 Nov]. Available from: http://en.nhc.gov.cn/2020-03/29/c_78469.htm.

[7] 

COVID-19 Testing overview[Internet]. Centres for Disease Control and Prevention; 2020[updated 2022 Feb 1; cited 2020 Nov]. Available from: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html.

[8] 

Coronavirus (COVID-19) Testing[Internet]. Testing.com; 2020[updated 2021 Nov 9; cited 2020 Nov]. Available from: https://labtestsonline.org/tests/coronavirus-covid-19-testing.

[9] 

Coronavirus testing basics[Internet]. U.S. Food and Drug Administration; 2020[updated 2022 Feb 2; cited 2020 Nov]. Available from: https://www.fda.gov/consumers/consumer-updates/coronavirus-disease-2019-testing-basics.

[10] 

Yang Q, Liu Q, Xu H, Lu H, Liu S, Li H. Imaging of coronavirus disease 2019: A Chinese expert consensus statement. Eur J Radiol. (2020) ; 127: : 109008. doi: 10.1016/j.ejrad.2020.109008.

[11] 

Zheng Z, Yao Z, Wu K, Zheng J. The diagnosis of pandemic coronavirus pneumonia: A review of radiology examination and laboratory test. J Clin Virol. (2020) ; 128: : 104396. doi: 10.1016/j.jcv.2020.104396.

[12] 

Use of chest imaging in COVID-19: A rapid advice guide. [Internet]. World Health Organization; 2020 [updated on 2020 Jun 11; cited 2020 Nov]. Available from: https://www.who.int/publications/i/item/use-of-chest-imaging-in-covid-19.

[13] 

Stogiannos N, Fotopoulos D, Woznitza N, Malamateniou C. COVID-19 in the radiology department: What radiographers need to know. Radiography (Lond). (2020) ; 26: (3): 254-263. doi: 10.1016/j.radi.2020.05.012.

[14] 

Shuja J, Alanazi E, Alasmary W, Alashaikh A. COVID-19 open source data sets: A comprehensive survey. Appl Intell (Dordr). (2021) ; 51: (3): 1296-1325. doi: 10.1007/s10489-020-01862-6.

[15] 

Buonsenso D, Pata D, Chiaretti A. COVID-19 outbreak: Less stethoscope, more ultrasound. Lancet Respir Med. (2020) ; 8: (5): e27. doi: 10.1016/S2213-2600(20)30120-X.

[16] 

Shen M, Zhou Y, Ye J, Abdullah Al-Maskri AA, Kang Y, Zeng S, Cai S. Recent advances and perspectives of nucleic acid detection for coronavirus. J Pharm Anal. (2020) ; 10: (2): 97-101. doi: 10.1016/j.jpha.2020.02.010.

[17] 

Lipsitch M, Perlman S, Waldor MK. Testing COVID-19 therapies to prevent progression of mild disease. Lancet Infect Dis. (2020) ; 20: (12): 1367. doi: 10.1016/S1473-3099(20)30372-8.

[18] 

Tahamtan A, Ardebili A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev Mol Diagn. (2020) ; 20: (5): 453-454. doi: 10.1080/14737159.2020.1757437.

[19] 

van Kasteren PB, van der Veer B, van den Brink S, Wijsman L, de Jonge J, van den Brandt A, et al. Comparison of seven commercial RT-PCR diagnostic kits for COVID-19. J Clin Virol. (2020) ; 128: : 104412. doi: 10.1016/j.jcv.2020.104412.

[20] 

Sarkodie BD, Osei-Poku K, Brakohiapa E. Diagnosing COVID-19 from chest X-ray in resource limited environment-case report. Med Case. (2020) ; 6: (2): 135. doi: 10.36648/2471-8041.6.2.135.

[21] 

Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID-19): A pictorial review. Clin Imaging. (2020) ; 64: : 35-42. doi: 10.1016/j.clinimag.2020.04.001.

[22] 

Dong D, Tang Z, Wang S, Hui H, Gong L, Lu Y, et al. The role of imaging in the detection and management of COVID-19: A review. IEEE Reviews in Biomedical Engineering. (2021) ; 14: : 16-29. doi: 10.1109/RBME.2020.2990959.

[23] 

Li M. Chest CT features and their role in COVID-19. Radiol Infect Dis. (2020) ; 7: (2): 51-54. doi: 10.1016/j.jrid.2020.04.001.

[24] 

Miao C, Jin M, Miao L, Yang X, Huang P, Xiong H, et al. Early chest computed tomography to diagnose COVID-19 from suspected patients: A multicenter retrospective study. Am J Emerg Med. (2021) ; 44: : 346-351. doi: 10.1016/j.ajem.2020.04.051.

[25] 

Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology. (2020) ; 296: (2): E32-E40. doi: 10.1148/radiol.2020200642.

[26] 

Bonadia N, Carnicelli A, Piano A, Buonsenso D, Gilardi E, Kadhim C, et al. Lung ultrasound findings are associated with mortality and need for intensive care admission in COVID-19 patients evaluated in the emergency department. Ultrasound Med Biol. (2020) ; 46: (11): 2927-2937. doi: 10.1016/j.ultrasmedbio.2020.07.005.

[27] 

Xing C, Li Q, Du H, Kang W, Lian J, Yuan L. Lung ultrasound findings in patients with COVID-19 pneumonia. Crit Care. (2020) ; 24: : 174. doi: 10.1186/s13054-020-02876-9.

[28] 

Volpicelli G, Gargani L. Sonographic signs and patterns of COVID-19 pneumonia. Ultrasound J. (2020) ; 12: (1): 22. doi: 10.1186/s13089-020-00171-w.

[29] 

Sofia S, Boccatonda A, Montanari M, Spampinato M, D’ardes D, Cocco G, et al. Thoracic ultrasound and SARS-COVID-19: A pictorial essay. J Ultrasound. (2020) ; 23: (2): 217-221. doi: 10.1007/s40477-020-00458-7.

[30] 

Aggeli C, Oikonomou E, Tousoulis D. A reappraisal of the role of transthoracic ultrasound in the era of COVID-19: Patient evaluation through new windows. Hellenic J Cardiol. (2021) ; 62: (2): 180-181. doi: 10.1016/j.hjc.2020.06.003.

[31] 

Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19): A systematic review of imaging findings in 919 patients. AJR Am J Roentgenol. (2020) ; 215: (1): 87-93. doi: 10.2214/AJR.20.23034.

[32] 

Tan G, Lian X, Zhu Z, Wang Z, Huang F, Zhang Y, et al. Use of lung ultrasound to differentiate coronavirus disease 2019 (COVID-19) pneumonia from community-acquired pneumonia. Ultrasound Med Biol. (2020) ; 46: (10): 2651-2658. doi: 10.1016/j.ultrasmedbio.2020.05.006.

[33] 

Sultan LR, Sehgal CM. A review of early experience in lung ultrasound in the diagnosis and management of COVID-19. Ultrasound Med Biol. (2020) ; 46: (9): 2530-2545. doi: 10.1016/j.ultrasmedbio.2020.05.012.

[34] 

Pneumonia [Internet]. National Heart, Lung, and Blood Institute; 2020 [cited 2020 Nov]. Available from: https://www.nhlbi.nih.gov/health/pneumonia.

[35] 

Coronavirus disease (COVID-19) advice for the public: Mythbusters [Internet]. World Health Organization; 2020 [cited 2020 Nov]. Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters.

[36] 

Coronavirus and Pneumonia [Internet]. WebMD; 2020 [cited 2020 Nov]. Available from: https://www.webmd.com/lung/covid-and-pneumonia#1.

[37] 

Pneumonia caused by coronavirus is likely to be more severe than other types of pneumonia [Internet]. The Swaddle; 2020 [updated 2020; cited 2020 Nov]. Available from: https://theswaddle.com/what-is-the-difference-between-covid-and-bacterial-pneumonia/.

[38] 

Pneumonia [Internet]. Cleveland Clinic [cited 2020 Nov]. Available from: https://my.clevelandclinic.org/health/diseases/4471-pneumonia.

[39] 

Kim JE, Kim UJ, Kim HK, Cho SK, An JH, Kang SJ, et al. Predictors of viral pneumonia in patients with community-acquired pneumonia. PLoS One. (2014) ; 9: (12): e114710. doi: 10.1371/journal.pone.0114710.

[40] 

Kim TY, Son J, Kim KG. The recent progress in quantitative medical image analysis for computer aided diagnosis systems. Healthc Inform Res. (2011) ; 17: (3): 143-149. doi: 10.4258/hir.2011.17.3.143.

[41] 

Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, et al. Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Rev Biomed Eng. (2021) ; 14: : 4-15. doi: 10.1109/RBME.2020.2987975.

[42] 

Swapnarekha H, Behera HS, Nayak J, Naik B. Role of intelligent computing in COVID-19 prognosis: A state-of-the-art review. Chaos Solitons Fractals. (2020) ; 138: : 109947. doi: 10.1016/j.chaos.2020.109947.

[43] 

Tan Y. GPU-based parallel implementation of swarm intelligence algorithms. 1st ed. Morgan Kaufmann; (2016) .

[44] 

Iglesias JE. Globally optimal coupled surfaces for semi-automatic segmentation of medical images. in: Information Processing in Medical Imaging: 25th International Conference, Niethammer M, Styner M, Aylward S, Zhu H, Oguz I, Yap PT, Shen D, eds. IPMI; (2017) , Boone, NC, USA. Cham, Switzerland: Springer. 610-621.

[45] 

Mansoor A, Bagci U, Foster B, Xu Z, Papadakis GZ, Folio LR, et al. Segmentation and image analysis of abnormal lungs at CT: Current approaches, challenges, and future trends. Radiographics. (2015) ; 35: (4): 1056-1076. doi: 10.1148/rg.2015140232.

[46] 

Yang F, Murat H, Yan CB, Yao J, Kutluk A, Kong XM, et al. Feature extraction and classification on esophageal X-ray images of Xinjiang Kazak nationality. Journal of Healthcare Engineering. (2017) ; 2017: : 4620732. doi: 10.1155/2017/4620732.

[47] 

Pathak SD, Ng L, Wyman B, Fogarasi S, Racki S, Oelund JC, et al. Quantitative image analysis: Software systems in drug development trials. Drug Discov Today. (2003) ; 8: (10): 451-458. doi: 10.1016/s1359-6446(03)02698-9.

[48] 

Clark MW. Quantitative shape analysis: A review. Journal of the International Association for Mathematical Geology. (1981) ; 13: (4): 303. doi: 10.1007/BF01031516.

[49] 

Drabycz S, Stockwell RG, Mitchell JR. Image texture characterization using the discrete orthonormal S-transform. J Digit Imaging. (2009) ; 22: (6): 696-708. doi: 10.1007/s10278-008-9138-8.

[50] 

Apostolopoulos ID, Aznaouridis SI, Tzani MA. Extracting possibly representative COVID-19 biomarkers from X-ray images with deep learning approach and image data related to pulmonary diseases. J Med Biol Eng. (2020) ; 40: (3): 462-469. doi: 10.1007/s40846-020-00529-4.

[51] 

Farid AA, Selim GI, Khater HAA. A novel approach of CT images feature analysis and prediction to screen for coronavirus disease (COVID-19). International Journal of Scientific and Engineering Research. (2020) ; 11: (3): 1141. doi: 10.14299/ijser.2020.03.02.

[52] 

Hasan AM, Al-Jawad MM, Jalab HA, Shaiba H, Ibrahim RW, Al-Shamasneh AR. Classification of COVID-19 coronavirus, pneumonia and healthy lungs in CT scans using Q-deformed entropy and deep learning features. Entropy (Basel). (2020) ; 22: (5): 517. doi: 10.3390/e22050517.

[53] 

Balaji K, Lavanya K. Medical image analysis with deep neural networks. in: Deep Learning and Parallel Computing Environment for Bioengineering Systems. Sangaiah AK, ed. Academic Press; (2019) . 75-79.

[54] 

Gupta S, Walia P, Singla C, Dhankar S, Mishra T, Khandelwal A, et al. Segmentation, feature extraction and classification of astrocytoma in MR images. Indian Journal of Science and Technology. (2016) ; 9: (36). doi: 10.17485/ijst/2016/v9i36/102154.

[55] 

Fu GS, Levin-Schwartz Y, Lin QH, Zhang D. Machine learning for medical imaging. J Healthc Eng. (2019) ; 2019: : 9874591. doi: 10.1155/2019/9874591.

[56] 

Louridas P, Ebert C. Machine learning. IEEE Software. (2016) ; 33: (5): 110-115. doi: 10.1109/MS.2016.114.

[57] 

Bishop CM. Pattern recognition and machine learning. Berlin, Germany: Springer; (2006) .

[58] 

Li X, Fang X, Bian Y, Lu J. Comparison of chest CT findings between COVID-19 pneumonia and other types of viral pneumonia: A two-center retrospective study. Eur Radiol. (2020) ; 30: (10): 5470-5478. doi: 10.1007/s00330-020-06925-3.

[59] 

Bai HX, Hsieh B, Xiong Z, Halsey K, Choi JW, Tran TML, et al. Performance of radiologists in differentiating COVID-19 from non-COVID-19 viral pneumonia at chest CT. Radiology. (2020) ; 296: (2): E46-E54. doi: 10.1148/radiol.2020200823.

[60] 

Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study. Lancet Infectious Diseases. (2020) ; 20: : 425-434.

[61] 

Zhao W, Zhong Z, Xie X, Yu Q, Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: A multicenter study. American Journal of Roentgenology. (2020) ; 214: (5): 1072-1077. doi: 10.2214/AJR.20.22976.

[62] 

Hani C, Trieu NH, Saab I, Dangeard S, Bennani S, Chassagnon G, et al. COVID-19 pneumonia: A review of typical CT findings and differential diagnosis. Diagn Interv Imaging. (2020) ; 101: (5): 263-268. doi: 10.1016/j.diii.2020.03.014.

[63] 

Machine learning in medical imaging and analysis [Internet]. aitrends; 2018 Dec 18 [cited 2020 Nov]. Available from: https://www.aitrends.com/healthcare/machine-learning-in-medical-imaging-and-analysis/.

[64] 

Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. (2017) ; 37: (2): 505-515. doi: 10.1148/rg.2017160130.

[65] 

Zhou CY, Chen YQ. Improving nearest neighbor classification with cam weighted distance. Pattern Recognition. (2006) ; 39: (4): 635-645. doi: 10.1016/j.patcog.2005.09.004.

[66] 

Hosmer DW, Stanley L. Applied logistic regression. 2nd ed. New York, NY: Wiley; (2000) .

[67] 

Quinlan JR. Induction of decision trees. Mach Learn. (1986) ; 1: (1): 81-106.

[68] 

Seber GAF, Lee AJ. Linear regression analysis. 2nd ed. New York, NY: Wiley; (2012) .

[69] 

Cristianini N, Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. NY, USA: Cambridge University Press; (1999) .

[70] 

Lowd D, Domingos P. Naive Bayes models for probability estimation. in: Proceedings of the 22nd International Conference on Machine Learning: ICML ’05. New York, NY: Association for Computing Machinery; (2005) .

[71] 

Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Netw. (1989) ; 2: (5): 359-366. doi: 10.1016/0893-6080(89)90020-8.

[72] 

Krishna K, Narasimha Murty M. Genetic K-means algorithm. IEEE Trans Syst Man Cybern B Cybern. (1999) ; 29: (3): 433-439.

[73] 

Johnson SC. Hierarchical clustering schemes. Psychometrika. (1967) ; 32: (3): 241-254.

[74] 

Birant D, Kut A. ST-DBSCAN: An algorithm for clustering spatial-temporal data. Data Knowl Eng. (2007) ; 60: (1): 208-221.

[75] 

Roberts SJ, Husmeier D, Rezek I, Penny W. Bayesian approaches to Gaussian mixture modeling. IEEE Trans Pattern Anal Mach Intell. (1998) ; 20: (11): 1133-1142.

[76] 

Dunn JC. A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J Cybern. (1973) ; 3: (3): 32-57.

[77] 

Haque IRI, Neubert J. Deep learning approaches to biomedical image segmentation. Informatics in Medicine Unlocked. (2020) ; 18: : 100297. doi: 10.1016/j.imu.2020.100297.

[78] 

Roth HR, Shen C, Oda H, Oda M, Hayashi Y, Misawa K, et al. Deep learning and its application to medical image segmentation. Medical Imaging Technology. (2018) ; 36: (2): 1-6.

[79] 

Pathan N, Jadhav ME. Medical image classification based on machine learning techniques. in: Advanced Informatics for Computing Research: Proceedings of ICAICR; Luhach A, Jat D, Hawari K, Gao XZ, Lingras P, eds. (2019) ; Shimla, India. Singapore: Springer.

[80] 

Yoon HJ, Jeong YJ, Kang H, Jeong JE, Kang DY. Medical image analysis using artificial intelligence. Progress in Medical Physics. (2019) ; 30: (2): 49-58.

[81] 

Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data. (2019) ; 6: : 113.

[82] 

Purushotham S, Meng C, Che Z, Liu Y. Benchmarking deep learning models on large healthcare datasets. J Biomed Inform. (2018) ; 83: : 112-134. doi: 10.1016/j.jbi.2018.04.007.

[83] 

Faust O, Hagiwara Y, Hong TJ, Lih OS, Acharya UR. Deep learning for healthcare applications based on physiological signals: A review. Comput Methods Programs Biomed. (2018) ; 161: : 1-13. doi: 10.1016/j.cmpb.2018.04.005.

[84] 

Dai Y, Wang G. A deep inference learning framework for healthcare. Pattern Recognition letters. (2020) ; 139: : 17-25. doi: 10.1016/j.patrec.2018.02.009.

[85] 

Yang HC, Islam MM, Jack Li YC. Potentiality of deep learning application in healthcare. Comput Methods Programs Biomed. (2018) ; 161: : A1. doi: 10.1016/j.cmpb.2018.05.014.

[86] 

LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. (2015) ; 521: (7553): 436-444. doi: 10.1038/nature14539.

[87] 

Ker J, Wang L, Rao J, Lim T. Deep learning applications in medical image analysis. IEEE Access. (2018) ; 6: : 9375-9389. doi: 10.1109/ACCESS.2017.2788044.

[88] 

Bakator M, Radosav D. Deep learning and medical diagnosis: A review of literature. Multimodal Technologies Interact. (2018) ; 2: (3): 47. doi: 10.3390/mti2030047.

[89] 

Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit Health. (2019) ; 1: (6): e271-e297. doi: 10.1016/S2589-7500(19)30123-2.

[90] 

Razzak MI, Naz S, Zaib A. Deep learning for medical image processing: Overview, challenges and the future. in: Classification in BioApps. Lecture Notes in Computational Vision and Biomechanics, vol. 26. Dey N, Ashour A, Borra S, eds. Cham: Springer; (2017) . 323-350.

[91] 

Seo H, Badiei Khuzani M, Vasudevan V, Huang C, Ren H, Xiao R, et al. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. (2020) ; 47: (5): e148-e167. doi: 10.1002/mp.13649.

[92] 

Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng. (2017) ; 19: (1): 221-248.

[93] 

Indolia S, Goswami AK, Mishra SP, Asopa P. Conceptual understanding of convolutional neural network – a deep learning approach. Procedia Computer Science. (2018) ; 132: : 679-688. doi: 10.1016/j.procs.2018.05.069.

[94] 

Ahammed K, Satu MS, Abedin MZ, Rahaman MA, Islam SMS. Early detection of coronavirus cases using Chest X-ray images employing machine learning and deep learning approaches[preprint]. (2020) . doi: 10.1101/2020.06.07.20124594.

[95] 

Furat O, Wang M, Neumann M, Petrich L, Weber M, Krill CE, et al. Machine learning techniques for the segmentation of tomographic image data of functional materials. Frontiers in Materials. (2019) ; 6: : 145. doi: 10.3389/fmats.2019.00145.

[96] 

Sharma N, Jain V, Mishra A. An analysis of convolutional neural networks for image classification. Procedia Computer Science. (2018) ; 132: : 377-384.

[97] 

Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: An overview and application in radiology. Insights Imaging. (2018) ; 9: (4): 611-629. doi: 10.1007/s13244-018-0639-9.

[98] 

Varshni D, Thakral K, Agarwal L, Nijhawan R, Mittal A. Pneumonia detection using CNN based feature extraction. in: Proceedings of International Conference on Electrical, Computer and Communication Technologies (ICECCT). Coimbatore, India. IEEE; (2019) . Available from: https://ieeexplore.ieee.org/document/8869364.

[99] 

Kumar S, Mishra S, Singh SK. Deep transfer learning-based COVID-19 prediction using chest X-rays [preprint]. (2020) . doi: 10.1101/2020.05.12.20099937.

[100] 

Elaziz MA, Hosny KM, Salah A, Darwish MM, Lu S, Sahlol AT. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE. (2020) ; 15: (6): e0235187. doi: 10.1371/journal.pone.0235187.

[101] 

Abed MM, Hameed AK, Alaa SAW, Salama AM, Shumoos AF, Musa DA, et al. Benchmarking methodology for selection of optimal COVID-19 diagnostic model based on entropy and TOPSIS methods. IEEE Access. (2020) ; 8: : 99115-99131. doi: 10.1109/ACCESS.2020.2995597.

[102] 

Wang J, Bao Y, Wen Y, Lu H, Luo H, Xiang Y, et al. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans Med Imaging. (2020) ; 39: (8): 2572-2583. doi: 10.1109/TMI.2020.2994908.

[103] 

Singh D, Kumar V, Vaishali, Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis. (2020) ; 39: (7): 1379-1389. doi: 10.1007/s10096-020-03901-z.

[104] 

Kang H, Xia L, Yan F, Wan Z, Shi F, Yuan H, et al. Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning. IEEE Trans Med Imaging. (2020) ; 39: (8): 2606-2614. doi: 10.1109/TMI.2020.2992546.

[105] 

Roy S, Menapace W, Oei S, Luijten B, Fini E, Saltori C, et al. Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE Trans Med Imaging. (2020) ; 39: (8): 2676-2687. doi: 10.1109/TMI.2020.2994459.

[106] 

Hu S, Gao Y, Niu Z, Jiang Y, Li L, Xiao X, et al. Weakly supervised deep learning for COVID-19 infection detection and classification from CT images. IEEE Access. (2020) ; 8: : 118869-118883. doi: 10.1109/ACCESS.2020.3005510.

[107] 

Fan DP, Zhou T, Ji GP, Zhou Y, Chen G, Fu H, et al. Inf-Net: Automatic COVID-19 lung infection segmentation from CT images. IEEE Transactions on Medical Imaging. (2020) ; 39: (8): 2626-2637. doi: 10.1109/TMI.2020.2996645.

[108] 

Pathak Y, Shukla PK, Arya KV. Deep bidirectional classification model for COVID-19 disease infected patients. IEEE/ACM Transactions on Computational Biology and Bioinformatics. (2021) ; 18: (4): 1234-1241. doi: 10.1109/TCBB.2020.3009859.

[109] 

Amyar A, Modzelewski R, Li H, Ruan S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput Biol Med. (2020) ; 126: : 104037. doi: 10.1016/j.compbiomed.2020.104037.

[110] 

Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. J Big Data. (2016) ; 3: (9). doi: 10.1186/s40537-016-0043-6.

[111] 

Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun. ACM. (2017) ; 60: (6): 84-90. doi: 10.1145/3065386.

[112] 

Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. (2019) ; 6: : 60. doi: 10.1186/s40537-019-0197-0.

[113] 

Perez L, Wang J. The effectiveness of data augmentation in image classification using deep learning. arXiv: 1712.04621 [Preprint]. (2017) . Available from: https://arxiv.org/abs/1712.04621.

[114] 

Wong SC, Gatt A, Stamatescu V, McDonnell MD. Understanding data augmentation for classification: When to warp? in: International Conference on Digital Image Computing: Techniques and Applications (DICTA); (2016) Nov-Dec. Gold Coast, Australia. IEEE; 1-6. doi: 10.1109/DICTA.2016.7797091.

[115] 

Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. (2018) ; 321: : 321-331. doi: 10.1016/j.neucom.2018.09.013.

[116] 

Mikołajczyk A, Grochowski M. Data augmentation for improving deep learning in image classification problem. International Interdisciplinary PhD Workshop (IIPhDW); Swinoujście; (2018) . 117-122. doi: 10.1109/IIPHDW.2018.8388338.

[117] 

A gentle introduction to generative adversarial networks (GANs) [Internet]. Machine Learning Mastery; 2019 [updated 2019 Jul 19; cited 2020 Nov]. Available from: https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/.

[118] 

Pereira RM, Bertolini D, Teixeira LO, Silla CN, Costa YMG. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput Methods Programs Biomed. (2020) ; 194: : 105532. doi: 10.1016/j.cmpb.2020.105532.

[119] 

Apostolopoulos ID, Mpesiana TA. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. (2020) ; 43: (2): 635-640. doi: 10.1007/s13246-020-00865-4.

[120] 

Loey M, Smarandache F, Khalifa NE. Within the lack of chest COVID-19 X-ray dataset: A novel detection model based on GAN and deep transfer learning. Symmetry. (2020) ; 12: (4): 651. doi: 10.3390/sym12040651.

[121] 

Khalifa NEM, Taha MHN, Hassanien AE, Elghamrawy S. The detection of COVID-19 in CT medical images: A deep learning approach. Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach. (2020) ; 78: : 73-90. doi: 10.1007/978-3-030-55258-9_5.

[122] 

Waheed A, Goyal M, Gupta D, Khanna A, Al-Turjman F, Pinheiro PR. CovidGAN: Data augmentation using auxiliary classifier GAN for improved Covid-19 detection. IEEE Access. (2020) ; 8: : 91916-91923. doi: 10.1109/ACCESS.2020.2994762.

[123] 

Hu R, Ruan G, Xiang S, Huang M, Liang Q, Li J. Automated diagnosis of COVID-19 using deep learning and data augmentation on chest CT [Preprint]. (2020) . Available from: https://www.medrxiv.org/content/10.1101/2020.04.24.20078998v2.

[124] 

Loey M, Manogaran G, Khalifa NEM. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput Appl. (2020) ; 1-13. doi: 10.1007/s00521-020-05437-x.

[125] 

Nishio M, Noguchi S, Matsuo H, Murakami T. Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: Combination of data augmentation methods. Scientific Reports. (2020) ; 10: (1). doi: 10.1038/s41598-020-74539-2.

[126] 

Ucar F, Korkmaz D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med Hypotheses. (2020) ; 140: : 109761. doi: 10.1016/j.mehy.2020.109761.

[127] 

Abbas A, Abdelsamea MM, Gaber MM. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl Intell. (2021) ; 51: : 854-864. doi: 10.1007/s10489-020-01829-7.

[128] 

Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. in: Advances in Neural Information Processing Systems. Curran Associates, Inc; Ghahramani Z, Welling M, Cortes C, Lawrence N, Weinberger KQ, eds. (2014) .