Review

COVID-19 Detection on Chest X-ray and CT Scan: A Review of the Top-100 Most Cited Papers

by Yandre M. G. Costa 1,*, Sergio A. Silva, Jr. 1, Lucas O. Teixeira 1, Rodolfo M. Pereira 2, Diego Bertolini 3, Alceu S. Britto, Jr. 4, Luiz S. Oliveira 5 and George D. C. Cavalcanti 6
1 Departamento de Informática, Universidade Estadual de Maringá, Maringá 87020-900, Brazil
2 Instituto Federal do Paraná, Pinhais 83330-200, Brazil
3 Departamento Acadêmico de Ciência da Computação, Universidade Tecnológica Federal do Paraná, Campo Mourão 87301-899, Brazil
4 Departamento de Ciência da Computação, Pontifícia Universidade Católica do Paraná, Curitiba 80215-901, Brazil
5 Departamento de Informática, Universidade Federal do Paraná, Curitiba 81531-980, Brazil
6 Centro de Informática, Universidade Federal de Pernambuco, Recife 50740-560, Brazil
* Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7303; https://doi.org/10.3390/s22197303
Submission received: 9 August 2022 / Revised: 13 September 2022 / Accepted: 19 September 2022 / Published: 26 September 2022
(This article belongs to the Special Issue COVID-19 Biosensing Technologies)

Abstract:
Since the beginning of the COVID-19 pandemic, many works have been published proposing solutions to the problems that arose in this scenario. One of the topics that has attracted the most attention is the development of computer-based strategies to detect COVID-19 from thoracic medical imaging, such as chest X-ray (CXR) and computerized tomography scan (CT scan). By searching for works already published on this theme, we can easily find thousands of them. This is partly explained by the fact that this severe worldwide pandemic emerged amid recent technological advances, including the computational infrastructure needed to handle the large amount of data produced in this context. Even though several of these works describe important advances, we cannot overlook the fact that others merely apply well-known methods and techniques without a relevant or critical contribution. Hence, differentiating the works with the most relevant contributions is not a trivial task. The number of citations obtained by a paper is probably the most straightforward and intuitive way to verify its impact on the research community. Aiming to help researchers in this scenario, we present a review of the top-100 most cited papers in this field of investigation according to the Google Scholar search engine. We evaluate the distribution of the top-100 papers taking into account some important aspects, such as the type of medical imaging explored, learning settings, segmentation strategy, explainable artificial intelligence (XAI), and finally, dataset and code availability.

1. Introduction

Since 2020, we have observed a significant number of published works describing solutions for the most varied problems that arose due to the COVID-19 pandemic. As a consequence of technological development, many of these works present computer-based solutions to address those problems.
Currently, a large number of medical imaging tests are performed every day because digital images are quite suitable both for storage and for supporting examination. At the same time, it is widely known that digital images are the standard input for research developed by the pattern-recognition and machine-learning communities. Hence, we have seen a boom in the number of works published by these research communities devoted to supporting medical examination from medical imaging.
Since the beginning of the pandemic, pneumonia has been one of the most common consequences of COVID-19, given how heavily the disease affects the respiratory system. CXR and CT scans are the most commonly used imaging tests for diagnosing pneumonia: CT scan is the gold-standard imaging test that best supports the analysis of the lungs, whereas CXR is cheaper and more widespread around the world. Numerous studies have been developed by the pattern-recognition and machine-learning research communities specifically using these kinds of images. Figure 1 shows one example of each of these image types.
By searching for works already published in this context, we can easily find thousands of them addressing this topic from the most varied perspectives, such as pneumonia detection, pneumonia classification (in terms of the causative pathogen), lung region segmentation, infection region segmentation, and decision explanation. However, many of these works do not present a very impressive scientific contribution. Hence, here we present a review of the top-100 most cited works published in the literature within this context according to the Google Scholar search engine (the search was carried out on 12 July 2022). The rationale behind this choice is that the number of citations obtained by a paper is probably the most straightforward and intuitive way to verify the impact of a given work on the research community.
In this review, we aim to address some important aspects related to the top-100 selected papers, such as the predominant computational methods used in this field of research. By analyzing the literature, we can find other reviews evaluating the top-cited COVID-19 papers. However, it is important to point out that, to the best of our knowledge, none of them explored thoracic medical imaging from the perspective we adopt here, addressing it instead from a more broadly oriented point of view [2,3,4].
This paper is organized as follows: Section 2 describes the study design and illustrates a taxonomy used to conduct the discussions along this work. Section 3 describes some details of the top-25 papers according to the number of citations. Section 4 comprises specific subsections discussing the top-100 papers, taking into account aspects such as the type of medical imaging, type of learning, use of a segmentation strategy, use of XAI, and dataset and code availability. In Section 5, concluding remarks are presented; finally, Appendix A describes some information about the 75 papers not explored in Section 3.

2. Study Design and Taxonomy

This section describes how we organized the search for the works discussed in this study. The search was performed by using the Google Scholar search engine on 12 July 2022. We decided to use this platform because it integrates works from all other scientific research portals (engines) and provides a reasonable estimate of the number of citations obtained by each work. The search was performed with the two following queries: (i) (COVID AND (X-ray OR CT scan) AND (“image processing” OR “machine learning” OR “artificial intelligence” OR diagnosis OR detection)), and (ii) (COVID AND “deep learning”). From the results of the former query, we excluded works that do not present computer-based solutions; from the latter, we excluded results unrelated to CXR and CT scan solutions.
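For illustration, such a citation-ranked search could in principle be automated; the sketch below uses the third-party scholarly Python package. This is not the tooling used in this review, and the result field names ('bib', 'num_citations') follow that package's documented output and should be treated as assumptions.

```python
# Illustrative sketch only: rank Google Scholar hits for our two queries by
# citation count using the third-party `scholarly` package.
from scholarly import scholarly

QUERIES = [
    '(COVID AND (X-ray OR "CT scan") AND ("image processing" OR '
    '"machine learning" OR "artificial intelligence" OR diagnosis OR detection))',
    '(COVID AND "deep learning")',
]

def top_cited(query, limit=200):
    """Collect up to `limit` results for a query, sorted by citation count."""
    hits = []
    for i, pub in enumerate(scholarly.search_pubs(query)):
        if i >= limit:
            break
        # Field names below are assumptions based on the package docs.
        hits.append({"title": pub["bib"].get("title", ""),
                     "citations": pub.get("num_citations", 0)})
    return sorted(hits, key=lambda h: h["citations"], reverse=True)
```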
Following this, we performed a first filtering (F1), excluding works that correspond to reviews, surveys, or comparative studies, which are outside the scope of our purposes. This first filtering excluded a total of nine works obtained in the first round; thus, we took the subsequent nine most cited papers that do not belong to the category excluded in F1 to complete the top-100. Next, we performed a second filtering (F2), excluding works that had not been peer-reviewed (preprints). F2 excluded a total of 18 works, and again, we took the subsequent 18 most cited studies that do not belong to the categories excluded in F1 and F2 to complete the top-100.
Table 1 presents some details about the top-100 most cited papers obtained after the first round and after each filtering. These details are: (i) the average number of citations over the top-100; (ii) the h-index among the top-100 papers; (iii) the number of citations of the most cited paper; and (iv) the number of citations of the least cited paper in the top-100.
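For clarity, the h-index reported in Table 1 is the largest h such that at least h of the papers have at least h citations each. A minimal sketch, using hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (hypothetical citation counts)
```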
Figure 2 shows a taxonomy containing the main aspects we considered for conducting the discussions in this study. We evaluated five aspects: (i) medical image: chest X-ray (CXR) or computed tomography (CT scan); (ii) learning approach: deep or shallow (we use the term shallow method to refer to any method other than deep learning); (iii) segmentation strategy: manual or automated using a deep network; common deep strategies include U-Net [5], SegNet [6], and others; (iv) explainable artificial intelligence (XAI): common strategies include class activation maps (CAM) [7], gradient-weighted CAM (Grad-CAM) [8], local interpretable model-agnostic explanations (LIME) [9], layer-wise relevance propagation (LRP) [10], and others; and (v) dataset and code availability.
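For illustration, the taxonomy of Figure 2 can be expressed as a simple data structure for tagging each reviewed paper; the category names below merely mirror the five aspects listed above.

```python
# The five aspects of Figure 2 as a small, illustrative data structure.
TAXONOMY = {
    "medical_image": ["CXR", "CT scan"],
    "learning_approach": ["deep", "shallow"],
    "segmentation": ["none", "manual", "automated (e.g., U-Net, SegNet)"],
    "xai": ["none", "CAM", "Grad-CAM", "LIME", "LRP", "other"],
    "availability": ["dataset", "code"],
}
```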

3. Overview of Top 25 Most Cited Papers

This section describes the main highlights of the top 25 most cited papers. We decided to restrict the number of works detailed, aiming to keep our list as short as possible while emphasizing the most important contributions. The selection of the top 25 most cited papers is purely quantitative and does not consider any other particular characteristic. Assuming that the number of citations is a metric of scientific quality and importance, it is interesting to describe the most cited papers to find out exactly what they proposed and evaluated to achieve popularity in such a short time.
Wang et al. [11] presented the most cited paper found in our search protocol, described in Section 2, with a total of 1848 citations. In that work, the authors performed COVID-19 detection by using the COVID-Net, a deep convolutional network specially tailored to detect COVID-19 from CXR images. The developed network is open source and was made available to the general public. The authors also introduced COVIDx, an open access dataset composed of 13,975 images obtained from 13,870 patients, probably with the largest number of positive cases available at that moment. The dataset was created by taking images from other sources of CXR images. In addition, the authors used an explainability method to aid clinicians in improving the screening process, adding transparency and reliability to the provided results. The work attracted much attention, probably for the following reasons: it was one of the first open-source networks designed for COVID-19, it made available a quite useful dataset with a significant number of positive exams, and finally, it was published at a very opportune time, in 2020.
Ozturk et al. [12] addressed COVID-19 detection from CXR images by using the DarkNet model as a classifier for the you only look once (YOLO) real-time object-detection system. The work is the second-most cited paper in the list obtained in our review, with a total of 1523 citations. The problem was addressed both as binary (COVID-19 vs. no findings) and multi-class classification (COVID-19 vs. no findings vs. pneumonia). The classification accuracy obtained was 98.08% for binary classification and 87.02% for multi-class. The authors implemented 17 convolutional layers in the model, with different filtering on each layer. The model was made available on GitHub. The main strengths of this work were its very timely publication, in April 2020, the evaluation of the results by radiologists, and the availability of the model to the public. However, the authors admit that the work was done with a limited dataset, and future improvements on a more robust dataset should be pursued.
Apostolopoulos et al. [13] experimented with automatic COVID-19 detection from X-ray images by using convolutional neural networks with transfer learning. For this, the authors composed two datasets (i.e., Dataset_1 and Dataset_2) by using images taken from three different sources: (i) the collection of X-ray images of Professor Joseph Cohen from the University of Montreal; (ii) a set of X-ray images obtained from websites such as the Radiological Society of North America, Radiopaedia, and the Italian Society of Medical and Interventional Radiology; and (iii) a collection of common bacterial-pneumonia X-ray scans, included to train the model to distinguish COVID-19 from other types of pneumonia. Dataset_1 was composed of 224 positive COVID-19 images, 700 bacterial pneumonia images, and 504 healthy lung images. Dataset_2 was composed of 224 positive COVID-19 images, 504 healthy images, and 714 images of both bacterial and viral pneumonia (400 bacterial and 314 viral). The images were all resized to 200 × 266, and they were evaluated by using the following models: VGG19, MobileNetV2, Inception, Xception, and Inception ResNet v2. The fine-tuning was performed separately for each model evaluated, so each one had its own parameters defined. The training and evaluation were done by using 10-fold cross-validation, and the best results were obtained with the MobileNetV2 model, which achieved accuracies of 96.78% and 94.72% for binary and three-class classification, respectively.
Narin et al. [14] performed COVID-19 detection from X-ray images by using five convolutional neural network models and three different public datasets (i.e., Dataset_1, Dataset_2, and Dataset_3). Dataset_1 is composed of 341 X-ray images obtained from Dr. Joseph Cohen’s open source GitHub repository, Dataset_2 has 2800 healthy chest X-ray images from the ChestX-ray8 database, and Dataset_3 comprises 2772 bacterial and 1493 viral pneumonia chest X-ray images from the Kaggle Chest X-Ray Images (Pneumonia) repository. Five pre-trained models were used in this work: ResNet50, InceptionV3, ResNet101, Inception-ResNetV2, and ResNet152. The authors performed their experiments by using three different binary settings: Binary Class-1 (COVID-19 vs. healthy), Binary Class-2 (COVID-19 vs. viral pneumonia), and Binary Class-3 (COVID-19 vs. bacterial pneumonia). The evaluation used five-fold cross-validation, and the ResNet50 pre-trained model obtained the best results, with an accuracy of 96.1% in Binary Class-1, 99.5% in Binary Class-2, and 99.7% in Binary Class-3. The highlights of this paper include the fact that it used more data than many other articles at the time it was published, as well as its significantly high performance.
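Since many of the top-cited works above follow this same fine-tuning recipe, a minimal transfer-learning sketch may be useful for reference. The PyTorch code below is illustrative only and does not reproduce any specific author's setup; the hyperparameters are placeholders.

```python
# A minimal sketch of fine-tuning an ImageNet pre-trained ResNet50 for a
# binary COVID-19 vs. healthy CXR task. Illustrative code, not any reviewed
# paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models

def build_covid_classifier(num_classes=2):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False  # freeze the ImageNet backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model

model = build_covid_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# A training loop with k-fold cross-validation (as used in the papers above)
# would iterate here over a DataLoader of labeled CXR images.
```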
Wang et al. [15] hypothesized that, by analyzing CT scan images of the lungs, it is possible to extract graphical features of COVID-19 and provide a clinical diagnosis ahead of the pathogenic test typically performed by laboratories. Thus, the authors performed experiments on a dataset composed of 1065 CT scan images from COVID-19 confirmed patients and from patients with typical viral pneumonia. Three Chinese hospitals provided the images. In the proposed pipeline, the authors first performed some preprocessing of the images and manually delineated the regions of interest (RoIs). Transfer learning was done by using a predefined model (i.e., GoogleNet Inception V3) already trained on 1.2 million images from ImageNet labeled into 1000 categories. The authors proposed a modified inception (M-inception) for classification by changing the last fully connected layers. The features’ dimensionality was reduced before they were sent to the final classification.
Finally, the authors performed a robust evaluation of the method, addressing some critical points closely related to the practical feasibility of deploying the proposal. For the performance evaluation, the authors first trained and tested the system exclusively on images from the same hospital; in this scenario, the accuracy rate was 89.5%. Next, another round of experiments using images from the three hospitals was performed, and the obtained accuracy was 82.5%. Another important comparison was between the results obtained by the system and radiologist prediction: two radiologists assessed the images and achieved an accuracy of approximately 55%, demonstrating the advantage of the proposed method. Lastly, the authors experimented on 54 images from cases incorrectly classified as negative (false negatives) by nucleic acid testing, the gold standard for COVID-19 diagnosis. The system was able to predict 46 out of the 54 images correctly.
Xu et al. [16] sought to develop an early screening model to detect COVID-19 from pulmonary CT images by using deep learning techniques. The dataset used contained 618 transverse-section CT samples (219 COVID-19, 224 Influenza-A viral pneumonia, and 110 healthy) provided by three Chinese hospitals. In the first step of their approach, the authors preprocessed the CT images to select the most effective pulmonary regions. Afterward, a total of 3957 candidate image cubes were segmented by a 3D CNN segmentation model; because the cube’s middle region contained the largest amount of information about the infection, the cube center image and its two neighbors were selected, totaling 11,871 image patches used for training and classification. The authors evaluated two models: a traditional ResNet-18-based model and a ResNet-18 model concatenated with a location-attention mechanism. In the first step of the evaluation, the authors tested the classification of a single image patch; ResNet-18 achieved an accuracy rate of 78.5%, whereas ResNet-18 plus the location-attention mechanism achieved 79.4%. Because of its lower performance, the plain ResNet-18 model was not used in further experiments. The authors then analyzed the classification of CT samples as a whole, and an overall accuracy rate of 86.7% was achieved by ResNet-18 plus the location-attention mechanism.
Khan et al. [17] introduced CoroNet, a deep convolutional network specially designed for COVID-19 detection from CXR images. The model was based on the Xception architecture, pre-trained on the ImageNet dataset and trained end-to-end on an image collection curated for the study. The model was evaluated in two different scenarios: the first considering four classes (COVID-19 vs. bacterial pneumonia vs. viral pneumonia vs. normal), obtaining an accuracy of 89.6%, and the second with three classes (COVID-19 vs. pneumonia vs. normal), achieving an accuracy of 95%. The work was presented in May 2020, at the pandemic’s beginning. One of the main contributions of the work was to point out some directions and to indicate that deep models could adequately address COVID-19 detection from CXR images with minimal pre-processing of data. In addition, the authors also claim that further improvements could be achieved with more extensive sets of data.
To overcome the limited availability of annotated medical images in the context of COVID-19 diagnosis, Abbas et al. [18] experimented with a deep CNN called decompose, transfer, and compose (DeTraC) for COVID-19 classification from CXR images. DeTraC is intended to properly deal with irregularities present in the dataset by using a class decomposition mechanism to investigate its class boundaries. For this, a class decomposition component is included before the knowledge transfer from an ImageNet pre-trained CNN model, and a class composition layer is included after it to clarify the final classification. The model was evaluated on a comprehensive dataset composed of images taken from several hospitals worldwide. An accuracy of 93.1% was obtained in discriminating COVID-19 from normal and severe acute respiratory syndrome cases.
Song et al. [19] developed a deep learning-based CT diagnosis system evaluated on a dataset composed of CT scan images obtained from 88 patients diagnosed with COVID-19, 100 patients infected with bacterial pneumonia, and 86 healthy persons, used for comparison and modeling. The proposed system was very successful in detecting the primary lesions present in CT images. Moreover, we can highlight that the work was developed at a very early stage of the COVID-19 pandemic: although the work was published in March 2021, the first version of the manuscript was submitted in April 2020, when very few positive COVID-19 images were available. The proposed deep learning solution was based on three main steps: first, the central region of the lung is extracted; next, a details relation extraction neural network (DRENet) is used to obtain image-level predictions; finally, the image-level predictions are aggregated to obtain the patient-level diagnosis. The model could discriminate COVID-19 from bacterial pneumonia with a recall of 0.96, and, discriminating COVID-19 from bacterial pneumonia and healthy persons, the system obtained a recall of 0.95. The authors made the system available for COVID-19 diagnosis on an online server, and the source code and datasets were also made available. However, one drawback of the proposal is that it could not maintain good prediction rates when evaluated on external data.
Oh et al. [20] also investigated COVID-19 features in CXR images at an early stage of the pandemic, when there was a huge scarcity of data. Thus, the authors proposed a patch-based CNN approach with a relatively small number of trainable parameters, based on the statistical analysis of potential biomarkers in CXR images. The first step in the proposed framework is data normalization, performed as a pre-processing stage. In addition, a segmentation network is used to isolate the lung areas as regions of interest. Then, patches are obtained from the lung area and used to train a classification network. For testing, the decision for each image is based on majority voting over the decisions taken for the patches created from the image, as sketched below. The experimental results demonstrated that the method was able to achieve state-of-the-art performance.
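The patch-based aggregation reduces to a simple majority vote, sketched below; classify_patch is a hypothetical stand-in for a trained patch-level classifier.

```python
# Majority voting over per-patch predictions, as in patch-based pipelines.
from collections import Counter

def predict_image(patches, classify_patch):
    """Aggregate patch-level labels (e.g., 'covid', 'normal') into one label."""
    votes = Counter(classify_patch(p) for p in patches)
    label, _count = votes.most_common(1)[0]
    return label
```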
Ardakani et al. [21] experimented with 10 convolutional neural networks to evaluate the application of deep learning techniques in routine clinical practice. The authors used CT images from 194 patients (108 COVID-19 and 86 non-COVID-19) in their study. The regions of interest of the images, that is, the regions of infection, were segmented manually by an expert, then cropped and resized to 60 × 60 pixels. The authors used various performance metrics to evaluate their models and the diagnosis of an expert radiologist. The best accuracies were found for the ResNet-101 (99.63%) and Xception (99.38%) networks; however, ResNet-101 was able to diagnose COVID-19 with higher sensitivity than Xception, which is highly desirable when diagnosing diseases. When analyzing a single image patch, the radiologist was no match for the CNNs; his performance improved when analyzing the whole CT slice, but his accuracy was still lower than that of most CNNs. Overall, the authors successfully created a computer-aided diagnosis system with promising results.
Chen et al. [22] proposed a deep model for COVID-19 detection from high-resolution CT scan images. For this purpose, the authors curated a collection of 46,096 images obtained from 106 patients at the Renmin Hospital of Wuhan University (China). Among these patients, 51 were affected by COVID-19 pneumonia, confirmed by laboratory test; the other 55 were control patients with other diseases. The model was built on top of the U-Net++ architecture, with all parameters loaded from a ResNet-50 pre-trained on the ImageNet dataset. An external evaluation was conducted at another hospital to verify the system’s robustness, obtaining an accuracy of 98.85% per image and 95.24% per patient. The system’s performance was also compared to that of expert radiologists on data from 27 prospective patients of Renmin Hospital. The system’s performance was considered comparable to the human experts’, and, in addition, the time consumed by humans to perform the evaluation assisted by the system decreased by 65%. Lastly, it is worth mentioning that this study was published in November 2020.
Ucar et al. [23] proposed a new model for diagnosing COVID-19 based on deep Bayes–SqueezeNet, called COVIDiagnosis-Net. The authors used chest X-ray images from the available COVIDx dataset to train their model. This dataset comprises three classes: normal, pneumonia, and COVID-19. Compared to the other two classes, there are few COVID-19 images in the dataset; therefore, the authors performed a detailed offline augmentation over the COVID-19 class to overcome the imbalance. The authors’ proposed model reached an accuracy rate of 98.3%, a better performance than the state-of-the-art methods at the time (early 2020). In addition, COVIDiagnosis-Net is significantly smaller than other models, such as AlexNet, and is hence well suited for implementation in embedded and mobile systems.
Afshar et al. [24] proposed a capsule network, called COVID-Caps, aiming to circumvent the difficulty CNNs have in dealing with spatial information between different instances of the image. The proposed capsule network comprises four convolutional layers and three capsule layers. The network is fed with 3-D X-ray images. The loss function was also modified to deal with the class imbalance issue, since obtaining enough data for experimentation was not easy at that moment of the pandemic. The model was capable of obtaining an accuracy of 95.7%. In addition, aiming to obtain better results, the authors experimented with pre-training and transfer learning based on an external dataset. Unlike other works, the authors conducted the pre-training by using X-ray images. Following this protocol, the authors obtained an accuracy of 98.3%. Lastly, it is worth mentioning that the model was made publicly available for open access.
Panwar et al. [25] proposed a deep learning neural network method called nCOVnet to create an alternative fast screening method for detecting COVID-19. The authors used Dr. Joseph Cohen’s open source GitHub repository and Kaggle’s Chest X-Ray Images (pneumonia) as the dataset. Data augmentation techniques were applied to overcome the dataset limitations. It is worth noting that the authors took extra precautions to prevent data leakage. The nCOVnet uses the VGG16 model as the base layer of the architecture and adds to it five custom layers as a head model. In conclusion, nCOVnet could predict COVID-19 from CXR images with 97.97% confidence.
Huang et al. [26] presented a quantitative evaluation of burden changes in COVID-19 patients by using a deep learning method on serial CT scan images. The method was based on the evaluation of a quantitative image parameter (called QCT-PLO), automatically generated by a deep learning software tool from chest CT scans. The authors concluded that the quantification of lung opacification was significantly different among COVID-19 patient groups with different levels of severity. In conclusion, they claim that this method could eliminate the subjectivity in the initial assessment and follow-up of pulmonary findings for COVID-19.
Togaçar et al. [27] performed COVID-19 detection from CXR with a dataset containing three classes: COVID-19, pneumonia, and healthy. The data classes were restructured by using Fuzzy Color, an image-stacking technique. Image stacking combines multiple images, aiming to improve the quality of the images in the dataset and eliminate noise. The deep learning models MobileNetV2 and SqueezeNet were trained on the stacked dataset, and the feature sets obtained were processed by using the Social Mimic Optimization (SMO) method. Lastly, the efficient features were combined, and the classification was performed by using a support vector machine (SVM). The overall classification rate obtained was 99.27%. The authors claim that the proposed preprocessing enhanced the feature extraction efficiency by using the SMO algorithm. In addition, they also demonstrated the usability of the proposed approach on mobile devices.
Pereira et al. [28] investigated COVID-19 identification from CXR images considering different perspectives. In the first scenario, the authors performed multiclass classification by using CXR images containing pneumonia caused by different pathogens (COVID-19, SARS, MERS, streptococcus, and pneumocystis). Then, the authors identified a hierarchy between the different pathogens and investigated the classification considering a hierarchical scenario. The authors also experimented with resampling algorithms to deal with the natural imbalance between the different types of pneumonia. In addition, a dataset (named RYDLS-20) was composed of publicly available datasets. Lastly, it is important to mention that they also experimented with the use of handcrafted features by evaluating a comprehensive set of texture descriptors and non-handcrafted features, automatically obtained by using deep learning models. The best result obtained for COVID-19 was found in the multiclass scenario, with an F-Score of 0.89. The paper was published at the beginning of May 2020.
Wang et al. [29] presented a fully automated deep learning system to diagnose COVID-19 and stratify patients into high- and low-risk groups. The authors used a large dataset with 5372 computed tomography exams, collected from various cities and provinces of China. To acquire the lung mask of the CT images, the authors performed lung segmentation by using the DenseNet121-FPN deep learning method. After that, non-lung tissues and organs that might still exist in the region of interest were suppressed. For diagnosis and prognosis, the researchers used their proposed model, COVID-19Net, which uses a DenseNet-like structure. The training of COVID-19Net was performed in two steps: (i) train the model with a large dataset (4106 patients) of lung cancer; (ii) transfer the pre-trained model to the COVID-19 dataset. For prognosis, the authors took the 64-dimensional feature vector from COVID-19Net and combined it with clinical features (age, sex, and comorbidity). This new feature vector was used to build a multivariate Cox proportional hazard model. COVID-19Net reached an AUC of 0.90 on the training set and obtained similar results on two other validation sets, 0.87 and 0.88, respectively. Regarding prognosis, Kaplan–Meier analysis showed that patients classified in the high-risk group had longer hospital stays than those in the low-risk group.
The lack of publicly available datasets with CXR and CT scan images is one of the biggest obstacles hindering research on COVID-19 artificial intelligence-based solutions. Aiming to circumvent this problem, Maghdid et al. [30] presented a comprehensive dataset of both these types of images, obtained from multiple sources. The dataset comprised 170 CXR images and 361 CT scan images in its first version. In addition, the authors also presented a simple CNN and a modified pre-trained AlexNet model, and experimented on the CXR and CT scan images. The experimental results achieved an accuracy of up to 94.1% using the first model and up to 98% using the latter.
Brunese et al. [31] presented a two-step approach to detect COVID-19. The authors created two deep learning models. The first model distinguishes between healthy chest X-ray images and those showing individuals with pulmonary disease. If the X-ray image is labeled as pulmonary disease, a second model detects whether the disease is pneumonia or COVID-19. In addition, the researchers used the Grad-CAM activation map to highlight the areas most significant for the COVID-19 detection. Brunese et al.’s models were based on the VGG-16 model and used transfer learning. The dataset used in this work combines three others, two of which are freely available; it contains 6523 X-ray images in total. Regarding the evaluation of the models, the first one (healthy vs. pulmonary disease) obtained an accuracy and sensitivity of 0.96, and the second one (pneumonia vs. COVID-19) reached an accuracy of 0.98 and a sensitivity of 0.87.
Loey et al. [32] evaluated COVID-19 detection by using deep transfer learning together with generative adversarial networks (GAN) for data augmentation. The experimental dataset was composed of images taken from other publicly available datasets. The authors experimented with three different scenarios: four classes (COVID-19, normal, viral pneumonia, and bacterial pneumonia), three classes (COVID-19, normal, and bacterial pneumonia), and binary classification (COVID-19 vs. normal). Reasonable performance rates were obtained in these scenarios: for four classes, the best accuracy obtained was 80.6%; for three classes, 85.2%; and for binary classification, 100%. Despite the impressive results, the code and dataset used in this work were not made publicly available.
Islam et al. [33] performed COVID-19 classification from CXR images by using a CNN for feature extraction and long short-term memory (LSTM) for classification. The experiments were carried out on a dataset created from publicly available collections containing positive COVID-19 CXR images. The images were divided into three classes: COVID-19, normal, and pneumonia (other than COVID-19). As a result, the authors obtained an accuracy of 99.4%, and they claim that the proposed CNN-LSTM architecture outperformed a competitive CNN architecture.
Ismael et al. [34] experimented with shallow and deep learning approaches to detect COVID-19 in chest X-ray images. The authors used a dataset with 180 COVID-19 and 200 normal CXR images in their experiments. Regarding the deep learning approaches, fine-tuning procedures were done for the ResNet18, ResNet50, ResNet101, VGG16, and VGG19 models. Furthermore, an end-to-end CNN model was trained. As for the shallow approach, Ismael et al. evaluated an SVM classifier trained with deep learning features and with various texture descriptors, such as LBP, LPQ, BSIF, and others. Overall, the deep learning approaches outperformed the local descriptors. The best result was achieved by combining ResNet50 features with an SVM classifier, which reached an accuracy of 95.79%. Other approaches are also worth mentioning: fine-tuning ResNet50 achieved an accuracy of 92.6%, end-to-end training of the CNN achieved 91.6%, and BSIF achieved 90.5%.
To aid in the screening of COVID-19, Amyar et al. [35] proposed a multi-task deep learning (MTL) approach. The proposed MTL architecture was based on three tasks: COVID-19 vs. normal vs. other infections classification, COVID-19 lesion segmentation, and image reconstruction. The authors collected CT images from three different datasets, totaling 1369 CT scans for their study. The performance of the MTL was compared with various state-of-the-art models, including U-Net for segmentation and AlexNet, VGG-16, VGG-19, ResNet50, and others for classification. The MTL performed significantly better than the state-of-the-art approaches in both segmentation and classification. In the COVID-19 lesion segmentation task, the MTL achieved an accuracy of 95.23%, whereas U-Net achieved 83.40%. As for classification, the proposed method had an accuracy of 94.67%, whereas the best among the state-of-the-art models tested had an accuracy of 90.67%.
Table 2 presents some of the most remarkable details about the 25 papers described in this section. The remaining 75 papers evaluated in this study are listed in Table A1, presented in Appendix A.

4. General Statistics

This section presents some statistics that help show how the papers investigated in this review are distributed across some important aspects by which they can be categorized.

4.1. Citations

The number of citations that early COVID-19 papers received is significant. Approximately two years after the pandemic started, the top paper in this review had been cited by 1848 subsequent works; usually, such a number is reached only many years after publication. All of this makes the COVID-19 pandemic a significant event worth analyzing.
The number of citations ranged from 1848 to 65, with a mean of 251.5 citations, a standard deviation of 323.7, and an interquartile range of 182.7.
Since the time of publication is critical, to normalize the number of citations we calculated the average number of citations per day (CPD) for each of the 100 papers and sorted them in decreasing order. Only two papers originally outside the top 25 made their way into the top 25 of this reordered list; in general, only a few slight changes affected the original ranking. The work carried out by Kassania et al. [36] was originally in the 35th position and, after reordering, was placed in the 16th position; the work presented by Rahman et al. [37] was originally in the 32nd position and, after reordering, was placed in the 20th position.
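The CPD normalization itself is straightforward; the sketch below illustrates it with placeholder titles, counts, and dates (none taken from the reviewed papers).

```python
# Citations-per-day (CPD): citation count divided by days since publication.
from datetime import date

def cpd(citations, published, as_of=date(2022, 7, 12)):
    """Average citations per day between publication and the search date."""
    days = (as_of - published).days
    return citations / days if days > 0 else float("inf")

papers = [("paper A", 1500, date(2020, 5, 11)),   # placeholder values
          ("paper B", 300, date(2021, 6, 1))]
ranked = sorted(papers, key=lambda p: cpd(p[1], p[2]), reverse=True)
```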

4.2. Publication Dates

As discussed, COVID-19 attracted researchers from many different areas, resulting in many works being published quickly. A simple search in any engine can easily retrieve hundreds of published papers in different fields.
Given the popularity of the topic, the submission and publication dates, and the difference between them, are a fascinating detail to analyze. At first, it is possible to notice a constant flow of publications from the beginning of the pandemic until the second quarter of 2021. Figure 3 presents an exploratory overview of the top-100 selected papers and their submission and publication dates. For 14 papers, the submission date is unavailable; hence, only the publication date is displayed. Among the selected papers, only one was submitted and published after June 2021. The reason for that might simply be that we selected papers based on the number of citations, and there was probably not enough time for newer papers to accumulate citations.
Table 3 displays the exact number of papers submitted and published per quarter from 2020 Q1 to 2022 Q1. Most of the selected papers were submitted during the first half of 2020, and almost all within the first three quarters of 2020: of the 86 papers with submission dates available, 60 (≈70%) were submitted in the first half of 2020, and 78 (≈91%) in the first three quarters of 2020.
Such a skewed distribution of early submissions is somewhat expected for two main reasons: (i) there was a huge commotion at the start of the pandemic to find solutions that could be applied in practice, and (ii) early papers laid the foundations by proposing novel datasets and methods, and hence obtained many citations by subsequent works.
In this vein, when analyzing the difference between submission and publication dates in Figure 3, it is also possible to notice that many papers had a minimal time difference, meaning that editors and publishers were very fast and efficient in publishing early COVID-19-related papers.

4.3. Countries

The top-100 papers originated from 34 countries in total. For this analysis, we considered the country of each author’s institution. Figure 4 presents an exploratory visual representation of their global distribution. China dominated the publication share with a total of 207 authors, 32.6% of the total. India followed with a total of 65 authors, 10.2% of the total. The United States is in third position with 42 authors, 6.6%. Appendix B presents the exact distribution for all 34 countries.

4.4. CXR vs. CT Scan

Two prominent medical image tests are used to investigate the lungs and, consequently, to support pneumonia diagnosis: CXR and CT scan. Even though CT scan is considered the gold standard for pneumonia analysis, we cannot ignore that CXR has many advantages as well, as it is more widespread, cheaper, and faster to obtain. There are several health centers worldwide where a CXR machine is available and a CT scan machine is not.
CXR images were the most frequently used in the top-100 papers reviewed here, exclusively assessed in 61 of them. Furthermore, 28 papers used only CT images, and 11 used both these types of images, as shown in Table 4. As discussed in Section 4.9, the likely reason for such distribution is twofold: (i) there were many COVID-19 CXR datasets available early, and (ii) CXR images are much more manageable and lighter to process than a CT scan volume. The average number of citations per paper did not vary much between CT-scan and CXR images.

4.5. Datasets

The limited number of publicly available CXR and CT scan images was a shortcoming of almost every paper reviewed here. We must remember that most of the top-cited papers were published in 2020, in the early days of the virus, when information about COVID-19 was still emerging.
Table 5 presents the most frequently used datasets. Many papers composed novel datasets by combining images from different sources. Such a trend is clear when we analyze the usage frequency of each dataset. For instance, Kaggle (pneumonia) and ChestX-ray8/ChestX-ray14 are publicly available datasets that precede the pandemic; researchers used them to obtain images of other pathogens or of healthy patients.
The Dr. Joseph Cohen initiative was the most used dataset overall, appearing in 55 papers. It was followed by two non-COVID datasets, Kaggle (pneumonia) and ChestX-ray8/ChestX-ray14, used by 29 and 22 papers, respectively.
The availability of public datasets is a game changer. Table 6 presents the distribution of public and private datasets. Most of the selected papers used public datasets for evaluation and received substantially more citations.

4.6. Learning Setup

The learning setup is the set of decisions, details, and parameters that control the classification process, comprising mainly the algorithms and data transformations employed.
In this review, we separate the classifier types into two categories: deep and shallow methods. We use the term shallow method to refer to any method other than deep learning. Table 7 presents the classifier type distribution in this review. The use of deep learning methods, especially convolutional neural networks (CNNs), has been steadily increasing and dominating image-based pattern-recognition tasks over the last few years. The trend is very prominent in the selected papers: 81 exclusively applied deep learning, whereas only 12 applied shallow methods, and 7 used both kinds of methods. The average number of citations per paper is also substantially higher for deep learning proposals.
One of the main advantages of CNNs, compared to shallow methods, is their ability to automatically learn helpful features from images, reducing the burden of applying and evaluating multiple handcrafted feature extractors. However, deep learning requires a large amount of training data to converge due to its many trainable parameters.
Usually, it is unfeasible to use shallow methods directly on images, as is done with CNNs. First, one must extract features from the images by using handcrafted methods, which summarize image characteristics such as texture, shape, and color. The activations from a layer of a pre-trained CNN can also be used as features, in which case they are referred to as deep features. Deep feature extraction is an automated process, i.e., it does not focus on a specific characteristic. The usage of deep features aims to take advantage of the CNNs’ ability to learn valuable features automatically while reducing the need for a large dataset, which was precisely the situation created by COVID-19 data scarcity, especially during the pandemic’s early days.
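A minimal sketch of this deep-features strategy is shown below, assuming a pre-trained ResNet50 as the fixed extractor and an SVM as the shallow classifier; the code is illustrative and the data arrays are placeholders.

```python
# Deep features: use a pre-trained CNN as a fixed feature extractor and train
# a shallow classifier (here an SVM) on the extracted vectors.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # drop the classification head, keep 2048-d features
backbone.eval()

@torch.no_grad()
def deep_features(batch):  # batch: (N, 3, 224, 224) preprocessed images
    return backbone(batch).numpy()

# Placeholders: X_train is a tensor of training images, y_train the labels
# (e.g., 0 = normal, 1 = COVID-19).
# clf = SVC(kernel="rbf").fit(deep_features(X_train), y_train)
```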
As shown in Table 8, among the 19 papers using shallow methods, eight (≈42%) used deep features, five (≈26%) used handcrafted features, and six (≈32%) leveraged both kinds of features.
Nevertheless, several techniques have been proposed to overcome deep learning’s hunger for data. Transfer learning and data augmentation are probably the most popular among them.
Transfer learning is a method that uses the knowledge obtained from solving one problem as a starting point for a different problem; the model can then be fine-tuned for the specific task. As displayed in Table 9, among the papers reviewed, 65 used transfer learning, 32 did not, and two evaluated models with and without it. ImageNet was the most frequent source problem used as a starting point, given its ability to generalize well to many subsequent tasks, even medical ones [38]. X-ray and CT scan images are visually and perceptually different from the images available in ImageNet, which could ultimately render the transfer learning useless. However, there are reports in the literature showing that, even in this setting, transfer learning from ImageNet can boost performance across various deep models [39].
Data augmentation is a technique used to increase the available training data by slightly changing the already existing data. The transformations include rotations, translations, crops, and random changes in color, brightness, contrast, and other factors. The creation of synthetic data is also considered a type of data augmentation; generative adversarial networks (GAN) have been applied to generate synthetic images. Data augmentation also helps to reduce overfitting, acting as a model regularizer. Among the papers reviewed (Table 10), 48 used data augmentation during training, whereas 51 did not, and one evaluated both scenarios. Again, given the scarcity of COVID-19 images, data augmentation could be a powerful ally when training deep models.
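As an illustration of the transformations listed above, a typical augmentation pipeline for CXR images could be written as follows; the parameter values are placeholders rather than settings taken from any reviewed paper.

```python
# A typical (illustrative) augmentation pipeline covering rotations,
# translations, crops, and brightness/contrast jitter.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```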

4.7. Segmentation

In digital image processing, image segmentation separates the input into multiple segments to ease subsequent analysis or further processing. In classification tasks, segmentation can reduce unnecessary image background information that might interfere with the recognition process. In medical image analysis, segmentation could be considered an even more essential task because a misdiagnosis can have severe consequences for a patient, who may receive inappropriate treatment.
Considering a scenario of COVID-19 identification by using medical images, one would ideally first segment the lung area to remove the unnecessary information and then perform the detection or classification. The rationale is straightforward: the inflammation caused by COVID-19 is located in the lung area, so isolating it should only improve the classification.
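A sketch of this segment-then-classify pipeline is given below, assuming a trained U-Net-style network (seg_net) that outputs lung-mask logits and a trained CNN classifier; both are hypothetical stand-ins.

```python
# Segment-then-classify: predict a binary lung mask, zero out everything
# outside the lungs, then classify the masked image.
import torch

@torch.no_grad()
def classify_with_lung_mask(image, seg_net, classifier, threshold=0.5):
    """image: (1, 1, H, W) tensor; returns class logits on the masked input."""
    mask = (torch.sigmoid(seg_net(image)) > threshold).float()  # lung mask
    masked = image * mask  # suppress background outside the lungs
    return classifier(masked)
```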
As displayed in Table 11, among the top-100 papers analyzed in this review, only 25 considered lung segmentation as part of the classification pipeline; the 75 remaining papers skipped it entirely, without discussing a reason or justification. Out of the 25, two papers manually segmented the RoI before proceeding to the detection [15,21]; the 23 remaining articles applied automated strategies, usually based on deep networks, to segment the lung region.
It is often argued that, with deep strategies, some pre-processing steps, such as segmentation, can be overlooked because of the amount of data available. However, thinking critically about segmentation has significant benefits that cannot be ignored. Considering COVID-19, there are reports in the literature showing that, without segmentation, the model might focus outside the lung region, resulting in a biased performance [40,41,42]. Hence, the classification performance reported in works that did not apply lung segmentation could also be biased.

4.8. Explainable Artificial Intelligence (XAI)

Explainable artificial intelligence (XAI) is a field that focuses on methods and approaches that can be utilized to explain model predictions. The primary objective is to determine which features the model actively employs when making predictions. When training deep models, there is no assurance as to which feature the model will prioritize, which is why such models are frequently referred to as black-box classification models.
Often, XAI can be used to verify which portions of the input image are being decisively used to reach a particular prediction. In medical images, it is possible to take advantage of such behavior to ensure the model focuses on the right things.
Following almost the same trend as segmentation, of the 100 papers considered, only 25 applied XAI to evaluate the black-box model. Table 12 presents the distribution of each XAI method used in the featured papers; as some papers applied more than one XAI method, the totals exceed the number of papers. Methods based on class activation mapping (CAM) and its variations are the most popular, most likely due to their simplicity, ease of use, and overall accuracy [7,8,43]. Other interesting visualization methods are also used, less frequently, including saliency maps [44,45], LIME [9], LRP [10], and GSInquire [46]. One of the papers applied proprietary software called the uAI Intelligent Assistant Analysis System to analyze CT scans [47].
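To make the most popular of these methods concrete, the sketch below implements a minimal Grad-CAM: the last convolutional feature maps are weighted by the spatially pooled gradients of the target class score. It is an illustrative PyTorch implementation, not the code of any reviewed paper.

```python
# Minimal Grad-CAM via forward/backward hooks on the last conv block.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(image, class_idx):
    """image: (1, 3, 224, 224) tensor; returns an (H, W) heatmap in [0, 1]."""
    model.zero_grad()
    model(image)[0, class_idx].backward()           # gradients of class score
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```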

4.9. Reproducibility

For obvious reasons, the concern with the quality and reproducibility of the works addressed here cannot be left out. This section discusses the availability of datasets and codes among the selected papers.
We cannot neglect that the pandemic brought several critical factors that made research development much more difficult. Initially, the scarcity of data was one of these factors: when the pandemic arose, there was naturally not enough available, labeled data to support the development of COVID-19-related research.
In this context, some researchers put much effort into providing, as fast as possible, datasets with labeled and organized data that could be made available to the research community.
Some of the most remarkable pioneering initiatives for COVID-19 dataset creation, for both CXR and CT scans, include Dr. Joseph Cohen’s open-source image collection and the COVIDx dataset, both mentioned in Section 3. These datasets were of great importance to the scientific developments obtained in this field of research during the pandemic. However, it is crucial to make some remarks regarding the use of these datasets in most investigations.
The vast majority of works argue that the scarcity of data was an obstacle to experimental development. In this sense, many works performed experiments on particular image collections, sampling the datasets mentioned earlier as well as non-COVID image datasets created before the pandemic to investigate other lung diseases, such as cancer. Among these non-COVID datasets, we can highlight the chest X-ray dataset (https://www.kaggle.com/datasets/nih-chest-xrays/sample (accessed on 12 July 2022)), composed of images provided by the National Institutes of Health (NIH), an American medical agency; this dataset was first introduced to the research community by Wang et al. [48]. Other important sources used in many works are the Radiological Society of North America (RSNA) data collection (https://www.kaggle.com/c/rsna-pneumonia-detection-challenge (accessed on 12 July 2022)) and the Radiopaedia imaging datasets (https://radiopaedia.org/articles/imaging-data-sets-artificial-intelligence (accessed on 12 July 2022)).
On the one hand, creating ad hoc datasets was a good strategy to overcome the limitations imposed by the lack of data, favoring the creation of more robust models. On the other hand, it makes it very difficult to compare the results obtained by different works directly. Hence, we do not focus on the classification rates obtained by the works reviewed here; furthermore, different works do not necessarily use the same metric for performance evaluation. It is also essential to observe that in some works, the authors organized the data to perform binary classifications (e.g., COVID-19 vs. non-COVID-19), whereas in others, a multi-class scenario was proposed (e.g., COVID-19 vs. bacterial pneumonia vs. viral pneumonia vs. normal).
Code availability is another important aspect related to reproducibility. It is of great importance for continuous research development, as it allows other researchers to make progress starting from previously developed works. Approximately one in three works contributed in this sense: 33 papers among the top-100 made their code available. This rate is slightly better among the top-25 papers, 11 of which (44%) made their code available.

4.10. Non-Peer-Reviewed Excluded Papers

As already mentioned, in the second filtering round (F2) we excluded papers that were not peer-reviewed. We decided to exclude these papers to ensure a reasonable level of quality, validity, and originality. However, the fact that those papers were not peer-reviewed does not necessarily imply low quality, as suggested by their impressive numbers of citations. Table 13 summarizes the 18 works excluded in F2. The best-ranked paper in this list (Hedman et al. [49]) was placed in the eighth position before filtering.

5. Concluding Remarks

First of all, it is important to point out that all the efforts made in the search for COVID-19 solutions are worth noting, and many vital achievements were obtained thanks to the commitment of the research community from different fields of study. However, it is also reasonable to look back and evaluate the significant impacts and contributions in the context investigated here, as well as some limitations that may have prevented even better results.
Based on the rationale that the number of citations obtained by a paper is probably the most straightforward and intuitive way to verify its impact on the research community, we presented here a review of the top-100 most cited papers on the development of computer-based strategies for COVID-19 detection from thoracic medical imaging. Below, we highlight some remarkable findings and analyze them from the different perspectives addressed in this review.
One of the first aspects that attracts attention is the predominance of deep learning methods over shallow methods. On the one hand, this makes sense because deep models have been obtaining outstanding results for image classification in several different application domains. On the other hand, it is also essential to observe that many of the works reviewed here were developed at the beginning of the pandemic. Many of these works used transfer learning, taking advantage of pre-trained weights produced from other datasets that, in general, were not composed exclusively of medical images. In addition, many works did not perform fine-tuning. This kind of strategy is easy to understand in the initial phase of the pandemic, when data was scarce. However, we conjecture that there is room for further investigation through studies focused on obtaining features more specifically tailored to COVID-19 detection.
Another important aspect is the imbalance between the number of works developed by using CXR and CT scans. As described in Section 4.4, many more works are devoted to CXR images. Even though CT scan provides a more precise result, it is important to remember that CXR is cheaper and more widespread. In many less economically developed places, CT scan is not even available. So, investigating both scenarios is essential and must continue for different reasons.
Figure 5 displays a word cloud summarizing the most frequent words in the abstracts of all papers. Despite being an informal analysis, the predominance of deep learning over shallow methods is readily apparent: terms referring to deep learning, such as deep, learning, convolutional, CNN, neural, and network, are evident in the word cloud, whereas no visible terms refer to shallow methods. Another visible difference concerns the type of image: terms related to chest X-ray, such as xray and CXR, are more prominent than terms referring to CT scan.
Last but not least, we discuss the feasibility of applying the strategies described in the papers reviewed here in a real scenario. None of the papers reviewed here has been applied in a real scenario, even considering the works to which health professionals contributed as co-authors. Only three made an application available online, aiming to provide a system that could help support COVID-19 diagnosis. In addition, 40 out of the 100 papers counted on the support of health professionals. In this case, we adopted a quite flexible requirement to define papers with a contribution from health professionals: every paper with at least one co-author affiliated with a hospital, health institute, department, or university was considered in this category. Thus, despite the impressive progress already made, there are still some important aspects to be addressed in future research.

Author Contributions

Conceptualization, Y.M.G.C. and R.M.P.; data curation, Y.M.G.C., L.O.T., S.A.S.J. and D.B.; investigation, Y.M.G.C., L.O.T. and S.A.S.J.; methodology, Y.M.G.C. and R.M.P.; project administration, Y.M.G.C.; supervision, A.S.B.J., L.S.O. and G.D.C.C.; validation, A.S.B.J., L.S.O. and G.D.C.C.; writing—original draft preparation, Y.M.G.C. and L.O.T.; writing—review and editing, S.A.S.J., D.B., A.S.B.J., L.S.O. and G.D.C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Brazilian agencies National Council for Scientific and Technological Development (CNPq) and Coordination for the Improvement of Higher Education Personnel (CAPES).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the Brazilian Agencies National Council for Scientific and Technological Development (CNPq), Coordination for the Improvement of Higher Education Personnel (CAPES), and Federal University of Technology—Parana (UTFPR).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC: Area Under the Curve
BSIF: Binarized Statistical Image Features
CAM: Class Activation Mapping
COVID-19: COrona VIrus Disease of 2019
CNN: Convolutional Neural Network
CPD: Average number of Citations Per Day
CT: Computerized Tomography
CXR: Chest X-ray
GAN: Generative Adversarial Networks
LBP: Local Binary Patterns
LIME: Local Interpretable Model-agnostic Explanations
LPQ: Local Phase Quantization
LSTM: Long Short-Term Memory
MERS: Middle East Respiratory Syndrome
MTL: Multi-Task deep Learning
NIH: National Institutes of Health
NPV: Negative Predictive Value
PPV: Positive Predictive Value
RoI: Region of Interest
RSNA: Radiological Society of North America
SARS: Severe Acute Respiratory Syndrome
SMO: Social Mimic Optimization
SVM: Support Vector Machine
XAI: eXplainable Artificial Intelligence
YOLO: You Only Look Once

Appendix A

In this appendix, we present the reference, publication year, and number of citations of the 75 papers that belong to the top-100 but are beyond the top-25 discussed in Section 3.
Table A1. List of the 75 papers among the top-100 not detailed in Section 3.

Rank | Authors–Reference | Year 1 | Citations 2
26 | Hu et al. [66] | 2020 | 266
27 | Mahmud et al. [68] | 2020 | 265
28 | Jain et al. [70] | 2021 | 229
29 | Horry et al. [72] | 2020 | 224
30 | Apostolopoulos et al. [74] | 2020 | 223
31 | Altan and Karasu [76] | 2020 | 215
32 | Rahman et al. [37] | 2021 | 211
33 | Rajaraman et al. [79] | 2020 | 205
34 | El Asnaoui and Chawki [81] | 2021 | 198
35 | Kassania et al. [36] | 2021 | 184
36 | Luz et al. [84] | 2021 | 181
37 | Panwar et al. [86] | 2020 | 179
38 | Ahuja et al. [88] | 2021 | 172
39 | Ko et al. [90] | 2020 | 168
40 | Nayak et al. [92] | 2021 | 150
41 | Wu et al. [94] | 2020 | 150
42 | Cohen et al. [96] | 2020 | 147
43 | Alazab et al. [98] | 2020 | 146
44 | Hassantabar et al. [100] | 2020 | 144
45 | Yoo et al. [102] | 2020 | 142
46 | Maguolo and Nanni [42] | 2021 | 141
47 | Jain et al. [105] | 2020 | 141
48 | Tartaglione et al. [107] | 2020 | 140
49 | Hussain et al. [109] | 2021 | 136
50 | Chandra et al. [111] | 2021 | 131
51 | Ni et al. [113] | 2020 | 130
52 | Shah et al. [115] | 2021 | 125
53 | Kumar et al. [116] | 2021 | 124
54 | Basu et al. [118] | 2020 | 123
55 | Asif et al. [120] | 2020 | 120
56 | Sedik et al. [121] | 2020 | 119
57 | Zargari et al. [123] | 2021 | 116
58 | Rahimzadeh et al. [125] | 2021 | 115
59 | Punn and Agarwal [127] | 2021 | 113
60 | Sedik et al. [129] | 2021 | 112
61 | Vaid et al. [131] | 2020 | 110
62 | Karim et al. [133] | 2020 | 107
63 | Zebin and Rezvy [135] | 2021 | 107
64 | Rahaman et al. [67] | 2020 | 105
65 | Civit-Masot et al. [69] | 2020 | 105
66 | Ouchicha et al. [71] | 2020 | 105
67 | Silva et al. [73] | 2020 | 105
68 | Tuncer et al. [75] | 2020 | 99
69 | Hammoudi et al. [77] | 2021 | 98
70 | Ohata et al. [78] | 2020 | 96
71 | Zhou et al. [80] | 2021 | 94
72 | Hasan et al. [82] | 2020 | 92
73 | Sitaula et al. [83] | 2021 | 91
74 | Gupta et al. [85] | 2021 | 87
75 | Dansana et al. [87] | 2020 | 87
76 | Turkoglu [89] | 2021 | 87
77 | Sekeroglu and Ozsahin [91] | 2021 | 87
78 | Che et al. [93] | 2020 | 86
79 | Pham [95] | 2021 | 83
80 | Ning et al. [97] | 2021 | 83
81 | Ibrahim et al. [99] | 2021 | 81
82 | Ibrahim et al. [101] | 2021 | 80
83 | Pham [103] | 2020 | 80
84 | Saood and Hatem [104] | 2021 | 80
85 | Öztürk et al. [106] | 2021 | 79
86 | Abraham and Nair [108] | 2020 | 79
87 | Makris et al. [110] | 2020 | 79
88 | Alshazly et al. [112] | 2021 | 78
89 | Rasheed et al. [114] | 2021 | 76
90 | Zhang et al. [47] | 2020 | 75
91 | Li et al. [117] | 2020 | 74
92 | Al-Waisy et al. [119] | 2020 | 73
93 | Haghanifar et al. [41] | 2022 | 73
94 | Lassau et al. [122] | 2021 | 71
95 | Nishio et al. [124] | 2020 | 69
96 | Das et al. [126] | 2021 | 69
97 | Shankar and Perumal [128] | 2021 | 68
98 | Abdel-Basset et al. [130] | 2021 | 66
99 | Saha et al. [132] | 2021 | 65
100 | Sakib et al. [134] | 2020 | 65

1 Considering the publication date. 2 According to Google Scholar on 12 July 2022.

Appendix B

In this appendix, we present the complete distribution of authors by country, as discussed in Section 4.3.
Table A2. Distribution of authors by country.

Country | Number of Authors
China | 207 (32.6%)
India | 65 (10.3%)
USA | 42 (6.6%)
France | 37 (5.8%)
Turkey | 33 (5.2%)
Brazil | 26 (4.1%)
Egypt | 22 (3.5%)
South Korea | 22 (3.5%)
Canada | 21 (3.3%)
Australia | 13 (2.1%)
Bangladesh | 13 (2.1%)
UK | 13 (2.1%)
Germany | 12 (1.9%)
Italy | 11 (1.7%)
Qatar | 10 (1.6%)
Iran | 9 (1.4%)
Iraq | 9 (1.4%)
Greece | 8 (1.3%)
Malaysia | 8 (1.3%)
Saudi Arabia | 7 (1.1%)
Spain | 7 (1.1%)
Hong Kong | 5 (0.8%)
Japan | 5 (0.8%)
Mexico | 5 (0.8%)
Morocco | 5 (0.8%)
Jordan | 4 (0.6%)
Pakistan | 4 (0.6%)
Netherlands | 3 (0.5%)
Singapore | 2 (0.3%)
Syria | 2 (0.3%)
Algeria | 1 (0.2%)
Finland | 1 (0.2%)
Norway | 1 (0.2%)
Vietnam | 1 (0.2%)

References

  1. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. arXiv 2020, arXiv:2003.11597. [Google Scholar]
  2. Afshar, A.; Tabrizi, A. A bibliometric analysis of the 100 top-cited articles about COVID-19. Arch. Bone Jt. Surg. 2020, 8, 748. [Google Scholar] [CrossRef]
  3. ElHawary, H.; Salimi, A.; Diab, N.; Smith, L. Bibliometric Analysis of Early COVID-19 Research: The Top 50 Cited Papers. Infect. Dis. Res. Treat. 2020, 13, 1178633720962935. [Google Scholar] [CrossRef] [PubMed]
  4. Volodina, O.V. Formation of Future Teachers’ Worldview Culture by Means of Foreign-Language Education. Prospect. Sci. Educ. 2022, 57, 126–159. Available online: https://pnojournal.wordpress.com/2022/07/01/volodina-3/ (accessed on 12 July 2022). [CrossRef]
  5. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar] [CrossRef]
  6. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  7. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  8. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  9. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
  10. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef]
  11. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef]
  12. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
  13. Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef]
  14. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef]
  15. Wang, G.; Liu, X.; Shen, J.; Wang, C.; Li, Z.; Ye, L.; Wu, X.; Chen, T.; Wang, K.; Zhang, X.; et al. A deep-learning pipeline for the diagnosis and discrimination of viral, non-viral and COVID-19 pneumonia from chest X-ray images. Nat. Biomed. Eng. 2021, 5, 509–521. [Google Scholar] [CrossRef] [PubMed]
  16. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J.; et al. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 2020, 6, 1122–1129. [Google Scholar] [CrossRef] [PubMed]
  17. Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581. [Google Scholar] [CrossRef]
  18. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864. [Google Scholar] [CrossRef] [PubMed]
  19. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Chong, Y.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780. [Google Scholar] [CrossRef]
  20. Oh, Y.; Park, S.; Ye, J.C. Deep learning COVID-19 features on CXR using limited training data sets. IEEE Trans. Med. Imaging 2020, 39, 2688–2700. [Google Scholar] [CrossRef]
  21. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795. [Google Scholar] [CrossRef]
  22. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Chen, Q.; Huang, S.; Yang, M.; Yang, X.; et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 2020, 10, 19196. [Google Scholar] [CrossRef]
  23. Ucar, F.; Korkmaz, D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med. Hypotheses 2020, 140, 109761. [Google Scholar] [CrossRef]
  24. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. Covid-caps: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef]
  25. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944. [Google Scholar] [CrossRef] [PubMed]
  26. Huang, L.; Han, R.; Ai, T.; Yu, P.; Kang, H.; Tao, Q.; Xia, L. Serial quantitative chest CT assessment of COVID-19: A deep learning approach. Radiol. Cardiothorac. Imaging 2020, 2, e200075. [Google Scholar] [CrossRef] [PubMed]
  27. Toğaçar, M.; Ergen, B.; Cömert, Z. COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Comput. Biol. Med. 2020, 121, 103805. [Google Scholar] [CrossRef] [PubMed]
  28. Pereira, R.M.; Bertolini, D.; Teixeira, L.O.; Silla, C.N., Jr.; Costa, Y.M. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput. Methods Programs Biomed. 2020, 194, 105532. [Google Scholar] [CrossRef]
  29. Wang, S.; Zha, Y.; Li, W.; Wu, Q.; Li, X.; Niu, M.; Wang, M.; Qiu, X.; Li, H.; Yu, H.; et al. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur. Respir. J. 2020, 56, 2000775. [Google Scholar] [CrossRef]
  30. Maghdid, H.S.; Asaad, A.T.; Ghafoor, K.Z.; Sadiq, A.S.; Mirjalili, S.; Khan, M.K. Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. In Proceedings of the 2021 Multimodal Image Exploitation and Learning, Online, 12–16 April 2021; SPIE: Bellingham, WA, USA, 2021; Volume 11734, pp. 99–110. [Google Scholar]
  31. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays. Comput. Methods Programs Biomed. 2020, 196, 105608. [Google Scholar] [CrossRef]
  32. Loey, M.; Smarandache, F.; Khalifa, N.E.M. Within the lack of chest COVID-19 X-ray dataset: A novel detection model based on GAN and deep transfer learning. Symmetry 2020, 12, 651. [Google Scholar] [CrossRef]
  33. Islam, M.Z.; Islam, M.M.; Asraf, A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef]
  34. Ismael, A.M.; Şengür, A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst. Appl. 2021, 164, 114054. [Google Scholar] [CrossRef]
  35. Amyar, A.; Modzelewski, R.; Li, H.; Ruan, S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput. Biol. Med. 2020, 126, 104037. [Google Scholar] [CrossRef]
  36. Kassania, S.H.; Kassanib, P.H.; Wesolowskic, M.J.; Schneidera, K.A.; Detersa, R. Automatic detection of coronavirus disease (COVID-19) in X-ray and CT images: A machine learning based approach. Biocybern. Biomed. Eng. 2021, 41, 867–879. [Google Scholar] [CrossRef] [PubMed]
  37. Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar] [CrossRef] [PubMed]
  38. Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput. Biol. Med. 2021, 128, 104115. [Google Scholar] [CrossRef] [PubMed]
  39. Ke, A.; Ellsworth, W.; Banerjee, O.; Ng, A.Y.; Rajpurkar, P. CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation. In Proceedings of the Conference on Health, Inference, and Learning (CHIL ’21), Virtual Event, 7–8 April 2022; Association for Computing Machinery: New York, NY, USA, 2021; pp. 116–124. [Google Scholar] [CrossRef]
  40. Teixeira, L.O.; Pereira, R.M.; Bertolini, D.; Oliveira, L.S.; Nanni, L.; Cavalcanti, G.D.; Costa, Y.M. Impact of lung segmentation on the diagnosis and explanation of COVID-19 in chest X-ray images. Sensors 2021, 21, 7116. [Google Scholar] [CrossRef]
  41. Haghanifar, A.; Majdabadi, M.M.; Choi, Y.; Deivalakshmi, S.; Ko, S. Covid-cxnet: Detecting COVID-19 in frontal chest X-ray images using deep learning. Multimed. Tools Appl. 2022, 1–31. [Google Scholar] [CrossRef]
  42. Maguolo, G.; Nanni, L. A critic evaluation of methods for COVID-19 automatic detection from X-ray images. Inf. Fusion 2021, 76, 1–7. [Google Scholar] [CrossRef]
  43. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  44. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the Computer Vision—ECCV 2014, 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 818–833. [Google Scholar]
  45. Springenberg, J.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for Simplicity: The All Convolutional Net. In Proceedings of the ICLR (Workshop Track), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  46. Lin, Z.Q.; Shafiee, M.J.; Bochkarev, S.; Jules, M.S.; Wang, X.Y.; Wong, A. Do explanations reflect decisions? A machine-centric strategy to quantify the performance of explainability algorithms. arXiv 2019, arXiv:1910.07387. [Google Scholar]
  47. Zhang, H.T.; Zhang, J.S.; Zhang, H.H.; Nan, Y.D.; Zhao, Y.; Fu, E.Q.; Xie, Y.H.; Liu, W.; Li, W.P.; Zhang, H.J.; et al. Automated detection and quantification of COVID-19 pneumonia: CT imaging analysis by a deep learning-based software. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2525–2532. [Google Scholar] [CrossRef]
  48. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R. Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE CVPR, Honolulu, HI, USA, 21–26 July 2017; Volume 7. [Google Scholar]
  49. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055. [Google Scholar]
  50. Gozes, O.; Frid-Adar, M.; Greenspan, H.; Browning, P.D.; Zhang, H.; Ji, W.; Bernheim, A.; Siegel, E. Rapid ai development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv 2020, arXiv:2003.05037. [Google Scholar]
  51. Zheng, C.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Wang, X. Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv 2020. [Google Scholar] [CrossRef]
  52. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Lung infection quantification of COVID-19 in CT images with deep learning. arXiv 2020, arXiv:2003.04655. [Google Scholar]
  53. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. COVID-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv 2020, arXiv:2003.12338. [Google Scholar]
  54. Farooq, M.; Hafeez, A. Covid-resnet: A deep learning framework for screening of COVID19 from radiographs. arXiv 2020, arXiv:2003.14395. [Google Scholar]
  55. Ghoshal, B.; Tucker, A. Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. arXiv 2020, arXiv:2003.10769. [Google Scholar]
  56. He, X.; Yang, X.; Zhang, S.; Zhao, J.; Zhang, Y.; Xing, E.; Xie, P. Sample-efficient deep learning for COVID-19 diagnosis based on CT scans. medRxiv 2020. [Google Scholar] [CrossRef]
  57. Hall, L.O.; Paul, R.; Goldgof, D.B.; Goldgof, G.M. Finding COVID-19 from chest X-rays using deep learning on a small dataset. arXiv 2020, arXiv:2004.02060. [Google Scholar]
  58. Punn, N.S.; Sonbhadra, S.K.; Agarwal, S. COVID-19 epidemic analysis using machine learning and deep learning algorithms. medRxiv 2020. [Google Scholar] [CrossRef]
  59. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Elghamrawy, S. Detection of coronavirus (COVID-19) associated pneumonia based on generative adversarial networks and a fine-tuned deep transfer learning model using chest X-ray dataset. arXiv 2020, arXiv:2004.01184. [Google Scholar]
  60. Mahdy, L.N.; Ezzat, K.A.; Elmousalami, H.H.; Ella, H.A.; Hassanien, A.E. Automatic X-ray COVID-19 lung image classification system based on multi-level thresholding and support vector machine. medRxiv 2020. [Google Scholar] [CrossRef]
  61. Alom, M.Z.; Rahman, M.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. COVID_MTNet: COVID-19 detection with multi-task deep learning approaches. arXiv 2020, arXiv:2004.03747. [Google Scholar]
  62. Mangal, A.; Kalia, S.; Rajgopal, H.; Rangarajan, K.; Namboodiri, V.; Banerjee, S.; Arora, C. CovidAID: COVID-19 detection using chest X-ray. arXiv 2020, arXiv:2004.09803. [Google Scholar]
  63. Kumar, R.; Arora, R.; Bansal, V.; Sahayasheela, V.J.; Buckchash, H.; Imran, J.; Narayanan, N.; Pandian, G.N.; Raman, B. Accurate prediction of COVID-19 using chest X-ray images through deep feature learning model with SMOTE and machine learning classifiers. medRxiv 2020. [Google Scholar] [CrossRef]
  64. Rajinikanth, V.; Dey, N.; Raj, A.N.J.; Hassanien, A.E.; Santosh, K.; Raja, N. Harmony-search and otsu based system for coronavirus disease (COVID-19) detection using lung CT scan images. arXiv 2020, arXiv:2004.03431. [Google Scholar]
  65. Castiglioni, I.; Ippolito, D.; Interlenghi, M.; Monti, C.B.; Salvatore, C.; Schiaffino, S.; Polidori, A.; Gandola, D.; Messa, C.; Sardanelli, F. Artificial intelligence applied on chest X-ray can aid in the diagnosis of COVID-19 infection: A first experience from Lombardy, Italy. medRxiv 2020. [Google Scholar] [CrossRef]
  66. Hu, S.; Gao, Y.; Niu, Z.; Jiang, Y.; Li, L.; Xiao, X.; Wang, M.; Fang, E.F.; Menpes-Smith, W.; Xia, J.; et al. Weakly supervised deep learning for COVID-19 infection detection and classification from ct images. IEEE Access 2020, 8, 118869–118883. [Google Scholar] [CrossRef]
  67. Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Rahman, M.A.; Wang, Q.; Qi, S.; Kong, F.; Zhu, X.; Zhao, X. Identification of COVID-19 samples from chest X-ray images using deep learning: A comparison of transfer learning approaches. J. X-ray Sci. Technol. 2020, 28, 821–839. [Google Scholar] [CrossRef]
  68. Mahmud, T.; Rahman, M.A.; Fattah, S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869. [Google Scholar] [CrossRef]
  69. Civit-Masot, J.; Luna-Perejón, F.; Domínguez Morales, M.; Civit, A. Deep learning system for COVID-19 diagnosis aid using X-ray pulmonary images. Appl. Sci. 2020, 10, 4640. [Google Scholar] [CrossRef]
  70. Jain, R.; Gupta, M.; Taneja, S.; Hemanth, D.J. Deep learning based detection and analysis of COVID-19 on chest X-ray images. Appl. Intell. 2021, 51, 1690–1700. [Google Scholar] [CrossRef]
  71. Ouchicha, C.; Ammor, O.; Meknassi, M. CVDNet: A novel deep learning architecture for detection of coronavirus (COVID-19) from chest X-ray images. Chaos Solitons Fractals 2020, 140, 110245. [Google Scholar] [CrossRef] [PubMed]
  72. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 detection through transfer learning using multimodal imaging data. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef] [PubMed]
  73. Silva, P.; Luz, E.; Silva, G.; Moreira, G.; Silva, R.; Lucio, D.; Menotti, D. COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis. Inform. Med. Unlocked 2020, 20, 100427. [Google Scholar] [CrossRef] [PubMed]
  74. Apostolopoulos, I.D.; Aznaouridis, S.I.; Tzani, M.A. Extracting possibly representative COVID-19 biomarkers from X-ray images with deep learning approach and image data related to pulmonary diseases. J. Med. Biol. Eng. 2020, 40, 462–469. [Google Scholar] [CrossRef]
  75. Tuncer, T.; Dogan, S.; Ozyurt, F. An automated Residual Exemplar Local Binary Pattern and iterative ReliefF based COVID-19 detection method using chest X-ray image. Chemom. Intell. Lab. Syst. 2020, 203, 104054. [Google Scholar] [CrossRef]
  76. Altan, A.; Karasu, S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos Solitons Fractals 2020, 140, 110071. [Google Scholar] [CrossRef]
  77. Hammoudi, K.; Benhabiles, H.; Melkemi, M.; Dornaika, F.; Arganda-Carreras, I.; Collard, D.; Scherpereel, A. Deep learning on chest X-ray images to detect and evaluate pneumonia cases at the era of COVID-19. J. Med. Syst. 2021, 45, 1–10. [Google Scholar] [CrossRef]
  78. Ohata, E.F.; Bezerra, G.M.; das Chagas, J.V.S.; Neto, A.V.L.; Albuquerque, A.B.; de Albuquerque, V.H.C.; Reboucas Filho, P.P. Automatic detection of COVID-19 infection using chest X-ray images through transfer learning. IEEE/CAA J. Autom. Sin. 2020, 8, 239–248. [Google Scholar] [CrossRef]
  79. Rajaraman, S.; Siegelman, J.; Alderson, P.O.; Folio, L.S.; Folio, L.R.; Antani, S.K. Iteratively pruned deep learning ensembles for COVID-19 detection in chest X-rays. IEEE Access 2020, 8, 115041–115050. [Google Scholar] [CrossRef]
  80. Zhou, T.; Lu, H.; Yang, Z.; Qiu, S.; Huo, B.; Dong, Y. The ensemble deep learning model for novel COVID-19 on CT images. Appl. Soft Comput. 2021, 98, 106885. [Google Scholar] [CrossRef]
  81. El Asnaoui, K.; Chawki, Y. Using X-ray images and deep learning for automated detection of coronavirus disease. J. Biomol. Struct. Dyn. 2021, 39, 3615–3626. [Google Scholar] [CrossRef]
  82. Hasan, A.M.; Al-Jawad, M.M.; Jalab, H.A.; Shaiba, H.; Ibrahim, R.W.; AL-Shamasneh, A.R. Classification of COVID-19 coronavirus, pneumonia and healthy lungs in CT scans using Q-deformed entropy and deep learning features. Entropy 2020, 22, 517. [Google Scholar] [CrossRef] [PubMed]
  83. Sitaula, C.; Hossain, M.B. Attention-based VGG-16 model for COVID-19 chest X-ray image classification. Appl. Intell. 2021, 51, 2850–2863. [Google Scholar] [CrossRef] [PubMed]
  84. Luz, E.; Silva, P.; Silva, R.; Silva, L.; Guimarães, J.; Miozzo, G.; Moreira, G.; Menotti, D. Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Res. Biomed. Eng. 2022, 38, 149–162. [Google Scholar] [CrossRef]
  85. Gupta, A.; Gupta, S.; Katarya, R. InstaCovNet-19: A deep learning classification model for the detection of COVID-19 patients using Chest X-ray. Appl. Soft Comput. 2021, 99, 106859. [Google Scholar] [CrossRef] [PubMed]
  86. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Singh, V. A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images. Chaos Solitons Fractals 2020, 140, 110190. [Google Scholar] [CrossRef]
  87. Dansana, D.; Kumar, R.; Bhattacharjee, A.; Hemanth, D.J.; Gupta, D.; Khanna, A.; Castillo, O. Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm. Soft Comput. 2020, 1–9. [Google Scholar] [CrossRef]
  88. Ahuja, S.; Panigrahi, B.K.; Dey, N.; Rajinikanth, V.; Gandhi, T.K. Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices. Appl. Intell. 2021, 51, 571–585. [Google Scholar] [CrossRef]
  89. Turkoglu, M. COVIDetectioNet: COVID-19 diagnosis system based on X-ray images using features selected from pre-learned deep features ensemble. Appl. Intell. 2021, 51, 1213–1226. [Google Scholar] [CrossRef]
  90. Ko, H.; Chung, H.; Kang, W.S.; Kim, K.W.; Shin, Y.; Kang, S.J.; Lee, J.H.; Kim, Y.J.; Kim, N.Y.; Jung, H.; et al. COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: Model development and validation. J. Med. Internet Res. 2020, 22, e19569. [Google Scholar] [CrossRef]
  91. Sekeroglu, B.; Ozsahin, I. Detection of COVID-19 from Chest X-ray Images Using Convolutional Neural Networks. Slas Technol. Transl. Life Sci. Innov. 2020, 25, 553–565. [Google Scholar] [CrossRef] [PubMed]
  92. Nayak, S.R.; Nayak, D.R.; Sinha, U.; Arora, V.; Pachori, R.B. Application of deep learning techniques for detection of COVID-19 cases using chest X-ray images: A comprehensive study. Biomed. Signal Process. Control. 2021, 64, 102365. [Google Scholar] [CrossRef] [PubMed]
  93. Che Azemin, M.Z.; Hassan, R.; Mohd Tamrin, M.I.; Md Ali, M.A. COVID-19 deep learning prediction model using publicly available radiologist-adjudicated chest X-ray images as training data: Preliminary findings. Int. J. Biomed. Imaging 2020, 2020, 8828855. [Google Scholar] [CrossRef] [PubMed]
  94. Wu, X.; Hui, H.; Niu, M.; Li, L.; Wang, L.; He, B.; Yang, X.; Li, L.; Li, H.; Tian, J.; et al. Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: A multicentre study. Eur. J. Radiol. 2020, 128, 109041. [Google Scholar] [CrossRef]
  95. Pham, T.D. Classification of COVID-19 chest X-rays with deep learning: New models or fine tuning? Health Inf. Sci. Syst. 2021, 9, 2. [Google Scholar] [CrossRef] [PubMed]
  96. Cohen, J.P.; Dao, L.; Roth, K.; Morrison, P.; Bengio, Y.; Abbasi, A.F.; Shen, B.; Mahsa, H.K.; Ghassemi, M.; Li, H.; et al. Predicting COVID-19 pneumonia severity on chest X-ray with deep learning. Cureus 2020, 12, e9448. [Google Scholar] [CrossRef]
  97. Ning, W.; Lei, S.; Yang, J.; Cao, Y.; Jiang, P.; Yang, Q.; Zhang, J.; Wang, X.; Chen, F.; Geng, Z.; et al. Open resource of clinical data from patients with pneumonia for the prediction of COVID-19 outcomes via deep learning. Nat. Biomed. Eng. 2020, 4, 1197–1207. [Google Scholar] [CrossRef]
  98. Alazab, M.; Awajan, A.; Mesleh, A.; Abraham, A.; Jatana, V.; Alhyari, S. COVID-19 prediction and detection using deep learning. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2020, 12, 168–181. [Google Scholar]
  99. Ibrahim, D.M.; Elshennawy, N.M.; Sarhan, A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Comput. Biol. Med. 2021, 132, 104348. [Google Scholar] [CrossRef]
  100. Hassantabar, S.; Ahmadi, M.; Sharifi, A. Diagnosis and detection of infected tissue of COVID-19 patients based on lung X-ray image using convolutional neural network approaches. Chaos Solitons Fractals 2020, 140, 110170. [Google Scholar] [CrossRef]
  101. Ibrahim, A.U.; Ozsoz, M.; Serte, S.; Al-Turjman, F.; Yakoi, P.S. Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cogn. Comput. 2021, 1–13. [Google Scholar] [CrossRef] [PubMed]
  102. Yoo, S.H.; Geng, H.; Chiu, T.L.; Yu, S.K.; Cho, D.C.; Heo, J.; Choi, M.S.; Choi, I.H.; Cung Van, C.; Nhung, N.V.; et al. Deep learning-based decision-tree classifier for COVID-19 diagnosis from chest X-ray imaging. Front. Med. 2020, 7, 427. [Google Scholar] [CrossRef]
  103. Pham, T.D. A comprehensive study on classification of COVID-19 on computed tomography with pretrained convolutional neural networks. Sci. Rep. 2020, 10, 16942. [Google Scholar] [CrossRef] [PubMed]
  104. Saood, A.; Hatem, I. COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. BMC Med. Imaging 2021, 21, 19. [Google Scholar] [CrossRef]
  105. Jain, G.; Mittal, D.; Thakur, D.; Mittal, M.K. A deep learning approach to detect COVID-19 coronavirus with X-ray images. Biocybern. Biomed. Eng. 2020, 40, 1391–1405. [Google Scholar] [CrossRef]
  106. Öztürk, Ş.; Özkaya, U.; Barstuğan, M. Classification of Coronavirus (COVID-19) from X-ray and CT images using shrunken features. Int. J. Imaging Syst. Technol. 2021, 31, 5–15. [Google Scholar] [CrossRef] [PubMed]
  107. Tartaglione, E.; Barbano, C.A.; Berzovini, C.; Calandri, M.; Grangetto, M. Unveiling COVID-19 from chest X-ray with deep learning: A hurdles race with small data. Int. J. Environ. Res. Public Health 2020, 17, 6933. [Google Scholar] [CrossRef] [PubMed]
  108. Abraham, B.; Nair, M.S. Computer-aided detection of COVID-19 from X-ray images using multi-CNN and Bayesnet classifier. Biocybern. Biomed. Eng. 2020, 40, 1436–1445. [Google Scholar] [CrossRef]
  109. Hussain, E.; Hasan, M.; Rahman, M.A.; Lee, I.; Tamanna, T.; Parvez, M.Z. CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. Chaos Solitons Fractals 2021, 142, 110495. [Google Scholar] [CrossRef]
  110. Makris, A.; Kontopoulos, I.; Tserpes, K. COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks. In Proceedings of the 11th Hellenic Conference on Artificial Intelligence, Athens, Greece, 2–4 September 2020; pp. 60–66. [Google Scholar]
  111. Chandra, T.B.; Verma, K.; Singh, B.K.; Jain, D.; Netam, S.S. Coronavirus disease (COVID-19) detection in chest X-ray images using majority voting based classifier ensemble. Expert Syst. Appl. 2021, 165, 113909. [Google Scholar] [CrossRef]
  112. Alshazly, H.; Linse, C.; Barth, E.; Martinetz, T. Explainable COVID-19 detection using chest CT scans and deep learning. Sensors 2021, 21, 455. [Google Scholar] [CrossRef]
  113. Ni, Q.; Sun, Z.Y.; Qi, L.; Chen, W.; Yang, Y.; Wang, L.; Zhang, X.; Yang, L.; Fang, Y.; Xing, Z.; et al. A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images. Eur. Radiol. 2020, 30, 6517–6527. [Google Scholar] [CrossRef] [PubMed]
  114. Rasheed, J.; Hameed, A.A.; Djeddi, C.; Jamil, A.; Al-Turjman, F. A machine learning-based framework for diagnosis of COVID-19 from chest X-ray images. Interdiscip. Sci. Comput. Life Sci. 2021, 13, 103–117. [Google Scholar] [CrossRef] [PubMed]
  115. Shah, V.; Keniya, R.; Shridharani, A.; Punjabi, M.; Shah, J.; Mehendale, N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 2021, 28, 497–505. [Google Scholar] [CrossRef] [PubMed]
  116. Kumar, R.; Khan, A.A.; Kumar, J.; Golilarz, N.A.; Zhang, S.; Ting, Y.; Zheng, C.; Wang, W.; Zakria. Blockchain-federated-learning and deep learning models for COVID-19 detection using CT imaging. IEEE Sens. J. 2021, 21, 16301–16314. [Google Scholar] [CrossRef]
  117. Li, Z.; Zhong, Z.; Li, Y.; Zhang, T.; Gao, L.; Jin, D.; Sun, Y.; Ye, X.; Yu, L.; Hu, Z.; et al. From community-acquired pneumonia to COVID-19: A deep learning—Based method for quantitative analysis of COVID-19 on thick-section CT scans. Eur. Radiol. 2020, 30, 6828–6837. [Google Scholar] [CrossRef]
  118. Basu, S.; Mitra, S.; Saha, N. Deep learning for screening COVID-19 using chest X-ray images. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; IEEE: New York, NY, USA, 2020; pp. 2521–2527. [Google Scholar]
  119. Al-Waisy, A.S.; Al-Fahdawi, S.; Mohammed, M.A.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S.; Arif, M.; Garcia-Zapirain, B. COVID-CheXNet: Hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images. Soft Comput. 2020, 1–16. [Google Scholar] [CrossRef]
  120. Asif, S.; Wenhui, Y.; Jin, H.; Jinhai, S. Classification of COVID-19 from chest X-ray images using deep convolutional neural network. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; IEEE: New York, NY, USA, 2020; pp. 426–433. [Google Scholar]
  121. Sedik, A.; Iliyasu, A.M.; El-Rahiem, A.; Abdel Samea, M.E.; Abdel-Raheem, A.; Hammad, M.; Peng, J.; El-Samie, A.; Fathi, E.; El-Latif, A.; et al. Deploying machine and deep learning models for efficient data-augmented detection of COVID-19 infections. Viruses 2020, 12, 769. [Google Scholar] [CrossRef]
  122. Lassau, N.; Ammari, S.; Chouzenoux, E.; Gortais, H.; Herent, P.; Devilder, M.; Soliman, S.; Meyrignac, O.; Talabard, M.P.; Lamarque, J.P.; et al. Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients. Nat. Commun. 2021, 12, 634. [Google Scholar] [CrossRef]
  123. Zargari Khuzani, A.; Heidari, M.; Shariati, S.A. COVID-Classifier: An automated machine learning model to assist in the diagnosis of COVID-19 infection in chest X-ray images. Sci. Rep. 2021, 11, 9887. [Google Scholar] [CrossRef]
  124. Nishio, M.; Noguchi, S.; Matsuo, H.; Murakami, T. Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: Combination of data augmentation methods. Sci. Rep. 2020, 10, 17532. [Google Scholar] [CrossRef] [PubMed]
  125. Rahimzadeh, M.; Attar, A.; Sakhaei, S.M. A fully automated deep learning-based network for detecting COVID-19 from a new and large lung CT scan dataset. Biomed. Signal Process. Control 2021, 68, 102588. [Google Scholar] [CrossRef] [PubMed]
  126. Das, A.K.; Ghosh, S.; Thunder, S.; Dutta, R.; Agarwal, S.; Chakrabarti, A. Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Anal. Appl. 2021, 24, 1111–1124. [Google Scholar] [CrossRef]
  127. Punn, N.S.; Agarwal, S. Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray images using fine-tuned deep neural networks. Appl. Intell. 2021, 51, 2689–2702. [Google Scholar] [CrossRef] [PubMed]
  128. Shankar, K.; Perumal, E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intell. Syst. 2021, 7, 1277–1293. [Google Scholar] [CrossRef] [PubMed]
  129. Sedik, A.; Hammad, M.; El-Samie, A.; Fathi, E.; Gupta, B.B.; El-Latif, A.; Ahmed, A. Efficient deep learning approach for augmented detection of Coronavirus disease. Neural Comput. Appl. 2022, 34, 11423–11440. [Google Scholar] [CrossRef]
  130. Abdel-Basset, M.; Chang, V.; Hawash, H.; Chakrabortty, R.K.; Ryan, M. FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection. Knowl.-Based Syst. 2021, 212, 106647. [Google Scholar] [CrossRef]
  131. Vaid, S.; Kalantar, R.; Bhandari, M. Deep learning COVID-19 detection bias: Accuracy through artificial intelligence. Int. Orthop. 2020, 44, 1539–1542. [Google Scholar] [CrossRef]
  132. Saha, P.; Sadi, M.S.; Islam, M.M. EMCNet: Automated COVID-19 diagnosis from X-ray images using convolutional neural network and ensemble of machine learning classifiers. Inform. Med. Unlocked 2021, 22, 100505. [Google Scholar] [CrossRef]
  133. Karim, M.; Döhmen, T.; Rebholz-Schuhmann, D.; Decker, S.; Cochez, M.; Beyan, O. DeepCOVIDExplainer: Explainable COVID-19 diagnosis based on chest X-ray images. arXiv 2020, arXiv:2004.04582. [Google Scholar]
  134. Sakib, S.; Tazrin, T.; Fouda, M.M.; Fadlullah, Z.M.; Guizani, M. DL-CRC: Deep learning-based chest radiograph classification for COVID-19 detection: A novel approach. IEEE Access 2020, 8, 171575–171589. [Google Scholar] [CrossRef] [PubMed]
  135. Zebin, T.; Rezvy, S. COVID-19 detection and disease progression visualization: Deep learning on chest X-rays for classification and coarse localization. Appl. Intell. 2021, 51, 1010–1021. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Thoracic medical imaging. (a) Example of CXR taken from [1]. (b) Example of CT scan taken from [1].
Figure 2. Taxonomy used to conduct the review.
Figure 3. Submission and publication dates.
Figure 4. Distribution of authors by country.
Figure 5. Word cloud of all abstracts.
Table 1. Details of the top-100 after each round of filtering.

Filtering Round | Average Number of Citations 1 | H-Index | Maximum Number of Citations | Minimum Number of Citations
First round | 299 | 95 | 1848 | 87
After F1 | 289 | 90 | 1848 | 80
After F2 | 251 | 81 | 1848 | 65

1 “Average number of citations” corresponds to the total sum of citations obtained by the papers divided by the number of papers.
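As a reading aid for the h-index column, the helper below computes the largest h such that h of the papers have at least h citations each; it is an illustrative sketch with made-up citation counts, not the actual data behind Table 1.

```python
# Illustrative h-index helper: the largest h such that h of the papers
# have at least h citations each. The sample counts below are made up.
def h_index(citations):
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([1848, 1523, 1456, 120, 95, 90, 3]))  # prints 6
```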
Table 2. Details about the 25 most cited papers.

Rank | Authors–Reference | Year | Citations | CPD 1 | CT/CXR | Deep/Shallow Learning | Detection/Classification 2 | Open Code
1 | Wang et al. [11] | 2020 | 1848 | 3.04 | CXR | Deep | Both | yes
2 | Ozturk et al. [12] | 2020 | 1523 | 1.89 | CXR | Deep | Both | yes
3 | Apostolopoulos et al. [13] | 2020 | 1456 | 1.75 | CXR | Deep | Both | no
4 | Narin et al. [14] | 2021 | 1275 | 2.97 | CXR | Deep | Both | yes
5 | Wang et al. [15] | 2021 | 1121 | 2.23 | CT | Deep | Detection | no
6 | Xu et al. [16] | 2020 | 1107 | 1.49 | CT | Deep | Both | no
7 | Khan et al. [17] | 2020 | 698 | 0.91 | CXR | Deep | Detection | yes
8 | Abbas et al. [18] | 2021 | 640 | 0.95 | CXR | Deep | Classification | yes
9 | Song et al. [19] | 2021 | 605 | 1.24 | CT | Deep | Both | yes
10 | Oh et al. [20] | 2020 | 500 | 0.63 | CXR | Deep | Detection | yes
11 | Ardakani et al. [21] | 2020 | 498 | 0.62 | CT | Deep | Detection | no
12 | Chen et al. [22] | 2020 | 466 | 0.76 | CT | Deep | Detection | yes
13 | Ucar and Korkmaz [23] | 2020 | 465 | 0.57 | CXR | Deep | Both | no
14 | Afshar et al. [24] | 2020 | 411 | 0.62 | CXR | Deep | Detection | yes
15 | Panwar et al. [25] | 2020 | 349 | 0.45 | CXR | Deep | Both | no
16 | Huang et al. [26] | 2020 | 341 | 0.40 | CT | Deep | None | no
17 | Toğaçar et al. [27] | 2020 | 333 | 0.42 | CXR | Shallow | Classification | yes
18 | Pereira et al. [28] | 2020 | 327 | 0.41 | CXR | Shallow | Classification | yes
19 | Wang et al. [29] | 2020 | 326 | 0.46 | CT | Both | Both | no
20 | Maghdid et al. [30] | 2021 | 311 | 0.37 | Both | Deep | Detection | no
21 | Brunese et al. [31] | 2020 | 308 | 0.41 | CXR | Deep | Both | no
22 | Loey et al. [32] | 2020 | 298 | 0.37 | CXR | Deep | Both | no
23 | Islam et al. [33] | 2020 | 292 | 0.42 | CXR | Deep | Classification | no
24 | Ismael and Şengür [34] | 2021 | 291 | 0.45 | CXR | Both | Detection | no
25 | Amyar et al. [35] | 2020 | 281 | 0.44 | CT | Deep | Classification | no

1 Average number of citations per day starting from the date when the paper was published. 2 “Detection” stands for binary classification, and “Classification” stands for multi-class.
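The CPD column can be reproduced directly from its definition; in the sketch below, the citation count for rank 1 is taken from Table 2, while the publication date is an assumption used only to illustrate the arithmetic.

```python
# CPD (average citations per day): citations divided by the number of days
# between publication and the reference date (12 July 2022 in this review).
from datetime import date

def cpd(citations, published, reference=date(2022, 7, 12)):
    return citations / (reference - published).days

# Rank-1 entry of Table 2; the publication date used here is an assumption
# adopted only to illustrate the computation.
print(round(cpd(1848, date(2020, 11, 11)), 2))  # ~3.04
```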
Table 3. Submission and publication dates.

Quarter | Submissions | Publications
2020 Q1 | 14 | 1
2020 Q2 | 46 | 25
2020 Q3 | 18 | 33
2020 Q4 | 4 | 18
2021 Q1 | 3 | 16
2021 Q2 | 1 | 6
2021 Q3 | - | -
2021 Q4 | - | -
2022 Q1 | - | 1
Table 4. CXR vs. CT scan.

Image Type | Quantity | Average Number of Citations
CT | 28 | 251
CXR | 61 | 269
Both | 11 | 155
Table 5. Datasets frequently used to compose image collections.

Dataset | Quantity | Average Number of Citations
cohen 1 | 55 | 236
kaggle (pneumonia) 2 | 29 | 206
chestX-ray8/chestX-ray14 | 22 | 280
sirm | 16 | 219
radiopaedia | 13 | 301
covid-ct | 13 | 135
rsna | 12 | 282
kaggle covid-19 3,4 | 8 | 126
kermany | 7 | 323
covidx | 6 | 520
figure1 5 | 4 | 143
sars-cov-2 ct-scan 6 | 4 | 124
Table 6. Data privacy.

Data Privacy | Quantity | Average Number of Citations
Public | 85 | 362
Private | 15 | 232
Table 7. Classifier type.

Classifier Type | Quantity | Average Number of Citations
Deep | 81 | 279
Shallow | 12 | 139
Both | 7 | 128
Table 8. Feature extraction.

Feature Type | Quantity | Average Number of Citations
Deep | 8 | 131
Handcrafted | 5 | 108
Both | 6 | 173
Table 9. Transfer learning.

Transfer Learning | Quantity | Average Number of Citations
Yes | 65 | 229
No | 32 | 305
Both | 2 | 182
Not informed | 1 | 130
Table 10. Data augmentation.

Data Augmentation | Quantity | Average Number of Citations
Yes | 48 | 243
No | 51 | 263
Both | 1 | 80
Table 11. Segmentation strategy.

Segmentation Strategy | Quantity | Average Number of Citations
None | 75 | 239
Manually | 2 | 810
Automated | 23 | 244
Table 12. XAI method.

XAI Method | Quantity | Average Number of Citations
None | 75 | 251
CAM | 4 | 186
Grad-CAM | 17 | 210
Score-CAM | 1 | 211
Saliency maps | 1 | 147
LIME | 1 | 113
Layer-wise Relevance Propagation (LRP) | 1 | 107
GSInquire | 1 | 1848
uAI Intelligent Assistant Analysis System | 1 | 75
Table 13. List of the 18 non-peer-reviewed papers excluded in F2.

# | Authors–Reference | Preprint Repository | Year 1 | Citations 2
1 | Hemdan et al. [49] | arXiv | 2020 | 809
2 | Gozes et al. [50] | arXiv | 2020 | 726
3 | Zheng et al. [51] | medRxiv | 2020 | 526
4 | Shan et al. [52] | arXiv | 2020 | 510
5 | Zhang et al. [53] | arXiv | 2020 | 365
6 | Farooq et al. [54] | arXiv | 2020 | 349
7 | Ghoshal et al. [55] | arXiv | 2020 | 320
8 | He et al. [56] | medRxiv | 2020 | 251
9 | Hall et al. [57] | arXiv | 2020 | 202
10 | Punn et al. [58] | medRxiv | 2020 | 178
11 | Khalifa et al. [59] | arXiv | 2020 | 145
12 | Mahdy et al. [60] | medRxiv | 2020 | 129
13 | Alom et al. [61] | arXiv | 2020 | 116
14 | Mangal et al. [62] | arXiv | 2020 | 114
15 | Kumar et al. [63] | medRxiv | 2020 | 107
16 | Rajinikanth et al. [64] | arXiv | 2020 | 104
17 | Gozes et al. [50] | arXiv | 2020 | 93
18 | Castiglioni et al. [65] | medRxiv | 2020 | 76

1 Considering the publication date. 2 According to Google Scholar on 12 July 2022.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
