Article

A Deep Learning-Based Diagnosis System for COVID-19 Detection and Pneumonia Screening Using CT Imaging

by Ramzi Mahmoudi, Narjes Benameur, Rania Mabrouk, Mazin Abed Mohammed, Begonya Garcia-Zapirain and Mohamed Hedi Bedoui
1. Laboratory of Technologies and Medical Imaging-LTIM-LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia
2. Gaspard-Monge Computer-Science Laboratory, Mixed Unit CNRS-UMLV-ESIEE UMR8049, Paris-Est University, BP99, ESIEE Paris Cité Descartes, 93162 Noisy-le-Grand, France
3. Laboratory of Biophysics and Medical Technologies, Higher Institute of Medical Technologies of Tunis, University of Tunis El Manar, Tunis 1068, Tunisia
4. College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Iraq
5. eVIDA Laboratory, University of Deusto, 48007 Bilbao, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 4825; https://doi.org/10.3390/app12104825
Submission received: 9 April 2022 / Revised: 5 May 2022 / Accepted: 6 May 2022 / Published: 10 May 2022
(This article belongs to the Special Issue Information Processing in Medical Imaging)

Abstract:
Background: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a global threat impacting the lives of millions of people worldwide. Automated detection of lung infections from Computed Tomography (CT) scans represents an excellent alternative; however, segmenting infected regions from CT slices encounters many challenges. Objective: To develop a diagnosis system based on deep learning techniques to detect and quantify COVID-19 infection and to screen for pneumonia using CT imaging. Method: The Contrast Limited Adaptive Histogram Equalization preprocessing method was used to remove noise and intensity inhomogeneity, and black slices were removed to crop only the region of interest containing the lungs. A U-Net architecture, based on CNN encoder and CNN decoder approaches, was then introduced for fast and precise image segmentation to obtain the lung and infection segmentation models. To better estimate the models' skill on unseen data, fourfold cross-validation was used as a resampling procedure. A three-layer CNN architecture, with additional fully connected layers followed by a Softmax layer, was used for classification. Lung and infection volumes were reconstructed to compute the volume ratio and obtain the infection rate. Results: Starting with the 20 CT scan cases, the data was divided into 70% for the training dataset and 30% for the validation dataset. Experimental results demonstrated that the proposed system achieves dice scores of 0.98 and 0.91 for the lung and infection segmentation tasks, respectively, and an accuracy of 0.98 for the classification task. Conclusions: The proposed workflow achieves good performances for the different system components while, at the same time, dealing with the reduced datasets used for training.

1. Introduction

COVID-19, provoked by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has been spreading exponentially around the world since December 2019, starting from Wuhan, China, and resulting in a global health crisis [1]. This highly infectious disease has posed one of the biggest current healthcare threats to humanity, leading the World Health Organization (WHO) to declare the outbreak a Public Health Emergency of International Concern (PHEIC) and to recognize it as a global pandemic [2,3]. As of 5 April 2021, the WHO reported 131,309,792 worldwide cases with 2,854,276 deaths and a mortality rate exceeding 2% [4]. The typical clinical presentation of COVID-19 ranges from asymptomatic cases to flu-like symptoms, fever, dry cough, tiredness, and loss of taste and smell, and even to life-threatening Acute Respiratory Distress Syndrome (ARDS) [5]. To date, no effective treatment has been proven; hence, accurate and rapid testing is pivotal to limit the spread of the virus.
The Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test is considered the gold standard for confirming infected cases because it can identify SARS-CoV-2 RNA from respiratory specimens, obtained by nasopharyngeal or oropharyngeal swabs, within 4 to 6 h [6]. However, the shortage of RT-PCR test kits is a major problem in many countries, and the sensitivity of RT-PCR screening is rather low as a result of high false-negative rates caused by several factors, including sample preparation and quality control [7,8,9]. Therefore, chest radiography imaging, either X-ray (CXR) or computed tomography (CT), is usually used as a complementary examination for the rapid diagnosis and control of the coronavirus. The use of chest CT scans, as discussed in the review [10], can counteract the low sensitivity of RT-PCR tests, improving the accuracy and speed of diagnosis; moreover, compared with chest X-ray, CT scans are generally recommended thanks to their three-dimensionality and good visibility. Various studies have inspected the imaging characteristics throughout the diagnosis, follow-up, and treatment of COVID-19 [11,12]. Patients presented similar chest radiographic abnormalities: ground-glass opacities (GGO), areas of increased attenuation in the lung with preserved bronchial and vascular markings, in the early stages, and pulmonary consolidation, when the accumulated fluid progresses to obscure bronchial and vascular regions, in the later stages. Consequently, accurate and rapid detection and localization of these tissue abnormalities are critical for early diagnosis and treatment of COVID-19.
Recently, with the rapid development of artificial intelligence, and more specifically deep learning, Convolutional Neural Networks (CNNs) have been widely used in medical image processing thanks to their powerful feature representation and extraction. Several CNN-based techniques have shown promising performance in diagnosing other diseases, such as cancer [13,14], suggesting similar potential for detecting this novel pneumonia. In biomedical image analysis, the problems can be cast as classification and segmentation, identifying abnormal features and regions of interest (ROIs) via deep learning techniques, where CNN- and U-Net-based architectures are the most promising and popular choices in the research community. In this work, a deep learning-based diagnosis system was developed to automatically detect and analyze areas suspected of COVID-19 infection in clinical CT images extracted from a publicly available chest CT scan dataset. Although training accurate and robust models requires sufficient annotated medical imaging data, only one small yet sufficient public dataset is so far available because of privacy restrictions and costly labelling; furthermore, combining datasets collected under different labelling regimes is likely problematic, given that the collected data is heavily influenced by the instructions provided to the annotators. For this reason, the proposed framework aims at obtaining good performances for the different system components while dealing with the reduced dataset used for training.
The remainder of this paper is organized as follows. In Section 2, a literature overview is presented. Our proposed method is explained in Section 3. The experimental results are introduced in Section 4, then discussed and compared with recent works in Section 5. Finally, Section 6 concludes the paper, highlights limitations, and proposes future improvements.

2. Related Works

Recently, medical imaging processing techniques have been widely used to monitor several diseases. The progress in this field has been reinforced by the introduction of Artificial Intelligence technologies that became a popular approach for detection and segmentation of many medical problems thanks to their powerful feature representation [15].
In this context, many approaches have been proposed in the last few months for the detection and segmentation of COVID-19 lung infection using chest X-rays and CT scans [16], confirming that a carefully designed image examination procedure plays a vital role in reducing the diagnostic burden. The proposed methods can be classified into three categories: (1) classification techniques; (2) infection region segmentation techniques; and (3) diagnosis systems that address both tasks. Table 1 presents representative methods for each category, along with the deep learning architectures and image modalities they used.
Wang et al. [17] introduced COVID-Net, a densely connected deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray images, which achieved a 93.3% test accuracy. Ahuja et al. [19] proposed a three-phase detection model using deep transfer learning to improve detection accuracy, showing that the ResNet18 architecture attained a better classification accuracy (99.4%) than the other considered architectures. Fan et al. [20] developed a new deep network called "Inf-Net" to automatically identify infected regions from chest CT slices; the algorithm is based on a parallel partial decoder able to aggregate high-level features and generate a global map. Using a small dataset of 100 labeled CT images, it achieved a dice score of 0.682; a semi-supervised segmentation system was then introduced to alleviate the shortage of labeled data, achieving a dice score of 0.739. Similarly, Shan et al. [21] proposed a deep learning-based system for automatic segmentation and quantification of infection regions from chest CT scans; their modified 3D CNN, combining V-Net [25] with a bottleneck structure, achieved a Dice coefficient of 91.6%. Elzeki et al. [22] proposed a novel approach that combines a CNN with VGG19 to detect COVID-19 features in chest X-ray images; using a dataset of eighty-seven chest X-ray images associated with twenty-five cases, they obtained an accuracy of 96.93%, a sensitivity of 57.14%, and a specificity of 99.2%. A joint classification and segmentation system was proposed by Wu et al. [23] for real-time diagnosis of COVID-19: the classification model is a Res2Net-based [26] classifier that achieved an average sensitivity of 95.0% and a specificity of 93.0%, while the segmentation model is based on an encoder with a VGG-16 backbone [27] augmented with an Enhanced Feature Module, and a decoder based on an Attentive Feature Fusion strategy; the model segmented the infected regions with a dice score of 78.3%. Gozes et al. [24] presented a system that detects cases suspected of COVID-19 from CT images using a ResNet-50 2D deep CNN architecture [28] with 94% sensitivity and 98% specificity; for cases classified as positive, an abnormality localization module is executed to extract, using the Grad-CAM technique [29], the network activation maps that contributed most to the decision, and an infection analysis is further performed using commercial off-the-shelf software to detect nodules and small opacities, reinforcing the infection localization.

3. Method

In this section, our proposed approach is presented in detail. We begin with the architecture design methodology behind the proposed approach, then describe the dataset used for training our models, provide details of the proposed network architectures along with their different components, and point out the training strategy and some implementation details. Finally, we present the real runtime workflow of our proposed diagnosis system.

3.1. CNN and U-Net

Convolutional Neural Networks (CNNs) [30,31] are a powerful tool that has already demonstrated success in classification tasks, where the output for an image is a single class label. For almost any computer vision problem, CNN-based approaches outperform other techniques and may even surpass human experts in the corresponding domain.
However, several visual tasks, particularly in biomedical image processing, where reliable image segmentation is crucial, demand not only determining whether a disease is present, but also delimiting the abnormal regions; the desired output must therefore include localization, meaning that a class label should be assigned to each pixel. Over the last few years, different methods improving on traditional deep learning approaches have been developed to address the problem of creating CNNs that produce a segmentation map for a whole input image in a single forward pass [32,33].
One of the most acknowledged state-of-the-art deep learning methods is the Fully Convolutional Network (FCN) [34]. Its key idea is to use the CNN as an effective feature extractor, replacing the fully connected layers with convolutional ones that output spatial feature maps, which are then up-sampled to produce a dense pixel-wise output instead of classification scores.
The FCN's topology consists of two parts: a down-sampling path in charge of capturing semantic information and an up-sampling path in charge of recovering spatial information, with skip connections mitigating the information loss caused by pooling or down-sampling layers. U-Net is another popular segmentation model, originally proposed by Ronneberger et al. for biomedical image processing and the winner of the ISBI cell tracking challenge in 2015 by a large margin [35]. This architecture has proved itself in binary image segmentation competitions in domains such as satellite image analysis [36], medical image analysis [13], and others [37].
The encoder part is responsible for capturing context and follows the typical architecture of a CNN, alternating convolution and pooling operations. It is composed of five blocks, each containing two convolutional layers with a ReLU (Rectified Linear Unit) activation function that provides nonlinearity to the network; the convolutions produce feature maps, and a max-pooling layer down-samples those feature maps, reducing their spatial size while increasing their number per layer, so that the architecture can effectively learn complex structures.
The decoder part is responsible for decoding the information, thus enabling precise localization, by using transposed convolution (deconvolution) operations to finally produce the segmentation mask of the image. It is also made of five blocks, each composed of two convolutional layers that also use the ReLU activation function; an up-sampling layer that restores the feature maps toward their original size by reverting the max-pooling operation; and a skip connection that combines the up-sampled features with the high-resolution encoded features from the encoder part.
The general architecture of a U-Net model is illustrated in Figure 1. It is made up of two major sections, namely, the encoder and decoder.

3.2. Dataset

In spite of the growing number of COVID-19 infected patients, along with their volumetric CT scans, labeled CT scans are still only available in a limited capacity, and publicly accessible CT scan datasets are very limited. For this reason, we chose to use the lung CT scan dataset of Ma Jun et al. [38] to train and evaluate our proposed network; to the best of our knowledge, it is the first publicly available data-efficient learning benchmark for medical image segmentation.
The dataset was collected from the Coronacases Initiative [39] and Radiopaedia [40] and was manually annotated in the work of Ma Jun et al. [41]. It comprises twenty axial volumetric CT scans of confirmed COVID-19 subjects, for a total of 3,138 lung CT images, labeled, segmented, and verified by expert radiologists, together with the corresponding lung masks, infection masks, and a superposition of the two masks. Table 2 gives an overview of the used database.

3.3. Network Architecture

In this section, our proposed approach is described; it is summarized in Figure 2. As the figure shows, the architecture consists of four main processes. Starting with the 20 CT scan cases, we partitioned the data into 70% for the training dataset and 30% for the validation dataset. First, we started with the lung segmentation phase: we used the Contrast Limited Adaptive Histogram Equalization preprocessing method to remove noise and intensity inhomogeneity, then removed all the black slices to crop only the region of interest containing the lungs.
A U-Net architecture, based on CNN encoder and CNN decoder approaches, is then introduced for fast and precise image segmentation to obtain the lung segmentation model. We proceeded in exactly the same way to obtain the infection segmentation model in the second phase; to further improve the model and better estimate its skill on unseen data, we used fourfold cross-validation as a resampling procedure for the evaluation, as sketched below.
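To make the resampling procedure concrete, the following is a minimal sketch of fourfold cross-validation using scikit-learn's KFold. The names slices, masks, and build_model are illustrative placeholders rather than the authors' code, and the sketch assumes each model is compiled with a Dice metric as its single tracked metric.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(slices, masks, build_model, n_splits=4):
    """Train one model per fold and collect the validation Dice scores."""
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for fold, (train_idx, val_idx) in enumerate(kfold.split(slices), start=1):
        model = build_model()  # fresh weights for every fold
        model.fit(slices[train_idx], masks[train_idx],
                  validation_data=(slices[val_idx], masks[val_idx]),
                  batch_size=32, epochs=16, verbose=0)
        # assumes compile(..., metrics=[dice_coefficient]) so that
        # evaluate() returns [loss, dice]
        _, dice = model.evaluate(slices[val_idx], masks[val_idx], verbose=0)
        scores.append(dice)
        print(f"fold {fold}: dice = {dice:.4f}")
    return np.mean(scores)
```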
In the third phase, we augmented the data, which was then fed to our proposed three-layer CNN architecture, with additional fully connected layers followed by a Softmax layer, for the COVID-19 classification. The last phase is the volume reconstruction of the twenty cases: we reconstructed the lung volumes first, then the infection volumes for each case, in order to compute the volume ratio and obtain, at last, the corresponding infection rates. The components of the four processes are described in detail in the following sections.

3.4. Data Preprocessing

Since medical images suffer from contrast problems such as noise and intensity inhomogeneity, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method, proposed in [42], was used to enhance the contrast of the obtained images. It is a variant of adaptive histogram equalization (AHE); its main idea is to find the mapping for each pixel based on its local (neighborhood) grayscale distribution, using a transformation function that limits the contrast amplification in highly concentrated regions. CLAHE has demonstrated good results on medical images and has shown its effectiveness in assigning displayed intensity levels in chest CT scans in particular [43,44].
Furthermore, the CT scans contain many black slices and regions of no interest, such as the diaphragm below the lungs, which waste memory and computation in the network. For this reason, we chose to crop only the region of interest (ROI) that contains the lungs: the contour (largest closed boundary) with the largest area covers the lungs, and we concatenated the 2nd and 3rd largest contours, one for each lung, to obtain the maximum ROI at the same resolution. In addition, when cropping a CT scan, we made sure that its corresponding segmentation map was cropped with the same limits; otherwise, the pixel-level labels would be misaligned. A sketch of this step is given below, and Figure 3 illustrates the impact of applying the CLAHE filter on an input chest CT scan.
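As an illustration of this preprocessing step, the following is a hedged sketch using OpenCV. The CLAHE clip limit, tile size, and Otsu binarization are assumptions, since the paper does not report these parameters.

```python
import cv2
import numpy as np

def preprocess(ct_slice, seg_mask=None):
    """CLAHE enhancement followed by lung-ROI cropping.

    ct_slice is assumed to be an 8-bit grayscale slice; clipLimit and
    tileGridSize are illustrative values, not reported in the paper.
    """
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(ct_slice)

    # Binarize (Otsu) and extract external contours sorted by area.
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)

    # Concatenate the 2nd and 3rd largest contours (the two lungs,
    # following the description above) and take their bounding box.
    lungs = np.vstack([contours[1], contours[2]])
    x, y, w, h = cv2.boundingRect(lungs)

    # Crop the slice and its segmentation map with the same limits so
    # that the pixel-level labels stay aligned.
    cropped = enhanced[y:y + h, x:x + w]
    cropped_mask = seg_mask[y:y + h, x:x + w] if seg_mask is not None else None
    return cropped, cropped_mask
```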
Figures 4 and 5 below illustrate a chest CT scan with its corresponding lung and infection masks, respectively, before and after cropping and applying the CLAHE filter. The impact of the filter on the sharpness of the image is clearly identifiable.

3.5. Data Augmentation

To overcome the limited dataset size and avoid overfitting, we augmented our data by randomly applying typical transformation techniques [45], including rotations, horizontal and vertical translations and flips, shearing, and scaling, as sketched below.
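A minimal sketch of these random augmentations using the Keras ImageDataGenerator follows; the parameter ranges are assumptions, since the paper lists only the transformation types, and the image and mask arrays here are random placeholders.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Parameter ranges below are assumptions; the paper only lists the
# transformation types (rotation, translation, flips, shear, scaling).
augmenter = ImageDataGenerator(
    rotation_range=15,       # random rotations
    width_shift_range=0.1,   # horizontal translations
    height_shift_range=0.1,  # vertical translations
    horizontal_flip=True,
    vertical_flip=True,
    shear_range=0.1,         # shearing
    zoom_range=0.1,          # scaling
    fill_mode="nearest",
)

# Placeholder arrays standing in for the preprocessed slices and masks.
images = np.random.rand(8, 256, 256, 1).astype("float32")
masks = np.random.randint(0, 2, (8, 256, 256, 1)).astype("float32")

# Using the same seed applies identical random transforms to slices
# and masks, keeping the pixel-level labels aligned.
seed = 1
image_flow = augmenter.flow(images, batch_size=4, seed=seed)
mask_flow = augmenter.flow(masks, batch_size=4, seed=seed)
```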

3.6. Models Description

3.6.1. Segmentation Models

The proposed network performs both lung and COVID-19 infection segmentation; the two models were trained separately. The standard U-Net was implemented using the Keras library with the TensorFlow backend and consists of a five-block encoding path and a symmetric five-block decoding path. At each level of the encoder, a convolution operation, a ReLU activation function, and a batch normalization operation are applied twice consecutively, followed by a max-pooling operation that reduces the spatial size of the convolved features and helps counter overfitting, before moving to the next level. The decoder recovers the original input size by applying the same sequence of operations, replacing the max-pooling operation with a transposed convolution as the up-sampling operation at every level; additionally, the corresponding feature map from the encoder is concatenated to each decoder block's input. A 1 × 1 convolution with a sigmoid activation function is finally added to generate the binary prediction map. The network uses thirty-two feature maps at its highest resolution and 512 at its lowest. The convolutions use a kernel size of 3 × 3, and the transposed convolutions use a kernel size of 2 × 2 with a stride of 2 × 2. The parameters were updated using the Adam optimizer with a learning rate of 0.0005. To further optimize the training procedure, we used a cosine annealing scheduler, implemented as a custom callback, in which the learning rate starts at 0.0005, drops rapidly to 0.0001, and is then increased again to the maximum. Both networks were trained with a batch size of thirty-two, for 80 and 16 epochs for the lung and infection segmentation, respectively.
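The description above maps naturally onto a compact Keras implementation. The following sketch reflects the stated design (five convolutional blocks on the encoding path with 32 to 512 feature maps, 3 × 3 convolutions with batch normalization, 2 × 2 transposed convolutions with stride 2, skip connections, a 1 × 1 sigmoid output, a Dice loss, Adam at 0.0005, and a cosine annealing callback); the input size, the Dice smoothing term, and the scheduler's cycle length are assumptions not given in the paper.

```python
import math
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def conv_block(x, filters):
    """(3x3 convolution -> ReLU -> batch normalization), applied twice."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    return x

def build_unet(input_shape=(256, 256, 1)):  # input size is an assumption
    inputs = layers.Input(input_shape)
    x, skips = inputs, []
    for filters in (32, 64, 128, 256):      # encoding path
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 512)                  # 512 maps at the lowest resolution
    for filters, skip in zip((256, 128, 64, 32), reversed(skips)):  # decoding path
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])  # skip connection from the encoder
        x = conv_block(x, filters)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # binary mask
    return models.Model(inputs, outputs)

def dice_coefficient(y_true, y_pred, smooth=1.0):  # smoothing term assumed
    y_true, y_pred = tf.reshape(y_true, [-1]), tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

def cosine_annealing(epoch, lr=None, cycle=20, lr_max=5e-4, lr_min=1e-4):
    """Cosine schedule from 0.0005 down to 0.0001, restarting each cycle;
    the cycle length is an assumption."""
    t = epoch % cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle))

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(5e-4),
              loss=dice_loss, metrics=[dice_coefficient])
scheduler = callbacks.LearningRateScheduler(cosine_annealing)
# model.fit(..., batch_size=32, epochs=80, callbacks=[scheduler])
```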

3.6.2. Classification Models

For the classification task, we implemented a three-layer CNN in which each layer is composed of two convolution operations, each followed by batch normalization, and a max-pooling operation. A dense layer using the ReLU activation function with a dropout of 0.4 was then introduced, followed by a final dense layer for classification using the Softmax activation function and the binary cross-entropy loss function. The network uses sixteen feature maps at its highest resolution and sixty-four at its lowest, and the convolutions use a kernel size of 3 × 3. The network was trained with a batch size of thirty-two for twenty-five epochs, and the parameters were updated using the Adam optimizer with a learning rate of 0.0005.
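A hedged Keras sketch of this classifier follows. The input size, the intermediate filter count (32), and the dense-layer width are assumptions; the paper specifies only the 16 and 64 feature-map extremes, the dropout of 0.4, and the Softmax/binary cross-entropy pairing.

```python
from tensorflow.keras import layers, models, optimizers

def build_classifier(input_shape=(256, 256, 1), num_classes=2):
    """Three-layer CNN; input size and dense width are assumptions."""
    inputs = layers.Input(input_shape)
    x = inputs
    for filters in (16, 32, 64):  # 16 maps at the highest resolution, 64 at the lowest
        for _ in range(2):        # two convolutions per layer
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.4)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # infected / healthy
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(5e-4),
                  loss="binary_crossentropy",  # as described above
                  metrics=["accuracy"])
    return model

# classifier = build_classifier()
# classifier.fit(x_train, y_train, batch_size=32, epochs=25)
```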

3.7. Volume Reconstruction

The last phase is the volume reconstruction of the studied cases: we reconstructed the lung volumes first, then the infection volumes, in order to calculate the volume ratio of each case and obtain the corresponding infection rate. For this step, we used the Thermo Scientific Amira Software platform [46], a powerful multifaceted 2D–5D platform for visualizing, manipulating, and understanding data from many image modalities, including CT and MRI. Table 3 below reports the volume ratios calculated from the reconstructed lung and infection volumes of the 20 patients in the used dataset, consistent with the observation that the proportion of infection in the lungs ranges from 0.01% to 59% [41].
As the table shows, the used dataset contains six mildly infected patients with a volume ratio below 0.02, eight moderately infected patients with a volume ratio between 0.02 and 0.15, and six severely infected patients with a volume ratio above 0.15, which shows that we used a well-balanced dataset; a sketch of the ratio computation follows. Figure 6a–c below illustrate the volume reconstruction of the lungs and the corresponding infection regions for the three most representative patients, respectively. The volume reconstruction for all patients is given in Figure 7.
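While the 3D reconstruction itself was performed semi-automatically in Amira, the ratio and severity computation reduces to voxel counting, as the following minimal sketch illustrates (the masks here stand in for the reconstructed volumes; the severity thresholds follow the breakdown above).

```python
import numpy as np

def infection_rate(lung_volume, infection_volume):
    """Compute the volume ratio from reconstructed binary masks.

    Both arguments are 3D boolean arrays (slices x height x width);
    voxel counts stand in for the Amira-reconstructed volumes.
    """
    ratio = infection_volume.sum() / lung_volume.sum()
    # Severity bins follow the breakdown given above.
    if ratio < 0.02:
        severity = "Mild"
    elif ratio <= 0.15:
        severity = "Moderate"
    else:
        severity = "Severe"
    return ratio, severity
```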

3.8. Real Runtime Flowchart

Figure 8 below presents the real runtime flowchart of our proposed system. For input CT scan slices, the lung segmentation model is first executed to output the lung masks, which are superposed with the input slices to extract the regions of interest. Then, depending on the user's choice, either the classification model is executed, for a rapid diagnosis, or the infection segmentation model is executed, for a full diagnosis. In the first case, the classification model uses the extracted ROIs to verify whether the patient is infected. In the latter case, the infection segmentation model uses the extracted ROIs to obtain the infection masks; lung and infection volume reconstruction is then performed to calculate the infection rate as the ratio of the two volumes, which finally gives an idea of the infection severity. A sketch of this flow is given below.
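The following sketch strings the trained models together along the Figure 8 flow. It assumes the models and the infection_rate helper sketched in Section 3.7; the 0.5 lung threshold and the convention that class index 1 means "infected" are assumptions, while the 0.40 infection threshold is the best fold threshold reported in Section 4.2.

```python
import numpy as np

def diagnose(ct_slices, lung_model, classifier, infection_model, mode="rapid"):
    """Run the Figure 8 flow on a stack of preprocessed CT slices."""
    lung_masks = lung_model.predict(ct_slices) > 0.5   # lung segmentation
    rois = ct_slices * lung_masks                      # ROI extraction
    if mode == "rapid":                                # classification branch
        probs = classifier.predict(rois)
        return {"infected": bool(probs[:, 1].mean() > 0.5)}
    infection_masks = infection_model.predict(rois) > 0.40  # full diagnosis branch
    ratio, severity = infection_rate(lung_masks, infection_masks)
    return {"infection_rate": float(ratio), "severity": severity}
```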

4. Experimental Results

In this section, we demonstrate the effectiveness of our proposed method through a detailed experimental analysis presenting both quantitative and qualitative results, as well as a comparison with other state-of-the-art methods.

4.1. Evaluation Metrics

Several metrics are used by the research community in medical image analysis to measure the performance of classification and segmentation models, including precision, recall, the dice coefficient, and intersection over union (IoU). To calculate these metrics, the following four measures are required (a computation sketch of the metrics follows the list):
  • True Positive (TP): represents the number of pixels being correctly identified in the segmentation tasks and the number of correctly predicted infected CTs in the classification task.
  • True Negative (TN): denotes the number of non-lung/infection pixels being correctly identified as non-lung/infection pixels in the segmentation tasks and the number of correctly predicted healthy CTs in the classification task.
  • False Positive (FP): represents the number of non-lung/infection pixels being wrongly classified as lung/infection pixels in the segmentation tasks and the number of mistakenly predicted infected CTs in the classification task.
  • False Negative (FN): denotes the lung/infection pixels being wrongly classified as non-lung/infection pixels in the segmentation tasks and the number of mistakenly predicted healthy CTs in the classification task.
  • Accuracy: this metric measures the ratio of correctly identified predictions to all predictions, as defined in Equation (1):
Accuracy = (TP + TN) / (TP + TN + FP + FN)  (1)
  • Precision: this metric is a measure of exactness, calculated as the ratio of true positive predictions to the number of predicted positives, as defined in Equation (2):
Precision = TP / (TP + FP)  (2)
  • Recall: this metric is a measure of completeness, calculated as the ratio of true positive predictions to the number of actual positives, as defined in Equation (3):
Recall = TP / (TP + FN)  (3)
  • Area Under the Receiver Operating Characteristic Curve (AUROC): this metric is a measure of separability that summarizes the ROC curve which plots the rate of true positive predictions versus the false positive ones for all possible thresholds.
  • Area Under the Precision Recall Curve (AUPRC): this metric is another measure of separability that summarizes the PR curve which plots the precision versus the recall for all thresholds.
  • Dice Coefficient (also known as Dice score): the overlap ratio between the prediction and the ground truth, giving more weight to the intersection between the two. Its value ranges between 0 and 1, and the higher the value, the better the segmentation result. It is defined in Equation (4):
Dice = 2TP / (2TP + FP + FN)  (4)
  • Intersection over Union, IoU (also known as the Jaccard index): another popular metric that measures the overlap between the prediction and the ground truth (note that it is the Dice coefficient, not the IoU, that is equivalent to the F1 score). It is defined in Equation (5):
IoU = TP / (TP + FP + FN)  (5)
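As referenced above, the following short NumPy sketch computes all of these overlap metrics from a predicted and a ground-truth binary mask; it is an illustration, not the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Compute the measures and metrics above from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),   # Equation (1)
        "precision": tp / (tp + fp),                   # Equation (2)
        "recall": tp / (tp + fn),                      # Equation (3)
        "dice": 2 * tp / (2 * tp + fp + fn),           # Equation (4)
        "iou": tp / (tp + fp + fn),                    # Equation (5)
    }
```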

4.2. Quantitative Evaluation

4.2.1. Learning Phase

We start by studying the learning phase. Figure 9 shows the Dice Coefficient and Dice Loss training curves of the lung segmentation model.
As the figure shows, the plots of training and validation loss decreased to a point of stability which confirms the good performance of the proposed model on unseen data. Figure 10 demonstrates the Dice Coefficient and the Dice Loss Training curves of the infection segmentation model respectively for the different folds.
The curves clearly confirm the good performance of the proposed model on unseen data in the different folds, as the training and validation loss decreased to a point of stability. Figure 11 demonstrates the Loss Training curve of the classification model.

4.2.2. Validation Phase

The quantitative evaluation demonstrated the good performance of our proposed system as justified by the reached values of the evaluation metrics for the three tasks in Table 4 below. The attained dice coefficient values are 0.98 for the lung segmentation model and 0.91 for infection segmentation model. Additionally, we achieved an accuracy of 0.95, 0.94, and 0.98 for lung segmentation, infection segmentation, and classification models, respectively.
Figure 12 below displays the dice obtained on each of the four folds for different thresholds.
As the figure shows, the maximum validation dice, 0.927999, was obtained on the second split; hence, we consider 0.40 to be the best threshold to use later for the calculation of the other metrics. The mean of the dice values over all folds is 0.9189. The results of the fourfold cross-validation for the different metrics are summarized in Table 5.
The model achieved a dice coefficient of 0.91, an IoU of 0.85, a precision 0.92, a recall of 0.90, an AUROC of 0.95, and an AUPRC of 0.91. The high AUROC and AUPRC values demonstrate that our model succeeded in handling and distinguishing both infected regions and non-infected ones well.

4.3. Qualitative Evaluation

The qualitative evaluation proved that our predicted masks are notably close to the ground-truth masks, as shown in Figure 13 and Figure 14 below, which display four sample images from the test set for the lung segmentation task and infection segmentation task, respectively.

5. Discussion

As previously mentioned, many deep learning algorithms have been used for the diagnosis of COVID-19 complications using CT or X-ray images and have achieved good classification performance; however, most of the results are obtained without knowledge of the clinical characteristics of COVID-19. Moreover, the analysis of results is often reported only as metric values, such as accuracy and AUC, without showing clinical images that could be validated by radiologists. These limitations constrain the translation of deep learning models into clinical practice. In this study, we propose a new 3D visualization of COVID-19 complications, which facilitates their interpretation by radiologists. An accurate training model requires sufficient annotated medical imaging data; on the other hand, large quantities of information from multiple sources are likely to overfit training and lose the main clinical features, and the collected data is heavily influenced by the instructions provided to the annotators. For this reason, the proposed framework achieved good performances for the different system components while, at the same time, dealing with the reduced dataset used for training.
For further evaluation, we quantitatively compared the performance of the proposed lung and infection segmentation models with other state-of-the-art methods that used the same publicly available dataset for training and testing, which makes the comparison meaningful. We considered the works of Ma et al. [41], who proposed a U-Net-based deep learning system; Müller et al. [47], who implemented a standard 3D U-Net architecture; Omar Alirr [48], where two cascaded deep FCNs are connected sequentially to segment the lung and then the COVID-19 infection areas, using an adjusted U-Net architecture as a backbone; and Punn and Agarwal [49], who developed a cascaded residual attention inception U-Net (RAIU-Net) model to generate lung contour maps and COVID-19 infected regions. Quantitative results for the lung and infection segmentation models, using the dice coefficient as the reference metric, are reported in Table 6 below.
The results prove the effectiveness of our models that outperformed the other listed methods by achieving the highest Dice Coefficient value for both lungs and infection segmentation models.

6. Conclusions

In this study, the development and deployment of a deep learning-based diagnosis system to assist novel coronavirus pneumonia screening using CT imaging was proposed. The segmentation subsystem, developed using the popular U-Net architecture as the main framework, highlights the position of the infected areas, whereas the classification subsystem and the semi-automated 3D reconstruction subsystem give an idea of the probability of infection with the virus, along with the infection rate in case of positive findings. The quantitative and qualitative evaluation results demonstrated the effectiveness of our system in accurately localizing and quantifying the infection regions from CT scans. They also showed that the developed models outperform other recently proposed approaches when evaluated with standard benchmark performance metrics: a dice score of 0.98 and 0.91 for the lung and infection segmentation tasks, respectively, and an accuracy of 0.98 for the classification task. The main limitation of this study is the small, though sufficient, amount of training data. Confidentiality restrictions and the high cost of labeling partly explain the absence of a large number of COVID-19 clinical CT images; moreover, combining datasets collected under different labeling regimes is often problematic because, in general, the collected data is heavily influenced by the instructions provided to annotators. For this reason, a workflow aimed at obtaining good performances for the different system components while dealing with the reduced datasets used for training was proposed. For future work, we intend to fully automate the volume reconstruction module to enable a real-time 3D screening result.

Author Contributions

Conceptualization, R.M. (Ramzi Mahmoudi); methodology, R.M. (Ramzi Mahmoudi) and N.B.; software, R.M. (Rania Mabrouk); formal analysis, M.A.M. and B.G.-Z.; writing—original draft preparation, R.M. (Ramzi Mahmoudi) and N.B.; writing—review and editing, M.H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by The Francophone University Agency (AUF) COVID-19.1 PANDEMIC SPECIAL PLAN. Project ID 469, Title: Development and implementation of a decision support system based on artificial intelligence applied to medical imaging to improve the management of patients with SARS-CoV2 (SPECTRUM) (2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the lung CTS dataset of Ma Jun et al. Further details can be found at: https://coronacases.org/ (accessed on 15 December 2021) and https://radiopaedia.org/ (accessed on 15 December 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, C.; Horby, P.W.; Hayden, F.G.; Gao, G.F. A novel coronavirus outbreak of global health concern. Lancet 2020, 395, 470–473. [Google Scholar] [CrossRef] [Green Version]
  2. Wang, W.; Xu, Y.; Gao, R.; Lu, R.; Han, K.; Wu, G.; Tan, W. Detection of SARS-CoV-2 in Different Types of Clinical Specimens. JAMA 2020, 323, 1843–1844. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. World Health Organization. WHO Director-General’s Opening Remarks at the Media Briefing on COVID-19. 2020. Available online: https://www.who.int/director-general/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19 (accessed on 11 March 2020).
  4. World Health Organization. WHO Coronavirus (COVID-19) Dashboard. WHO (COVID-19) Homepage. Available online: https://covid19.who.int/table (accessed on 1 May 2022).
  5. Benameur, N.; Mahmoudi, R.; Zaid, S.; Arous, Y.; Hmida, B.; Bedoui, M.H. SARS-CoV-2 diagnosis using medical imaging techniques and artificial intelligence: A review. Clin. Imaging 2021, 76, 6–14. [Google Scholar] [CrossRef] [PubMed]
  6. World Health Organization. Statement on the Second Meeting of the International Health Regulations. 2005. Available online: https://www.who.int/news/item/30-01-2020-statement-on-the-second-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-outbreak-of-novel-coronavirus-(2019-ncov) (accessed on 1 May 2022).
  7. Tingbo, L.; Yu, L. Handbook of COVID-19 Prevention and Treatment. Tools, Guidelines and Methodologies. 2020. Available online: https://covid-19.alibabacloud.com/ (accessed on 15 December 2021).
  8. Fang, Y.; Zhang, H.; Xie, J.; Lin, M.; Ying, L.; Pang, P.; Ji, W. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020, 296, 2020200432. [Google Scholar] [CrossRef] [PubMed]
  9. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology 2020, 296, E32–E40. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, J.; Liao, H.; Chen, F.; Yang, F.; et al. The Role of Imaging in the Detection and Management of COVID-19: A Review. IEEE Rev. Biomed. Eng. 2020, 14, 16–29. [Google Scholar] [CrossRef]
  11. Bernheim, A.; Mei, X.; Huang, M.; Yang, Y.; Fayad, Z.A.; Zhang, N.; Diao, K.; Lin, B.; Zhu, X.; Li, K.; et al. Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection. Radiology 2020, 295, 200463. [Google Scholar] [CrossRef] [Green Version]
  12. E Kaufman, A.; Naidu, S.; Ramachandran, S.; Kaufman, D.S.; A Fayad, Z.; Mani, V. Review of radiographic findings in COVID-19. World J. Radiol. 2020, 12, 142–155. [Google Scholar] [CrossRef]
  13. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  14. Hu, Z.; Tang, J.; Wang, Z.; Zhang, K.; Zhang, L.; Sun, Q. Deep learning for image-based cancer detection and diagnosis—A survey. Pattern Recognit. 2018, 83, 134–149. [Google Scholar] [CrossRef]
  15. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [Google Scholar] [CrossRef] [PubMed]
  16. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2020, 14, 4–15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
  18. ElAraby, M.E.; Elzeki, O.M.; Shams, M.Y.; Mahmoud, A.; Salem, H. A novel Gray-Scale spatial exploitation learning Net for COVID-19 by crawling Internet resources. Biomed. Signal Process. Control 2021, 73, 103441. [Google Scholar] [CrossRef]
  19. Ahuja, S.; Panigrahi, B.K.; Dey, N.; Rajinikanth, V.; Gandhi, T.K. Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices. Appl. Intell. 2020, 51, 571–585. [Google Scholar] [CrossRef]
  20. Fan, D.-P.; Zhou, T.; Ji, G.-P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637. [Google Scholar] [CrossRef]
  21. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Lung Infection Quantification of COVID-19 in CT Images with Deep Learning. arXiv 2020, arXiv:2003.04655. [Google Scholar]
  22. Elzeki, O.M.; Elfattah, M.A.; Salem, H.; Hassanien, A.E.; Shams, M. A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset. PeerJ Comput. Sci. 2021, 7, e364. [Google Scholar] [CrossRef]
  23. Wu, Y.-H.; Gao, S.-H.; Mei, J.; Xu, J.; Fan, D.-P.; Zhang, R.-G.; Cheng, M.-M. JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation. IEEE Trans. Image Process. 2021, 30, 3113–3126. [Google Scholar] [CrossRef]
  24. Gozes, O.; Frid-Adar, M.; Greenspan, H.; Browning, P.D.; Zhang, H.; Ji, W.; Bernheim, A.; Siegel, E. Rapid AI Development Cycle for the Coronavirus (COVID-19) Pandemic: Initial Results for Automated Detection & Patient Monitoring Using Deep Learning CT Image Analysis. arXiv 2020, arXiv:2003.05037. [Google Scholar]
  25. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  26. Gao, S.-H.; Cheng, M.-M.; Zhao, K.; Zhang, X.-Y.; Yang, M.-H.; Torr, P.H. Res2Net: A New Multi-Scale Backbone Architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  29. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef] [Green Version]
  30. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Ankile, L.L.; Heggland, M.F.; Krange, K. Deep Convolutional Neural Networks: A survey of the Foundations, Selected Improvements, and Some Current Applications. arXiv 2020, arXiv:2011.12960. [Google Scholar]
  32. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. arXiv 2015, arXiv:1512.07108. [Google Scholar] [CrossRef] [Green Version]
  33. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef] [Green Version]
  34. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. arXiv 2015, arXiv:1411.4038. [Google Scholar] [CrossRef]
  35. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  36. Iglovikov, V.; Mushinskiy, S.; Osin, V. Satellite Imagery Feature Detection Using Deep Convolutional Neural Network: A Kaggle Competition. arXiv 2017, arXiv:1706.06169. [Google Scholar]
  37. Iglovikov, V.; Shvets, A. TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation. arXiv 2018, arXiv:1801.05746. [Google Scholar]
  38. Ma, J.; Ge, C.; Wang, Y.; An, X.; Gao, J.; Yu, Z.; Zhang, M.; Liu, X.; Deng, X.; Cao, S. COVID-19 CT Lung and Infection Segmentation Dataset. Zenodo 2020, 20. [Google Scholar] [CrossRef]
  39. RAIOSS.com. Coronacases. Available online: https://coronacases.org/ (accessed on 1 May 2022).
  40. Radiopaedia Pty Ltd. ACN 133 562 722. Available online: https://radiopaedia.org/ (accessed on 1 May 2022).
  41. Ma, J.; Wang, Y.; An, X.; Ge, C.; Yu, Z.; Chen, J.; Zhu, Q.; Dong, G.; He, J.; He, Z.; et al. Towards Data-Efficient Learning: A Benchmark for COVID-19 CT Lung and Infection Segmentation. arXiv 2020, arXiv:2004.12537. [Google Scholar] [CrossRef] [PubMed]
  42. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  43. Zimmerman, J.; Pizer, S.; Staab, E.; Perry, J.; McCartney, W.; Brenton, B. An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement. IEEE Trans. Med. Imaging 1988, 7, 304–312. [Google Scholar] [CrossRef] [Green Version]
  44. Pizer, S.; Johnston, R.; Ericksen, J.; Yankaskas, B.; Muller, K. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; pp. 337–345. [Google Scholar] [CrossRef]
  45. Hussain, Z.; Gimenez, F.; Yi, D.; Rubin, D. Differential Data Augmentation Techniques for Medical Imaging Classification Tasks. AMIA Annu. Symp. Proc. 2017, 2017, 979–984. [Google Scholar]
  46. ThermoFisher. AMIRA Software. Available online: https://www.thermofisher.com/tn/en/home/electron-microscopy/products/software-em-3d-vis/amira-software.html (accessed on 15 December 2021).
  47. Müller, D.; Rey, I.S.; Kramer, F. Automated Chest CT Image Segmentation of COVID-19 Lung Infection based on 3D U-Net. arXiv 2020, arXiv:2007.04774. [Google Scholar]
  48. Alirr, O.I. Automatic Deep Learning System for COVID-19 Infection Quantification in Chest CT. Available online: https://arxiv.org/ftp/arxiv/papers/2010/2010.01982.pdf (accessed on 15 December 2021).
  49. Punn, N.S.; Agarwal, S. CHS-Net: A Deep Learning Approach for Hierarchical Segmentation of COVID-19 Infected CT Images. arXiv 2021, arXiv:2012.07079. [Google Scholar] [CrossRef]
Figure 1. U-net architecture [32].
Figure 2. Flowchart of the different phases involved in the proposed diagnosis system.
Figure 3. The impact of applying the CLAHE filter on an input chest CT scan.
Figure 4. A chest CT scan and its corresponding lung mask before and after cropping.
Figure 5. A chest CT scan and its corresponding infection mask before and after cropping.
Figure 6. Volume reconstruction of the lungs and the corresponding infection regions of: (a) a patient presenting a mild infection; (b) a patient presenting a moderate infection; (c) a patient presenting a severe infection.
Figure 7. Volume reconstruction of the lungs and the corresponding infection regions of the twenty patients.
Figure 8. Real runtime flowchart.
Figure 9. Dice Coefficient and Dice Loss training curves of the lung segmentation model.
Figure 10. (a) Dice Coefficient and Dice Loss training curves of the infection segmentation model using fold 1. (b) Dice Coefficient and Dice Loss training curves of the infection segmentation model using fold 2. (c) Dice Coefficient and Dice Loss training curves of the infection segmentation model using fold 3. (d) Dice Coefficient and Dice Loss training curves of the infection segmentation model using fold 4.
Figure 11. Classification model's loss curve.
Figure 12. Fourfold obtained dice for different chosen thresholds.
Figure 13. Visual qualitative comparison of the lung segmentation results between the ground truth and our proposed model on four slices from different CT scans. First column: original CT scan. Second column: ground truth. Third column: predicted lung masks.
Figure 14. Visual qualitative comparison of the infection segmentation results between the ground truth and our proposed model on four slices from different CT scans. First column: original CT scan. Second column: ground truth. Third column: predicted infection masks.
Table 1. A summary of the recently published studies on COVID-19 detection approaches.

| Category | Method | Images' Modality | Approach |
| --- | --- | --- | --- |
| Classification | Wang et al. COVID-Net [17] | X-ray | A deep residual CNN based model |
| Classification | El Araby et al. GSEN architecture [18] | X-ray | A novel Gray-Scale spatial exploitation learning Net (GSEN) for COVID-19 |
| Classification | Ahuja et al. Deep transfer learning [19] | CT scans | ResNet18 pre-trained transfer learning-based model |
| Infection Segmentation | Fan et al. Inf-Net [20] | CT scans | DL approach based on three reverse attention modules connected to a paralleled partial decoder |
| Infection Segmentation | Shan et al. VB-Net [21] | CT scans | A modified 3D CNN that combines V-Net with the bottle-neck structure |
| Infection Segmentation | Elzeki et al. (CNN-VGG19) [22] | X-ray | DL approach based on fusion algorithm using NSCT with deep learning VGG19 |
| Diagnosis System | Wu et al. JCS [23] | CT scans | Classification: Res2Net; Infection segmentation: VGG-16 backbone + Enhanced Feature Module + Attentive Feature Fusion |
| Diagnosis System | Gozes et al. [24] | CT scans | Classification: ResNet-50 2D deep CNN; ROI extraction: U-Net; Infection detection: Grad-CAM technique and commercial off-the-shelf software for lung pathology detection |
Table 2. Three sample images from the used dataset. First column: original CT scans. Second column: lung masks. Third column: infection masks. Fourth column: superposition of the lung and infection masks.
Table 3. Quantitative results of the volume reconstruction phase for the twenty patients of the used dataset.

| Patient | Slices Number | Lungs Volume | Infection Volume | Volume Ratio | Infection Severity |
| --- | --- | --- | --- | --- | --- |
| P1 | 301 | 3312776 | 408920 | 0.1234 | Moderate |
| P2 | 200 | 4622672 | 181422 | 0.0392 | Mild |
| P3 | 200 | 3498208 | 1015819 | 0.2903 | Severe |
| P4 | 270 | 4536387 | 59529.4 | 0.0131 | Mild |
| P5 | 290 | 4746608 | 80654.7 | 0.0169 | Mild |
| P6 | 213 | 4430680 | 125977 | 0.0284 | Moderate |
| P7 | 249 | 2916567 | 88225.8 | 0.0302 | Moderate |
| P8 | 301 | 3818469 | 278690 | 0.0729 | Moderate |
| P9 | 256 | 2639075 | 101621 | 0.0385 | Moderate |
| P10 | 301 | 2274445 | 396574 | 0.1743 | Severe |
| P11 | 39 | 5516346 | 1067853 | 0.1935 | Severe |
| P12 | 45 | 5736035 | 1044140 | 0.1820 | Severe |
| P13 | 39 | 4617109 | 156231 | 0.0338 | Moderate |
| P14 | 418 | 8186436 | 263209 | 0.0321 | Moderate |
| P15 | 110 | 4138735 | 1023851 | 0.2473 | Severe |
| P16 | 66 | 7882785 | 777185 | 0.0985 | Moderate |
| P17 | 42 | 3136991 | 9.01817 | 0.003 | Mild |
| P18 | 45 | 5729041 | 46320.6 | 0.0080 | Mild |
| P19 | 45 | 6664643 | 148246 | 0.0222 | Mild |
| P20 | 93 | 4842892 | 2844976 | 0.5874 | Severe |
Table 4. Quantitative results among different tasks on the testing set in terms of accuracy and Dice Coefficient values.

| Task | Accuracy | Dice Coefficient |
| --- | --- | --- |
| Lung Segmentation | 0.95 | 0.98 |
| Infection Segmentation | 0.94 | 0.91 |
| COVID-19 Classification | 0.98 | - |
Table 5. Fourfold cross-validation results of the infection segmentation model, calculated as the mean over folds.

| Dice Coefficient | IoU | Precision | Recall | AUROC | AUPRC |
| --- | --- | --- | --- | --- | --- |
| 0.91 | 0.85 | 0.92 | 0.90 | 0.95 | 0.91 |
Table 6. Quantitative comparison of COVID-19 CT lung and infection segmentation results in terms of Dice Coefficient.

| Works | Lungs Segmentation | Infection Segmentation |
| --- | --- | --- |
| Ma et al. [41] | 0.977 | 0.673 |
| Müller et al. [47] | 0.956 | 0.761 |
| Omar Alirr [48] | 0.961 | 0.780 |
| Punn and Agarwal [49] | 0.96 | 0.81 |
| Proposed method | 0.98 | 0.91 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
