1 Introduction

Lung-related diseases have emerged as some of the most prevalent medical conditions in humans globally. Diseases of the lung can be categorized as (i) airway diseases, (ii) circulation diseases, and (iii) tissue diseases [1], [2], [3]. Airway diseases obstruct the supply of oxygen and other gases through the airways; examples are asthma, cystic fibrosis, chronic obstructive pulmonary disease (COPD), tuberculosis (TB), and bronchitis. Circulation diseases adversely affect blood flow in the lungs due to clotting inside the blood vessels; pulmonary embolism and pulmonary hypertension come under this category. Lung tissue diseases are caused by inflammation of the tissue that limits the lungs' ability to expand, e.g., sarcoidosis and pulmonary fibrosis. Other diseases that affect the lungs are lung cancer, pneumothorax, pneumonia, and Acute Respiratory Distress Syndrome (ARDS) [4].

Coronavirus Disease 2019 (COVID-19) is a rapidly spreading infectious disease that has affected a large part of the global population, irrespective of gender and race. COVID-19 infection severely affects the respiratory tract and creates lesions on the lungs, which impair normal lung function. The infection was initially discovered in Wuhan, China, in December 2019. On February 11, 2020, the International Committee on Taxonomy of Viruses (ICTV) named the virus "severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)" and the WHO named the disease "COVID-19" [5], [6]. The symptoms of COVID-19 range from dry cough, tiredness, mild to moderate respiratory illness, and loss of taste to fever [7]. The disease spreads from an infected person to others through micron-sized droplets from the nose and/or mouth, which are expelled when a person with COVID-19 sneezes, coughs, or even speaks. People with medical problems such as diabetes, chronic respiratory disease, cardiovascular diseases, and cancer are more likely to develop serious illness, and it has been shown that older people with other medical complications are more prone to infection than the young. As of May 13, 2020, the total reported COVID-19 cases were 1,408,636 in the USA, 269,520 in Spain, 232,243 in Russia, 226,463 in the UK, and 221,216 in Italy [6]. The deaths due to COVID-19 in these countries were 83,425 in the USA, 26,920 in Spain, 2,116 in Russia, 32,692 in the UK, and 30,911 in Italy. In India, the total cases reported as of May 13, 2020, were 74,480, with 2,415 deaths. Across the world, researchers are working on numerous areas related to COVID-19 diagnosis and treatment, e.g., medical equipment to detect COVID-19 and vaccines for its treatment.

COVID-19 testing kits are divided into two categories: a) antigen tests, which detect currently infected patients, and b) antibody tests, which detect antibodies in the blood of a person previously infected with the virus. The majority of tests being produced to detect COVID-19 are termed PCR tests because they use the polymerase chain reaction (PCR). The steps involved in a PCR test are collection of a clinical specimen with a nasopharyngeal swab, storage of the swab, shipment of the samples to a laboratory for RNA extraction, and quantitative reverse transcription PCR [8]. The major challenges in rapid COVID-19 detection are as follows: a) conventional PCR test kits take a long time to diagnose the disease, and b) production of probes, primers, and physical equipment (swabs, containers, etc.) is slow. Globally, governments have not been able to reduce the spread of COVID-19 in the population for the following reasons: a) an insufficient number of COVID-19 detection kits per million population, and b) no vaccines or drug treatments available to date [9]. The total tests conducted for COVID-19 in various countries are as follows: 9,935,720 in the USA; 2,467,761 in Spain; 5,805,404 in Russia; 2,007,146 in the UK; and 2,673,655 in Italy [6]. The total tests per million population in these countries are 30,017 in the USA, 52,781 in Spain, 39,781 in Russia, 29,566 in the UK, and 44,221 in Italy, whereas in India the total tests conducted are 1,854,250, i.e., 1,344 tests per million population. Thus, due to the shortage of COVID-19 testing kits, a smaller number of tests is conducted per million population.

The COVID-19 pandemic has a severe impact on the respiratory as well as other systems of the human body. Thus, imaging features of chest radiography are found to be useful for rapid COVID-19 detection. These imaging features can be obtained through modalities such as computed tomography (CT) scans and X-rays. The advantages of a CT scan over an X-ray are a) formation of a 3-D view of the organ and b) convenient examination of the disease and its location, whereas an X-ray provides only a 2-D view of an organ that helps in the examination of dense tissues. Also, the machinery to acquire CT scan images is already available in adequate numbers in most countries. Therefore, chest CT scans for COVID-19 detection are drawing researchers' attention [10], [11]. However, expert radiologist guidance is required for accurate and rapid COVID-19 detection from a chest CT scan. Timely and accurate treatment of COVID-19 is a challenging task for healthcare givers, and the limited availability of conventional COVID-19 detection kits is a major issue. Thus, an automatic diagnosis model is required to detect COVID-19 from chest CT images and reduce manual involvement in disease detection.

The proposed work aims to automatically detect and localize COVID-19 using chest CT scans. For this, a novel three-phase COVID-19 detection model is proposed: a) Phase 1, data augmentation using wavelets; b) Phase 2, disease detection; and c) Phase 3, abnormality localization. Only a limited number of CT scans of COVID-19 disease are available online for the research community. To overcome the need for a large training database and to avoid overfitting, the pre-processed images are decomposed to three levels using stationary wavelets, and shear, rotation, and translation operations are then applied to all these images. In Phase 2, CT scans are classified into binary classes, i.e., COVID and Non-COVID, using transfer-learning-based techniques. Four pre-trained transfer learning models with deep convolution layers are used, namely ResNet18, ResNet101, ResNet50, and SqueezeNet, and the best-trained model is selected based on common performance parameters. In Phase 3, the abnormality in chest CT scan images of COVID-19 positive cases is localized using the feature maps and activation layers of the best-performing pre-trained transfer learning model. The contributions of the proposed work can be summarized as follows: i) the proposed methodology with novel data augmentation on a limited dataset is used to classify CT scan data into binary classes, i.e., COVID-19 and Non-COVID; ii) the performance of four pre-trained transfer learning models is compared to address COVID-19 detection through CT scans with a limited dataset; and iii) the feature maps of a deeper layer (pooling layer) of the best-performing transfer learning model are used to investigate the abnormality in COVID-19 positive patients.

The paper is organized as follows: Section 2 illustrates the literature review; Section 3 discusses the proposed methodology; Section 4 gives details about the transfer-learning-based pre-trained CNN models; Section 5 puts forth the experimentation and discussion; and Section 6 concludes the proposed work.

2 Literature review

The state-of-the-art put forth in Table 1 indicates that AI, along with radiology imaging of COVID-19 positive patients, can be helpful for timely and accurate diagnosis of the disease [12].

Table 1 Summary of state-of-the-art-of COVID-19 detection techniques

From this brief review of research carried out on COVID-19 diagnosis, it can be concluded that only a limited amount of chest radiographic data is available for COVID-19 detection and that deep learning is beneficial for COVID-19 detection using chest CT radiography. The conventional methods of COVID-19 detection, i.e., PCR kits and reverse transcription polymerase chain reaction (RT-PCR), have certain drawbacks, such as limited production of the kits, the time taken to obtain results, and a relatively low pooled sensitivity of RT-PCR, i.e., 89% [7]. The real-time clinical problem in COVID-19 detection using conventional RT-PCR test kits is false negatives. Thus, to resolve the issues caused by PCR kits, researchers are looking into chest radiography as an alternative for COVID-19 detection.

Researchers have mainly analyzed chest X-ray imaging and chest CT scans of patients with COVID-19 positive cases. Chest CT scans display bilateral ground-glass opacity [42]. Recent studies found that the sensitivity of CT for COVID-19 infection is 98%, compared with an RT-PCR sensitivity of 71% [46]. Moreover, a CT scan is more useful for COVID-19 detection because it provides a complete 3-D view of the organ, so the nature of the abnormality can be better diagnosed than with X-ray images [47], [48], [49], [50]. However, the availability of chest CT images of COVID-19 positive cases is limited. Thus, transfer learning along with data augmentation proves to be a useful method for detecting abnormality from a small dataset of chest radiography of COVID-19 patients.

Apart from COVID-19 detection, certain efforts have been made to forecast the spread of the disease. A Composite Monte Carlo (CMC) simulation method, along with a deep learning network and fuzzy rule induction, is used to forecast the spread of the COVID-19 pandemic [51]. Further, a Polynomial Neural Network with corrective feedback (PNN+cf) is used to forecast the pandemic spread with relatively lower prediction error [52].

3 Methodology

In the proposed work, the immense need for a large number of COVID-19 positive lung CT scan images is addressed using a stationary-wavelet-based data augmentation technique. The pre-trained model extracts features from the augmented training images and incorporates multi-scale discriminant features to detect the binary class labels (COVID-19 and Non-COVID). The hierarchical representation of the proposed three-phase methodology for COVID-19 detection using lung CT scan slices is shown in Figure 1.

Fig. 1

Schematic diagram of the proposed methodology of COVID-19 detection

3.1 Phase-1 (Data Augmentation)

In the present study, the CT scan images need to be made compatible with the pre-trained transfer-learning-based models. For this, the pre-processing steps are: conversion of the input image data type, resizing of the input images, and normalization of the input images. However, only a limited number of CT scans of COVID-19 positive cases are available. To resolve this, the pre-trained transfer-learning-based CNN models are combined with data augmentation.

The proposed methodology is implemented on a database of lung CT scan images of COVID-19 positive patients and normal patients. The database contains 349 COVID-19 positive CT images from 216 patients and 397 CT images of Non-COVID patients [35], [38], [39], [40]. The input CT images are available in different sizes and in several image formats (JPEG, PNG). Figure 2 presents sample CT scan images of COVID-19 and normal patients from the database used.

Fig. 2

Sample lung CT scan images of a COVID-19 and b Non-COVID patients

Details of the CT scan input data used in the proposed work are put forth in Table 2. Because the CT scan images in the input dataset are of different sizes, the input images are first converted to JPEG format and resized to 256x256x3 to maintain uniformity. These images are compatible with stationary wavelet decomposition up to three levels because the size of all the images remains the same at every level, i.e., 256x256x3. Further, the four transfer learning models used for the binary classification (ResNet18, ResNet50, ResNet101, and SqueezeNet) have different input size requirements, so during the transfer learning process the image sizes are adjusted again, while the uniformity of the input CT scans used by the pre-trained layers is retained thanks to the pre-processing step. The resized CT scan images are then normalized to the confined range [0, 1] using (1).

$$ I\_norm = \frac{In - \min(In)}{\max(In) - \min(In)} $$
(1)
Table 2 Brief detail of the input dataset used for the proposed work

Here, 'In' represents an input CT scan image from the binary classes (COVID and Non-COVID). Let the generalized size of 'In' be p x q x 3, and let the normalized image be represented by 'I_norm'. Overfitting is a major challenge for a transfer-learning-based model trained on a limited dataset; therefore, a data augmentation technique is applied to the training dataset to mitigate the overfitting problem. In the proposed methodology, data augmentation is performed as follows (a minimal sketch is given below): a) decompose the training images up to 3 levels using stationary wavelets, b) apply a shear operation within the range [-30, 30], c) apply a random rotation of the training data within the range [-90, 90], and d) apply a random translation within the pixel range [-10, 10].
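The MATLAB snippet below is a minimal sketch of these pre-processing and augmentation settings; the file name 'ct_slice.jpg' and the variable names are placeholders, and the shear and rotation ranges are assumed to be in degrees and the translation range in pixels.

```matlab
% Pre-processing (resize + Eq. (1) min-max normalization) and the random
% geometric augmentation ranges stated above.
In = imread('ct_slice.jpg');                          % hypothetical input CT slice
if size(In, 3) == 1, In = repmat(In, [1 1 3]); end    % ensure 3 channels
In = im2double(imresize(In, [256 256]));              % uniform size 256x256x3
I_norm = (In - min(In(:))) ./ (max(In(:)) - min(In(:)));   % Eq. (1)

% Random rotation, shear, and translation in the stated ranges
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-90 90], ...   % degrees
    'RandXShear',       [-30 30], ...   % degrees (assumed unit)
    'RandYShear',       [-30 30], ...
    'RandXTranslation', [-10 10], ...   % pixels
    'RandYTranslation', [-10 10]);
```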

Wavelet transforms are used to extract useful information from data. Wavelets such as Daubechies, Haar, and Coiflets, through certain modifications of their parameters, possess useful properties such as compact support, symmetry, regularity, and smoothness, which make them well suited to image processing applications. The input image I_norm is fed to stationary wavelet decomposition up to k levels. The dimensions of the decomposed output depend on the dimensions of I_norm and the level k. If I_norm is a 2-D matrix and k is greater than 1, the outputs are 3-D arrays with the following stationary wavelet coefficients:

$$ \begin{array}{@{}rcl@{}} SWC\_2D&=&[H(:,:,1:k);V(:,:,1:k);\\&&D(:,:,1:k);A(:,:,k)] \end{array} $$
(2)

Here, 'A' symbolizes the approximation, 'D' the diagonal, 'V' the vertical, and 'H' the horizontal coefficients. For n ≤ k in (2), the output matrix SWC_2D contains the approximation coefficients A(:,:,n) of level n, while H(:,:,n), V(:,:,n), and D(:,:,n) contain the detail coefficients of level n (horizontal, vertical, and diagonal). If I_norm is a 3-D matrix of dimension p x q x 3 and k is greater than 1, then the output coefficients are 4-D arrays of dimension p x q x 3 x k with the following output matrix and coefficients:

$$ \begin{array}{@{}rcl@{}} SWC\_3D=[H(:,:,1:3,1:k);V(:,:,1:3,1:k);\\ D(:,:,1:3,1:k);A(:,:,1:3,k)] \end{array} $$
(3)

In (3), for n ≤ k and t = 1, 2, 3, the output matrices H(:,:,t,n), V(:,:,t,n), and D(:,:,t,n) contain the detail coefficients of level n (horizontal, vertical, and diagonal), and A(:,:,t,n) contains the approximation coefficients of level n [53], [54].

In the proposed work, the stationary wavelet transform performs 3 levels of wavelet decomposition. The 2-D stationary wavelet transform can employ orthogonal wavelets (Haar; Daubechies: db1, db2, ..., db10, etc.) or biorthogonal wavelets (bior1.1, bior1.3, bior1.5, etc.); here, the decomposition uses the db2 orthogonal filter. In 2-D stationary wavelet decomposition, the normalized input image I_norm passes through a set of Low Pass Filters (LPF) and High Pass Filters (HPF). Unlike the decimated wavelet transform, the stationary wavelet transform does not down-sample the image, so all coefficient images retain the original size. Level 1 decomposition yields the level 1 approximation coefficients (A1), containing the low-frequency content, and the detail coefficients (H1, V1, and D1), containing the high-frequency content. The second-level decomposition yields the level 2 approximation coefficients (A2) and the level 2 detail coefficients (H2, V2, and D2). Similarly, the level 3 outputs are the approximation coefficients A3 and the detail coefficients H3, V3, and D3. The detailed hierarchy of the normalized input CT scan images decomposed up to three levels is shown in Figure 3. The A1, A2, and A3 images obtained from the 3-level wavelet decomposition contain the most useful information, and their size remains unaltered. Thus, to enlarge the training data, these wavelet-decomposed images further undergo random rotation, shear, and translation operations within specific ranges, as shown in Figure 4; a channel-wise sketch of the decomposition is given below.
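The decomposition step can be sketched as follows, assuming MATLAB's Wavelet Toolbox function swt2 applied per colour channel; only the approximation images A1-A3 are kept for augmentation.

```matlab
% 3-level stationary wavelet decomposition with the 'db2' filter, applied
% channel-wise. Because the transform is undecimated, the approximation
% images A1, A2, A3 keep the 256x256 size of I_norm.
k = 3;                                   % number of decomposition levels
A = zeros(256, 256, 3, k);               % approximation images A1..A3
for t = 1:3                              % loop over the three colour channels
    [a, ~, ~, ~] = swt2(I_norm(:, :, t), k, 'db2');   % a is 256x256xk
    A(:, :, t, :) = reshape(a, [256 256 1 k]);
end
% A(:,:,:,n) is the level-n approximation added to the training pool before
% the random rotation, shear, and translation operations.
```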

Fig. 3

Generalized block diagram of the data augmentation using wavelet decomposition up to 3 levels

Fig. 4

Augmented training data after rotation, shear, and translation operations

3.2 Phase-2 (Transfer Learning Models)

The pre-trained transfer-learning-based COVID-19 detection model classifies a lung CT scan into binary classes: a) COVID-19 and b) Non-COVID. The models used for the binary classification are ResNet18, ResNet50, ResNet101, and SqueezeNet. The size of the augmented training images is adjusted for compatibility with each pre-trained CNN model. The input size requirements of the transfer learning models are: ResNet18 (224x224x3), ResNet50 (224x224x3), ResNet101 (224x224x3), and SqueezeNet (227x227x3).

The pre-trained models can classify CT scan images based on the class labels assigned to the training dataset, i.e., COVID-19 and Non-COVID. To classify new images, each pre-trained model is retrained by updating its fully connected layers according to the input augmented dataset. The training parameters chosen for the transfer-learning-based CNN models are: a) the 'sgdm' optimizer, b) a mini-batch size of 64, c) training for up to 50 epochs, d) a validation frequency of 3, and e) an initial learning rate of 3e-4. The performance of the different pre-trained networks is examined based on the following parameters: accuracy, precision, Negative Predictive Value (NPV), sensitivity, AUC, F1-score, and specificity. Further, a deeper layer of the best-performing model is used for abnormality detection in CT scan images of COVID-19 positive cases. A hedged sketch of this training configuration is given below.
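In the sketch below (assuming MATLAB's Deep Learning Toolbox), augTrain and augValid are placeholder names for the augmented training and validation datastores, and lgraph is the adapted pre-trained network whose construction is sketched in Section 4.

```matlab
% Training configuration stated above.
options = trainingOptions('sgdm', ...
    'MiniBatchSize',       64, ...
    'MaxEpochs',           50, ...
    'InitialLearnRate',    3e-4, ...
    'ValidationFrequency', 3, ...
    'ValidationData',      augValid, ...           % placeholder validation datastore
    'Plots',               'training-progress');
trainedNet = trainNetwork(augTrain, lgraph, options);   % lgraph: adapted ResNet18
```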

3.3 Phase-3 (Abnormality localization using deeper layer)

In Phase 3 of the proposed methodology, the activations of the layers and of different channels of the pre-trained network are examined. Each layer of a pre-trained CNN contains channels (consisting of many 2-D arrays). The output activations of the first convolution layer (conv1) and of a deeper layer of the best-trained transfer learning model are used to localize the abnormality in a COVID-19 positive CT scan image. The features obtained from the network are evaluated by comparing the areas of activation with the input COVID-19 CT scan image. The activations are obtained as a 3-D array whose third dimension indexes the channels of the examined layer. Typically, color- and edge-like features are detected in conv1 and more complex features in the deeper layers.
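As an illustration, the conv1 activations can be inspected as follows; this is a sketch assuming MATLAB's activations function, where testImage is a placeholder for a pre-processed COVID-19 positive test slice and 'conv1' is the name of the first convolutional layer in MATLAB's ResNet18.

```matlab
% Visualize first-convolution-layer activations as a tiled grid of channels.
img  = imresize(testImage, [224 224]);          % assumed RGB test slice
act1 = activations(trainedNet, img, 'conv1');   % H x W x C activation array
sz   = size(act1);
act1 = reshape(act1, [sz(1) sz(2) 1 sz(3)]);    % one image per channel
imshow(imtile(rescale(act1)));                  % each map scaled to [0, 1]
```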

4 Transfer learning model

Transfer-learning-based CNN models have the following advantages: a) limited pre-processing of the dataset is required, b) a faster learning process, c) time complexity can be adjusted by reducing the number of parameters, and d) they work well on limited datasets and are thus suitable for medical image classification tasks. In the proposed work, transfer learning is used for binary classification of chest CT scan images into COVID-19 and Non-COVID. For this, four different pre-trained transfer-learning-based CNN models are used, namely ResNet18, ResNet50, ResNet101, and SqueezeNet. These CNN models are trained to classify images into 1000 object categories from the ImageNet database [55], [56]. The pre-trained transfer learning models are retrained for the binary classification of chest CT scans into 2 classes.

SqueezeNet is used to investigate the performance of the binary classification of chest CT images with a smaller deep neural network. SqueezeNet is a 68-layer model and requires a 227x227x3 input image [57]. Although the time complexity is low with SqueezeNet, its performance on the proposed task is not satisfactory. The pre-trained ResNet18 model used for binary classification of CT scan images is 71 layers deep and requires an input image of size 224x224x3 [58]. The 'sgdm' optimizer is used for training. The ResNet50 model used in the proposed work is 177 layers deep and requires a 224x224x3 input image. Further, ResNet101 has 347 layers and is thus more complex than the other two residual models used; it also requires an input image of size 224x224x3. The initial learning rate used is 3e-4 in all the pre-trained models. The residual models are chosen because of their easy optimization, faster convergence, and improved accuracy with increased depth [59]. However, the time consumption increases with the number of layers of each residual network.

Since the ResNet18 model is the best-performing model, its architecture is explained in detail. Figure 5 puts forth a descriptive block diagram of the ResNet18 architecture. The input images are pre-processed to make them compatible with the pre-trained residual network, i.e., an image of size 256x256x3 is converted into 224x224x3. The convolutional layers (conv1, conv2_x, conv3_x, etc.) apply filters that scan the whole pre-processed CT scan image, and each convolution layer creates feature maps from which class probabilities are predicted. The role of the first convolution layer, conv1, is to provide low-level features such as color, edges, and gradients. The deeper convolution layers provide high-level features; in the proposed work, the abnormality in CT scan images is localized from a deeper layer. The spatial size of the convolution features is reduced by the pooling layer, which can be of the following types: a) max-pooling, b) average-pooling, and c) global-pooling. Dominant features, such as positionally and rotationally invariant ones, are obtained in the pooling layer. The fully connected layer (Fc) receives the flattened output from the pooling layer and acts as a feed-forward network. The softmax layer limits the output to the range [0, 1] and thus predicts the probability that the input belongs to a particular learned class. For transfer learning using pre-trained ResNet18, the 'Fc' layer is replaced by a new layer whose number of outputs equals the number of classes in the CT scan dataset on which the model is trained; in the proposed work, the number of outputs is 2: a) COVID-19 and b) Non-COVID. In the case of SqueezeNet, the last learnable layer is a 1x1 convolutional layer instead of an 'Fc' layer; thus, in SqueezeNet, the last convolutional layer is replaced with a new convolutional layer whose number of filters equals the number of classes (2 classes: COVID-19 and Non-COVID). A sketch of this layer replacement is given below.
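The sketch below shows this replacement for ResNet18 (and, in a comment, SqueezeNet); the layer names 'fc1000' and 'conv10' are those of MATLAB's pre-trained networks and should be verified with analyzeNetwork if a different implementation is used.

```matlab
% Adapt pre-trained ResNet18 to the two-class (COVID-19 / Non-COVID) task.
net    = resnet18;                         % requires the ResNet-18 support package
lgraph = layerGraph(net);
lgraph = replaceLayer(lgraph, 'fc1000', ...
                      fullyConnectedLayer(2, 'Name', 'fc_covid'));
lgraph = replaceLayer(lgraph, lgraph.Layers(end).Name, ...      % output layer
                      classificationLayer('Name', 'out_covid'));
% For SqueezeNet, replace the last 1x1 convolution instead, e.g.:
% lgraphSq = replaceLayer(lgraphSq, 'conv10', ...
%                         convolution2dLayer(1, 2, 'Name', 'conv_covid'));
```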

Fig. 5

Descriptive block diagram of ResNet18 architecture

ResNets introduce skip connections that add the output of an earlier layer to a later layer, so that learning does not degrade from the first layer to the deeper layers. The Rectified Linear Unit (ReLU) is the activation function used in these CNN models. The CNN-based ResNet models are implemented by skipping two or three layers with non-linearities (ReLU), and additional weight matrices are used to learn the skip weights. This helps to resolve the vanishing gradient issue in deep networks. As the residual network learns the feature space, it gradually restores the skipped layers, as shown in Figure 6. The details of the hyperparameters used for the pre-trained ResNet18 model are provided in Table 3.
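For reference, the mapping learned by such a block can be written, following the standard residual-learning formulation, as

$$ y = \mathcal{F}(x, \{W_{i}\}) + x $$

where x and y are the input and output of the block and F(x, {W_i}) is the residual function realized by the skipped stacked layers.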

Fig. 6

Skip connection scenario in the residual network

Table 3 Brief details of hyperparameters used for pre-trained ResNet18 model

5 Experimentation and discussion

The proposed methodology is implemented in MATLAB 2019b with an NVIDIA GeForce MX130 GPU. The system configuration is an Intel Core i7 processor at 1.80 GHz, a 2 GB graphics card, a 64-bit operating system, and 8 GB RAM.

5.1 The pre-trained transfer learning model

In the proposed research, the pre-trained transfer learning models are trained and validated for binary classification of the input CT scan images into COVID-19 and Non-COVID. For the binary classification task, four different transfer learning models (ResNet18, ResNet50, ResNet101, and SqueezeNet) are examined. The highest classification accuracy, i.e., a training accuracy of 99.82% and a validation accuracy of 97.32%, is obtained with the ResNet18 model on the CT scan dataset used. Figure 7 shows the convergence graph of the training and validation accuracy of the ResNet18 model along with the loss function up to 50 epochs (7:3 split of training and validation data). The classification performance of the ResNet18 model with and without augmentation is put forth in Table 4. The performance with the novel data augmentation technique is much superior to the accuracy of the model trained without augmentation. The data augmentation technique also mitigates the overfitting issues of the transfer learning model trained on the limited COVID-19 positive CT scan dataset. However, the robustness of the model can be further investigated by implementing the proposed methodology on larger datasets.

Fig. 7

Convergence graph of Accuracy and loss function using ResNet18 model up to epoch 50

Table 4 Comparison of training, validation, and testing accuracy of the ResNet18 model on the used dataset with and without augmentation

The classification performance of the pre-trained transfer learning models on the testing data is evaluated using the following parameters: a) sensitivity, which measures the correct detection of patients with COVID-19 disease, b) specificity, which measures the correct detection of normal patients from CT scan images, c) accuracy, the ratio of correct predictions to total predictions of COVID-19 disease, d) AUC, which quantifies the ability of the pre-trained transfer learning model to differentiate between the binary classes, i.e., COVID and Non-COVID, and e) the F1-score, the harmonic mean of sensitivity and precision. The ResNet18 pre-trained transfer learning model obtained a testing accuracy of 99.4%, and its major outcomes are: a) COVID and Non-COVID images are classified with 98.6% specificity and 100% sensitivity, and b) an AUC of 0.9965 (shown in Figure 8). The performance parameters are summarized in Table 5 (TP: True Positive, FP: False Positive, TN: True Negative, FN: False Negative). The performance comparison of the pre-trained models used in the proposed work (ResNet18, ResNet50, ResNet101, and SqueezeNet) is shown in Figure 9. A sketch of this evaluation is given below.
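The sketch below assumes that YTest and the test datastore augTest are placeholders, that the class label 'COVID' is the positive class, and that it is listed first by confusionmat and corresponds to the first column of the classification scores; these assumptions should be checked against the actual class ordering.

```matlab
% Testing-data evaluation: confusion-matrix based metrics and ROC/AUC.
[YPred, scores] = classify(trainedNet, augTest);      % augTest: test datastore
C  = confusionmat(YTest, YPred);          % rows: true class, cols: predicted
TP = C(1,1); FN = C(1,2); FP = C(2,1); TN = C(2,2);   % assumes 'COVID' listed first
sensitivity = TP / (TP + FN);
specificity = TN / (TN + FP);
precision   = TP / (TP + FP);
NPV         = TN / (TN + FN);
accuracy    = (TP + TN) / sum(C(:));
F1          = 2 * precision * sensitivity / (precision + sensitivity);
[~, ~, ~, AUC] = perfcurve(YTest, scores(:, 1), 'COVID');   % ROC / AUC
```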

Fig. 8

ROC characteristics curve of ResNet18

Fig. 9

Glyph plot for the performance evaluation of pre-trained transfer learning models

Table 5 Performance parameters of transfer learning models on testing data

5.2 Localization of abnormality using feature maps

The first convolutional layer (conv1) and a deeper layer of the pre-trained transfer learning model ResNet18 are used to obtain the feature maps. Low-level features, namely texture, color, and edges, are generally captured by the first convolutional layer (conv1). The output activations are obtained by passing the test image (a COVID-19 positive CT scan) through the best-performing pre-trained ResNet18 network. All the activations are scaled to the range [0, 1], where '0' symbolizes minimum activation and '1' symbolizes maximum activation. Details of the abnormality (location and severity) in the medical data can be obtained from the more complex features of the deeper layers of the CNN model; in the proposed pre-trained ResNet18 model, the deeper layers used are conv5_x and the pooling layer. In these layers, the feature maps represent the features learned by the pre-trained model on the CT scan dataset used. The features useful for abnormality localization in COVID-19 positive CT scans are obtained through the strongest activation channel, as sketched below. Table 6 presents a brief performance comparison of the proposed methodology for COVID-19 detection with techniques available in the literature using chest radiography.

Table 6 Performance comparison of the proposed methodology with state-of-the-art COVID-19 detection techniques using chest radiography
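A hedged sketch of the localization step follows; 'res5b_relu' is assumed to be the name of the last conv5_x activation in MATLAB's ResNet18 (verify with trainedNet.Layers), and img is the pre-processed 224x224x3 test slice from the earlier sketch.

```matlab
% Localize the abnormality from the strongest deeper-layer activation channel.
actD = activations(trainedNet, img, 'res5b_relu');   % e.g., 7 x 7 x 512
[~, chMax] = max(max(max(actD, [], 1), [], 2));      % strongest channel index
heatMap = rescale(actD(:, :, chMax));                % scale to [0, 1]
heatMap = imresize(heatMap, [224 224]);              % match the input size
imshowpair(rgb2gray(img), heatMap, 'blend');         % overlay on the CT slice
```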

6 Conclusion

This work proposes a three-phase methodology to classify the considered lung CT scan slices into COVID-19 and Non-COVID classes. Initially, the collected images are resized as required, and the following procedures are implemented sequentially. In Phase 1, data augmentation is implemented by decomposing the CT scan slices into 3 levels using stationary wavelets; further operations, such as random rotation, translation, and shear, are applied to increase the dataset size. In Phase 2, a two-class classification is executed using four different transfer-learning-based architectures, ResNet18, ResNet50, ResNet101, and SqueezeNet, and their performances are compared. The highest classification accuracy for training (99.82%) and validation (97.32%) is achieved with the ResNet18 transfer learning model, which on the testing data yields an accuracy of 99.4%, a sensitivity of 100%, a specificity of 98.6%, and the highest AUC of 0.9965. In Phase 3, the best-performing model (ResNet18) is selected and used for abnormality localization in the chest CT scan slices of COVID-19 positive cases. The developed model will certainly help in the rapid and accurate detection of the COVID-19 signature from lung CT scan slices. In the future, the performance of the proposed system can be examined on clinically obtained CT scan slices with COVID-19 infection, and the proposed methodology needs to be investigated on a larger set of CT scans of COVID-19 positive patients.