CovidXrayNet: Optimizing data augmentation and CNN hyperparameters for improved COVID-19 detection from CXR

https://doi.org/10.1016/j.compbiomed.2021.104375

Highlights

  • Optimal data augmentation and CNN hyperparameters significantly increase the accuracy of CNNs.

  • We propose the CovidXrayNet model, which classifies a CXR as COVID-19, normal, or pneumonia.

  • CovidXrayNet achieves 95.82% accuracy on the COVIDx dataset with only 30 epochs of training.

  • We introduce COVIDcxr, a balanced dataset that consists of CXRs and tabular data.

  • The CovidXrayNet model and the COVIDcxr dataset are publicly available.

Abstract

To mitigate the spread of the coronavirus disease 2019 (COVID-19) pandemic, effective screening is crucial so that infected patients can be isolated and treated. Chest X-Ray (CXR) radiological imaging coupled with Artificial Intelligence (AI) applications, in particular Convolutional Neural Networks (CNNs), can speed up the COVID-19 diagnostic process. In this paper, we optimize data augmentation and CNN hyperparameters for detecting COVID-19 from CXRs in terms of validation accuracy. This optimization increases the accuracy of popular CNN architectures such as the Visual Geometry Group network (VGG-19) and the Residual Neural Network (ResNet-50) by 11.93% and 4.97%, respectively. We then propose CovidXrayNet, a model based on EfficientNet-B0 and our optimization results. We evaluate CovidXrayNet on two datasets: our generated balanced COVIDcxr dataset (960 CXRs) and the benchmark COVIDx dataset (15,496 CXRs). With only 30 epochs of training, CovidXrayNet achieves a state-of-the-art accuracy of 95.82% on the COVIDx dataset in the three-class classification task (COVID-19, normal, or pneumonia). The CovidXrayNet model, the COVIDcxr dataset, and several optimization experiments are publicly available at https://github.com/MaramMonshi/CovidXrayNet.

Keywords

Chest X-Ray
Convolutional neural network
COVID-19
Data augmentation
Hyperparameters
