iScience
Volume 25, Issue 4, 15 April 2022, 104031

Article
An artificial intelligence deep learning platform achieves high diagnostic accuracy for Covid-19 pneumonia by reading chest X-ray images

https://doi.org/10.1016/j.isci.2022.104031
Open access under a Creative Commons license

Highlights

  • We used artificial intelligence models to diagnose Covid-19 pneumonia by reading chest X-ray images

  • We employed our unique deep learning voting algorithms combining multiple convolutional neural networks

  • Our AI models reached a high diagnostic accuracy (>99%) for Covid-19 pneumonia detection

  • We obtained and analyzed a large chest X-ray image dataset (10,182 images)

Summary

The coronavirus disease of 2019 (Covid-19) causes deadly lung infections (pneumonia). Accurate clinical diagnosis of Covid-19 is essential for guiding treatment. The Covid-19 RNA test does not reflect the clinical features and severity of the disease. Pneumonia in Covid-19 patients can be caused by non-Covid-19 organisms, so distinguishing Covid-19 pneumonia from non-Covid-19 pneumonia is critical. Chest X-ray imaging detects pneumonia, but high diagnostic accuracy is difficult to achieve. We develop an artificial intelligence (AI)-based deep learning method with high diagnostic accuracy for Covid-19 pneumonia. We analyzed 10,182 chest X-ray images of healthy individuals, bacterial pneumonia, and viral pneumonia (Covid-19 and non-Covid-19) to build and test AI models. Among viral pneumonias, the diagnostic accuracy for Covid-19 reaches 99.95%. High diagnostic accuracy is also achieved for distinguishing Covid-19 pneumonia from bacterial pneumonia (99.85% accuracy) or normal lung images (100% accuracy). Our AI models accurately diagnose Covid-19 pneumonia by reading chest X-ray images alone.
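To illustrate the voting idea described in the highlights, the sketch below shows a simple majority-vote ensemble of three ImageNet-pretrained CNN backbones adapted for four-way chest X-ray classification (normal, bacterial pneumonia, Covid-19 viral pneumonia, non-Covid-19 viral pneumonia). This is a minimal illustration, not the authors' released implementation: the backbone choices (ResNet-50, DenseNet-121, VGG-16), class labels, preprocessing, and example file path are assumptions, and fine-tuning on labeled images is omitted. The authors' actual code is available via the Data and code availability section.

# Illustrative sketch only (not the authors' released code): majority voting
# over several ImageNet-pretrained CNNs repurposed for 4-way chest X-ray
# classification. Backbones, class names, and paths are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["normal", "bacterial", "covid19_viral", "non_covid19_viral"]

def build_backbones(num_classes=len(CLASSES)):
    """Create three standard CNN backbones with new classification heads."""
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

    densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

    return [resnet, densenet, vgg]

# Chest X-rays are single channel; replicate to 3 channels for ImageNet models.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def vote(models_list, image_path):
    """Return the class chosen by simple majority vote across the CNNs."""
    x = preprocess(Image.open(image_path)).unsqueeze(0)
    votes = torch.zeros(len(CLASSES))
    for m in models_list:
        m.eval()
        pred = m(x).argmax(dim=1).item()
        votes[pred] += 1
    return CLASSES[votes.argmax().item()]

if __name__ == "__main__":
    cnns = build_backbones()  # fine-tuning on labeled X-rays omitted here
    print(vote(cnns, "example_cxr.png"))  # hypothetical image path

In practice each backbone would first be fine-tuned on the labeled X-ray dataset; the voting step then aggregates their independent predictions, which is one common way to combine multiple CNNs into a more robust classifier.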

Subject areas

Radiology
Virology
Artificial intelligence

Data and code availability

  • The complete source code can be found at https://fts.umassmed.edu (username: dli; password: Dong2022) or obtained by sending a request to [email protected].

  • This paper was produced using large volumes of publicly available image data. The authors have made every effort to provide links to these resources, as well as the software, methods, and information used to produce the dataset, analyses, and summary information. The datasets used in this study are available online (https://fts.umassmed.edu/).
