Heliyon
Volume 8, Issue 10, October 2022, e11209

Research article
Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach

https://doi.org/10.1016/j.heliyon.2022.e11209
Open access under a Creative Commons license

Highlights

  • The current methods of testing for the virus are RT-PCR and RAT; results from these tests may take up to 24 h. Recent research has shown that machine learning techniques can detect the presence of the virus, or the damage it causes, from lung images.

  • Adversarial attacks are a major security threat in the domain of machine learning. Adversarial training is the most widely explored technique for defending against them.

  • The High-Level Representation Guided Denoiser (HGD), another defensive technique, is suited to high-resolution images, which makes it a good candidate for medical image applications.

  • The HGD architecture has been evaluated as a potential defensive technique for the task of medical image analysis, using a new loss function (see the sketch after these highlights).

  • In the white-box scenario, a considerable increase in accuracy is observed. However, in the black-box setting, the defense fails to defend against adversarial samples.
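For orientation, the sketch below shows the standard high-level representation guided training objective for an HGD denoiser, following Liao et al.'s original formulation rather than the new loss function proposed in this paper. The modules `denoiser` and `feature_extractor` are hypothetical placeholders for a trained denoiser and a frozen layer of the target classifier.

```python
# Minimal sketch of a high-level representation guided denoiser loss
# (standard HGD formulation; NOT the new loss introduced in this paper).
import torch
import torch.nn.functional as F

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    """L1 distance between the high-level features of the clean image and
    those of the denoised adversarial image; only the denoiser is updated."""
    x_denoised = denoiser(x_adv)                  # strip adversarial noise
    with torch.no_grad():
        target_feat = feature_extractor(x_clean)  # reference representation
    denoised_feat = feature_extractor(x_denoised)
    return F.l1_loss(denoised_feat, target_feat)
```

Because the loss is defined on the classifier's high-level representation rather than on raw pixels, the trained denoiser acts as a filter that can be placed in front of the classifier without retraining it.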

Abstract

Covid-19 has posed a serious threat to the existence of the human race. Early detection of the virus is vital to containing its spread effectively and treating patients. Standard testing methods such as the real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Antigen Test (RAT) are used for detection, but they have their limitations. The need for early detection has led researchers to explore other testing techniques. Deep Neural Network (DNN) models have shown high potential in medical image classification, and researchers have built various models that exhibit high accuracy for the task of Covid-19 detection using chest X-ray images. However, DNNs are known to be inherently susceptible to adversarial inputs, which can compromise the results of the models. In this paper, the adversarial robustness of such Covid-19 classifiers is evaluated by performing common adversarial attacks, namely the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Under these attacks, the accuracy of the models on Covid-19 samples decreases drastically. In the medical domain, adversarial training is the most widely explored technique for defending against adversarial attacks. However, this technique requires replacing the original model and retraining it with adversarial samples included. Another defensive technique, the High-Level Representation Guided Denoiser (HGD), overcomes this limitation by employing an adversarial filter that is also transferable across models. Moreover, the HGD architecture is suited to high-resolution images, which makes it a good candidate for medical image applications. In this paper, the HGD architecture is evaluated as a potential defensive technique for the task of medical image analysis. The experiments carried out show an increased accuracy of up to 82% in the white-box setting. However, in the black-box setting, the defense completely fails to defend against adversarial samples.
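For readers unfamiliar with the two attacks named above, the following is a minimal sketch of FGSM and PGD against a generic PyTorch image classifier. Names such as `model`, `epsilon`, `alpha`, and `steps` are illustrative assumptions; the specific attack parameters used in the paper are not reproduced here.

```python
# Hedged sketch of the FGSM and PGD attacks, assuming a PyTorch classifier
# `model` that maps image batches to class logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=2 / 255):
    """Single gradient-sign step: perturb x by epsilon in the direction
    that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range

def pgd_attack(model, x, y, epsilon=2 / 255, alpha=0.5 / 255, steps=10):
    """Iterated FGSM steps, projected back into the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-epsilon, epsilon)  # project
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

In a white-box setting these gradients are computed on the defended model itself; in a black-box setting they are computed on a substitute model and the resulting samples are transferred to the target.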

Keywords

Adversarial attacks
Denoiser
FGSM
PGD
HGD
Machine learning
Deep neural network
