Pandemics pose unique challenges in that their rapid spread necessitates a quick response on many fronts, from diagnostic modalities to drug development and medical resource allocation and planning. The quarantines that must be implemented, as seen with the coronavirus disease 2019 (COVID-19) outbreak, further strain these efforts, since hospital personnel and researchers may themselves be furloughed while being evaluated for symptoms [1]. The resulting shortage of staff is detrimental not only to those who have the disease in question but also to others who may require access to the emergency department or intensive care. Artificial intelligence (AI) methodologies have increasingly been studied as a potential tool to improve existing modalities. The number of research papers published on COVID-19 and AI has been growing exponentially since March 2020 [2]. However, while AI has shown immense promise in its ability to help counter the rapid spread of disease in a pandemic, there are significant ethical and legal considerations that must be taken into account before it can be used on a widespread scale.

Advantages of artificial intelligence

One major advantage AI provides in a pandemic setting like that of COVID-19 is its ability to assist in diagnosis through the use of big data. AI has the potential to diagnose disease more quickly and cost-effectively [3]. COVID-Net, a neural network that diagnoses COVID-19 from chest radiographs with a reported accuracy of 92.4%, works more quickly than the conventional reverse transcriptase-polymerase chain reaction (RT-PCR) technique, which, while highly specific, is time-consuming, involves a complicated manual process, and depends on test kits that are often in short supply [4, 5]. AI models can substantially reduce the number of images radiologists must review and the time they spend reviewing them, which is especially important when hospitals are overcrowded and a rapid diagnosis determines whether to quarantine a patient or provide healthcare workers with additional personal protective equipment.
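To make the workflow concrete, the following is a minimal sketch of how a convolutional classifier of this kind might be applied to a single radiograph at inference time. The model file, preprocessing choices, and label set are illustrative assumptions, not COVID-Net's actual artifacts.

```python
# Minimal sketch: running a pretrained chest-radiograph classifier on one image.
# "covid_classifier.pt", the input size, and the label set are hypothetical.
import torch
from torchvision import transforms
from PIL import Image

LABELS = ["normal", "non-COVID pneumonia", "COVID-19"]  # assumed classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("covid_classifier.pt")  # hypothetical scripted network
model.eval()

image = Image.open("chest_xray.png").convert("RGB")
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1).squeeze()

for label, p in zip(LABELS, probs):
    print(f"{label}: {p.item():.1%}")
```

In a deployed system, the probability assigned to the COVID-19 class would feed directly into the triage decisions described above, such as whether to quarantine the patient.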

AI can also play a key role in the prognostic staging of disease. It has been used to predict which patients will need more intensive care, and could thereby be used to triage patients during pandemics [3]. In one study, blood samples from patients in Wuhan, China, were retrospectively analyzed with machine learning algorithms to identify the biological markers most predictive of COVID-19 survival [6]. By identifying the patients at highest risk of death or other complications, hospital personnel can deliver more targeted care and allocate resources appropriately when they are scarce.
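A hedged sketch of this kind of analysis is shown below: a gradient-boosted tree model is fit to routine blood-panel values, and its feature importances are read off as a crude ranking of discriminative markers. The file and column names are invented for illustration; the cited study used real patient panels with its own model and validation protocol.

```python
# Minimal sketch: ranking blood markers by how strongly a model relies on them
# to predict survival. Dataset and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("blood_panels.csv")             # assumed retrospective data
features = ["ldh", "lymphocyte_pct", "hs_crp"]   # assumed marker columns
X, y = df[features], df["survived"]              # assumed binary outcome

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Higher importance = the model leans on that marker more when predicting.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```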

Newer AI technology has even proved accurate in predicting the onset and spread of outbreaks in general. This is essential, as the faster a country can predict and respond to a potential pandemic, the more lives can be saved. BlueDot, a startup that collected data from news reports, airline ticketing, and animal disease outbreaks, used its AI models to accurately predict which areas of the world would be most prone to COVID-19 outbreaks. The company also successfully predicted the spread of H1N1 influenza in 2009, the spread of Ebola in West Africa in 2014, and the spread of the Zika virus to Florida in 2016 [7]. Another AI company, Metabiota, traced flight data to correctly anticipate that many Asian countries would be at risk of COVID-19 days before the initial cases appeared. The AI-based HealthMap system at Boston Children’s Hospital was also the first to notify the world about the coronavirus on December 30, 2019, days before even the World Health Organization released a statement on the situation [8].

Artificial intelligence can also speed up the development of drugs and vaccines during pandemics. Thousands of experiments are completed between the drug discovery stage and the clinical trial phase, requiring many person-hours and expensive material resources. Combining AI with pharmaceutical research can accelerate this process, shortening development time, improving efficiency, and reducing cost [9]. For example, the AI start-up BenevolentAI scoured the medical literature for ways to repurpose existing drugs against COVID-19, discovering that baricitinib, a Janus kinase inhibitor currently used to treat rheumatoid arthritis, may have antiviral effects [10].
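As a rough, hedged illustration of literature-based repurposing, the toy script below counts how often candidate drugs co-occur with antiviral-mechanism terms in a corpus of abstracts. BenevolentAI's actual system reasons over a large biomedical knowledge graph; the corpus file, drug list, and term list here are all assumptions.

```python
# Toy sketch of literature-based drug repurposing: count co-mentions of
# candidate drugs with mechanism-related terms. All inputs are hypothetical.
from collections import Counter

DRUGS = {"baricitinib", "ruxolitinib", "imatinib"}          # assumed candidates
MECHANISM_TERMS = {"endocytosis", "kinase", "antiviral"}    # assumed terms

hits = Counter()
with open("abstracts.txt", encoding="utf-8") as f:  # one abstract per line
    for abstract in f:
        words = set(abstract.lower().split())
        if words & MECHANISM_TERMS:                 # mechanism mentioned?
            for drug in DRUGS & words:              # which drugs co-occur?
                hits[drug] += 1

for drug, n in hits.most_common():
    print(f"{drug}: co-mentioned in {n} abstracts")
```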

Potential drawbacks of artificial intelligence

However, potentially significant ethical issues arise concerning the use of AI. Its algorithms can be difficult to interpret and explain, as the model “learns” and develops its own rules on the basis of the data it trains on [11]. For instance, artificial neural networks (ANNs) are a type of AI model that uses interconnected processing units to mimic the neurons of a human brain. The algorithm of an ANN is not specified by the programmer; rather, the machine “learns” the relationships between variables and develops its own decision rules, which are usually not easily readable by humans. This lack of transparency has concerning legal implications. For instance, who would be liable for a missed or incorrect diagnosis? Under current legal standards, it would be difficult for a patient to sue the manufacturer of the AI, because for product liability purposes the “hardware” is the device, not the algorithms that have been “learned” by the software. As a result, unless the error can be proven to result from a hardware defect, liability would likely not fall on the developers of the AI software [12]. Physicians may need to assume this liability unless alternative legal solutions are created. Possibilities include ascribing “personhood” to the AI model, in which case it would need liability insurance of its own. Alternatively, the common enterprise theory of liability could be invoked, in which all groups involved in the creation and use of the AI share legal responsibility [12].
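The opacity argument can be seen directly in code. In the minimal sketch below (synthetic data, assumed architecture), a small ANN learns a hidden nonlinear rule, yet everything it has “learned” is stored only as numeric weight matrices that cannot be read as human rules.

```python
# Minimal sketch: a small feed-forward ANN trained on synthetic data.
# Its learned "rules" exist only as opaque weight matrices.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # four synthetic input variables
y = (X[:, 0] + X[:, 2] ** 2 > 1).astype(int)  # hidden rule the net must learn

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000,
                    random_state=0).fit(X, y)

# The "algorithm" the network developed is just arrays of numbers:
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weights: shape {w.shape}")
print(net.coefs_[0][0])  # one neuron's incoming weights: not readable as rules
```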

Additionally, while it may initially seem that AI could help counter the implicit physician biases concerning race and gender that affect care, AI is only as useful as the data it trains on [13]. If a model is trained on care decisions and outcomes that were influenced by implicit physician biases, it can reproduce those biases in its own decisions, amplifying their impact, and its predictions may not generalize to other patient populations. AI can also exacerbate biases when data are insufficient. Genetic risk prediction with AI has been found to perform unequally across populations because the training data are too sparse to produce accurate results for all of them. Since data often come from sources like electronic health records, models will be skewed toward individuals who have access to health care, who may have different health outcomes than those who do not [14, 15]. Furthermore, machine learning can acquire stereotyped biases from semantic data. One study trained a machine-learning model on a standard body of text from the Internet; the model acquired and perpetuated the racial and gender stereotypes found in the training text, providing evidence that AI systems can learn language well enough to absorb its potentially discriminatory historical and cultural associations [16].
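The kind of embedding-association test used in that study [16] can be sketched in a few lines. The snippet below computes a simple differential-association score, asking whether a target word's vector sits closer to one attribute set than another. The embedding file and word lists are illustrative assumptions; the published test (WEAT) also reports effect sizes and permutation statistics omitted here.

```python
# Minimal sketch of an embedding-association test: does a target word sit
# closer to attribute set A than to set B? Inputs are hypothetical.
import numpy as np

def load_vectors(path):
    """Read a word-per-line text embedding file (e.g., GloVe-style format)."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *nums = line.split()
            vecs[word] = np.array(nums, dtype=float)
    return vecs

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, set_a, set_b, vecs):
    # mean similarity to set A minus mean similarity to set B
    mean_sim = lambda s: np.mean([cosine(vecs[word], vecs[w]) for w in s])
    return mean_sim(set_a) - mean_sim(set_b)

vecs = load_vectors("embeddings.txt")       # assumed pretrained vectors
career, family = ["career", "salary"], ["home", "relatives"]
for name in ["john", "amy"]:                # illustrative target words
    print(name, round(association(name, career, family, vecs), 4))
```

A positive score for one name and a negative score for the other would indicate the embedding has absorbed a stereotyped association from its training text.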

Finally, as with technology in general, AI raises privacy and security concerns. The data used to train and run AI models can always be illegally obtained or tampered with, and illegal spread of this information could have many consequences for patients. Unless protected by law, insurance premiums could rise as a result of newly available information about patients [17]. Furthermore, for AI models to keep improving, data would need to be continuously supplied, and ideally obtained and shared internationally so as to incorporate as diverse a population as possible. Distributing patient information at this scale would require rethinking what constitutes patient confidentiality and privacy, and an informed consent process would likely need to be established [18]. Cybersecurity measures, such as encryption, will need to be incorporated to prevent illegal access to or modification of the data.
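As one small, hedged example of such a measure, the snippet below encrypts a patient record with the `cryptography` package's Fernet recipe before storage or transfer. The record contents are invented, and a real deployment would also need key management, access control, and audit logging, none of which is shown.

```python
# Minimal sketch: symmetric encryption of a patient record before it is
# stored or shared. Key management is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, kept in a secure key vault
cipher = Fernet(key)

record = b'{"patient_id": "A123", "hs_crp": 41.2}'  # hypothetical record
token = cipher.encrypt(record)  # ciphertext is safe to store or transmit
print(token)

# Only holders of the key can recover the original data:
assert cipher.decrypt(token) == record
```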

Conclusions

It is clear that AI has a wide breadth of potential for countering pandemics: it can diagnose disease faster and more cheaply, predict prognoses to help hospitals allocate resources, anticipate outbreaks, and speed up drug development. All of these uses can shorten the time between when a patient is infected and when they can be effectively treated, which in turn decreases morbidity and mortality, particularly when the disease progresses quickly, as seen with COVID-19. However, it is imperative that policy and security measures catch up to this promise. Without means to address legal concerns such as liability and transparency, ethical concerns such as bias, and security concerns such as data protection, AI has the potential to create more problems than it solves. AI will do what it was built to do; it is our responsibility to understand the implications of that. In conclusion, we should not depend on AI to replace clinicians and policy-makers, but rather depend on it to help clinicians and policy-makers improve their decision-making.