The COVID-19 pandemic has changed the world in profound ways and led to significant shifts in our social, political, economic, and scientific priorities. Among these has been a major shift in both public and private research funding towards COVID-19-related projects, with the aim of mapping the pandemic and its effects and developing vaccines and therapies (London and Kimmelman 2020). In this context, the need for speed is taken for granted, and the scientific process has adapted to accommodate this. Rather than going through the usual research–dissemination–translation pathway, existing drugs are being “repurposed,” usual preclinical testing regimes are being bypassed or shortened, study sizes are being reduced, and time-consuming randomized controlled trials are being replaced or supplemented with observational studies. The results of research are already being reported mere months after the epidemic’s onset—often prior to formal peer review or after so-called “rapid” review. And regulators are “fast tracking” their review of potentially promising drugs and vaccines (Park 2020; European Medicines Agency 2020; Komesaroff, Kerridge and Glibert 2020; London and Kimmelman 2020; Callaway 2020; Markar et al. 2020).

On the surface, attempts to speed up the research enterprise appear to be a good thing. But, like “warp speed” (a science fiction notion precisely because matter cannot even reach the speed of light, let alone exceed it, without acquiring infinite mass), there might be a maximum pace at which science can “travel” before it distorts and its harms begin to consistently outweigh its benefits.

The potential for this to happen is illustrated perfectly by the recent high-profile retraction of an article from the Lancet. The article, which was published in May 2020, cast doubt on the effectiveness and safety of hydroxychloroquine and chloroquine for the treatment of COVID-19 (Mehra et al. 2020c). The study reported in the article drew on a registry that collected cloud-based healthcare data on 96,032 patients from 671 hospitals across six continents. The registry is owned by Surgisphere Corporation, a U.S.-based company founded by Dr. Sapan Desai (Surgisphere Corporation 2020). The study (henceforth the Surgisphere study) concluded that there was no evidence of benefit associated with the use of hydroxychloroquine or chloroquine but that there was evidence of increased risk of ventricular arrhythmia and in-hospital death. The authors called for urgent randomized clinical trials (RCTs) and suggested that these drugs not be used outside this research context (Mehra et al. 2020c).

A few days later, the World Health Organization (WHO) announced the cessation of all hydroxychloroquine arms of its COVID-19 trials in seventeen countries (World Health Organization 2020). Following the publication of the Surgisphere study, researchers and clinicians around the globe expressed scepticism about the integrity and validity of the dataset, statistical analysis, and conclusions, and on May 28, 146 of them co-signed a letter to the Lancet calling for the article’s retraction (Watson et al. 2020). An “expression of concern” was subsequently published in the Lancet on June 3 (The Lancet 2020), and the article was retracted by three of the authors (excluding Desai) on June 5 (Mehra, Ruschitzka, and Patel 2020) on the basis that they were unable to complete an independent audit of the data underpinning their analysis and as a result concluded that they “can no longer vouch for the veracity of the primary data sources” (Mehra, Ruschitzka, and Patel 2020). A little more than an hour later, an article written by three of the Lancet authors and two others was retracted from the New England Journal of Medicine. This article, “Cardiovascular Disease, Drug Therapy, and Mortality in Covid-19” (Mehra et al. 2020b), was retracted with the explanation: “Because all the authors were not granted access to the raw data and the raw data could not be made available to a third-party auditor, we are unable to validate the primary data sources underlying our article” (Mehra et al. 2020a).

This event has, unsurprisingly, been the source of considerable scientific and media interest, with concerted efforts being made to unravel the specific details of the case. In what follows, we offer a broad interpretation of the Surgisphere case by conceptualizing it as one in which biomedical innovation has been “sped up” excessively.

Should Biomedical Research, Publication, and Translation Be Sped Up?

A common trope about biomedical innovation is that it is too slow, often taking decades for an idea to translate into a technology and for this technology to then be tested, registered, funded, and taken into practice. A number of explanations are offered for this slowness, including resource limitations, cultural barriers, and the need for reflection and critique inherent in the scientific method itself. Research is also slowed by the governance processes that curtail certain behaviours and demand that criteria be met before research projects are funded and before resulting technologies are registered, funded, and made accessible to patients (Morris, Wooding, and Grant 2011).

These perceived barriers become particularly salient during emergencies, and research, dissemination, and translation are often sped up in response to political and public concern. On the surface, this seems like a perfectly rational response to desperate circumstances, but events such as the Surgisphere study retraction prompt the question of what, if anything, might be lost when biomedical innovation is sped up. In what follows, we highlight three failures in the Surgisphere case that were due, at least in part, to the speed with which the research was conducted and reviewed. These are 1) failures of methodological rigour, 2) failures of journal review, and 3) failure to manage competing interests.

Failures of Methodological Rigour

When science speeds up, two things tend to happen to its methodology: first, established but less rigorous methods are used (e.g., surrogate outcomes in clinical studies, or observational studies instead of interventional trials (London and Kimmelman 2020)); and second, emerging or even entirely novel methodologies are adopted.

The Surgisphere study was an observational study that relied on an aggregation of the deidentified electronic health records (EHRs) of customers of QuartzClinical, Surgisphere’s machine learning programme and data analytics platform. Surgisphere directly integrates with the EHRs and includes their data in its queryable registry/database of real-world, real-time patient encounters (Surgisphere 2020). The study compared patients who received one of four treatment regimens with control patients who received none of them. The main outcomes of interest were in-hospital mortality and the occurrence of de-novo ventricular arrhythmias (non-sustained or sustained ventricular tachycardia or ventricular fibrillation). Regression methods and propensity matching were used to minimize confounding.
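To make this kind of design concrete, the following is a minimal, hypothetical sketch of one-to-one propensity score matching on simulated data. It is not the Surgisphere analysis (which has never been released); the variable names, simulated cohort, and simple nearest-neighbour matching are illustrative assumptions only, and real pharmacoepidemiological analyses involve far more careful covariate selection, matching diagnostics, and sensitivity checks.

```python
# Illustrative sketch only: 1:1 propensity score matching on simulated data.
# This is NOT the Surgisphere analysis; all variables and values are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "male": rng.integers(0, 2, n),
    "comorbidity_score": rng.poisson(2, n),
})

# Simulated (confounded) treatment assignment and outcome.
logit = -3 + 0.03 * df["age"] + 0.4 * df["comorbidity_score"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))
df["in_hospital_death"] = rng.random(n) < 0.05 + 0.03 * df["comorbidity_score"]

covariates = ["age", "male", "comorbidity_score"]

# 1. Estimate each patient's propensity to receive treatment from measured covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["propensity"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated patient to the untreated patient with the closest propensity score.
treated = df[df["treated"]]
control = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
_, idx = nn.kneighbors(treated[["propensity"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare outcomes in the matched cohorts. Matching balances only the
#    covariates that were actually measured; unmeasured confounding remains.
print("Treated mortality:        ", treated["in_hospital_death"].mean())
print("Matched control mortality:", matched_control["in_hospital_death"].mean())
```

Even in this toy example, the matched comparison adjusts only for the covariates that happen to be recorded, which is precisely the vulnerability discussed below.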

While these kinds of observational research methods are well-established in pharmacoepidemiology, there are a number of factors that render them vulnerable to error and misinterpretation. First, while there are techniques that can be used to reduce bias and confounding, these cannot be completely eliminated. Indeed, the larger the dataset, the easier it is to draw unjustified conclusions (based on spurious correlations) about the causal nature of associations observed (Lipworth et al. 2017; Lipworth 2019). Second, the data used in this kind of research are often incomplete or slanted towards particular patient groups, unstandardized, unstructured, and not comparable among patients or across time (Lipworth et al. 2017; Lipworth 2019). This is particularly true when data are collected for other purposes (e.g., administration, clinical care, or even other research studies with differently defined outcome measures) (Lipworth et al. 2017; Lipworth 2019). Add to this the other limitations of data analysis using artificial intelligence, such as flaws in (often non-transparent) algorithms (Chin-Yee and Upshur 2019), and it becomes clear why even the best “big data” research needs to be undertaken with the utmost care.

This requires not only extensive methodological expertise (which may or may not have been present in the Surgisphere case) but also caution on the part of researchers, which appears to have been lacking, given that the lead researcher only discovered serious flaws in the data after the article was published and concerns were raised by other researchers (Mehra et al. 2020a). There is also a need for rigorous methodological review by peers. This usually occurs when applications are made for research funding or ethics approval, but it is not clear that the Surgisphere research was preceded by any kind of formal methodological review. Somewhat ironically, the Surgisphere authors appeared to be alert to the limitations of their methodology because they called for follow-up randomized trials. Setting aside whether such trials would even be ethical (a point perhaps lost in the furore), this gives the impression that they themselves did not believe that their methods were sufficiently robust.

Failure of Journal Review

Even if substandard research is conducted, there is the expectation that the review processes conducted by journal editors and the peer reviewers they commission (henceforth journal reviewers) will identify flaws in research questions, methods, data sources, analysis and interpretation, and either prevent poor quality research from being published or ensure that it is improved before publication occurs. If this fails, then journals can retract articles, thus removing them from the formal scientific record. Retraction is not, however, ideal because retracted articles, as was the case with both Surgisphere papers, often have considerable impacts prior to retraction and never fully “disappear” (Teixeira Da Silva and Bornemann-Cimenti 2017). Prepublication review, therefore, plays a crucial role in scientific quality control.

Unfortunately, the quality of peer review can be negatively affected by resource limitations (because reviewers are usually unpaid and therefore unable to devote as much time as might be needed); by lack of expertise (because reviewers who are experts are often excluded due to competing interests); by the difficulty reviewers have in detecting flaws (that are either deliberately concealed, built into the design of research, or impossible to detect without access to complete raw data and time to reanalyse it); and by reviewers’ (often unconscious) biases that lead them to favour or disfavour research on the basis of features other than its quality (Lipworth and Kerridge 2011; Lock 1985; Godlee and Jefferson 2003). When biomedical innovation is sped up, already fragile journal review processes are placed under enormous pressure: it is more difficult to find external reviewers who are both qualified and available; there is less time for these reviewers to conduct their appraisals; and editors have less time to appraise the reviews. This is particularly the case when papers are “prepublished” and made available to the public before even expedited review processes have been finalized (Kaiser 2017).

While the editors and reviewers involved in the Surgisphere case should not automatically be criticized for missing deliberate misconduct (if this occurred), they do appear to have missed many methodological “red flags” that were identified in subsequent investigations. For example, the study claimed to have screened significantly more patients and matched controls than would have been possible in the time frame available (particularly given the number of sophisticated data-sharing partnerships and IT system integrations that would have needed to be established); there was no ethics review; there was scant detail available about data sources; there were unusually small reported variances in patient baseline characteristics, interventions, and outcomes (especially unusual given the broad demographic differences expected across populations on six continents); and cases were included in which dosing levels of the treatment regimens were above established recommended therapeutic doses (Watson et al. 2020).

Of course, the authors also have some responsibility here—it is simply unacceptable to review one’s data sources only after an article has been published and criticized, and to raise concerns only in a retraction note (Mehra, Ruschitzka, and Patel 2020). Similarly, questions have to be asked about the due diligence underpinning the WHO decision to suspend its trials on the basis of the study (Davey, Kirchgaessner, and Bosely 2020). But this simply underscores the points that anyone reviewing research in a hurry is susceptible to major oversights and that the effects of speed can compound as checks and balances are serially undermined.

Failure to Manage Competing Interests

So far, we have portrayed the scientific process as a depersonalized one in which ideas are (more or less effectively) generated, tested, and either discarded or taken up by people who are concerned only about generating high quality knowledge and promoting health and well-being. The reality, however, is that all those involved in biomedical research, publication, and translation (including researchers, journal editors, and clinicians) have multiple competing obligations and are embedded in a complex web of financial and non-financial interests, such as the desire to earn money, create product opportunities, pursue intellectual projects, and achieve professional recognition and career advancement. All of these stakeholders are also susceptible to cognitive biases that can lead them to overvalue innovative technologies (“optimism bias”) and to be swayed by industry marketing and by pressure from patients, the public, and governments to address urgent unmet needs (Chan 2012; Taylor 2013).

These competing interests can sometimes be benign and easily managed, but they can also introduce biases that distort research, publication, policymaking, and practice and, at times, even motivate outright fraud. For these reasons, there are many checks and balances in place to manage competing interests in biomedical innovation. These include both the general scientific and journal review processes described above and other processes that are designed to directly address competing interests such as preregistration of research studies (to avoid reporting/publication biases), disclosure of competing interests, and independent auditing processes (Lipworth 2019).

When biomedical innovation is sped up, these checks and balances can be undermined and competing interests are far less likely to be picked up or navigated in nuanced and sophisticated ways. To make matters worse, science tends to be sped up in precisely those circumstances in which stakes are highest and competing interests are the most powerful. As described above, when health emergencies arise, there is often “big money” at stake and strong political pressures (in some cases, focused on particular interventions, as evident in Donald Trump’s personal endorsement of hydroxychloroquine (Mahase 2020)). As a result, those who are the first to make, disseminate, and use new discoveries are likely to reap considerable fame and fortune. Another compounding factor is that it is easier for undeveloped methods (which, as discussed above, are often used in emergency situations) to be manipulated without detection for personal or professional gain (Lipworth 2019).

The precise nature and effects of competing interests in the Surgisphere study remain to be determined, but there is emerging evidence that there might have been significant commercial interests at play, stemming primarily from the fact that one of the study’s lead co-authors (Dr. Sapan Desai) is founder and CEO of Surgisphere. While such ties to any company can be problematic for a researcher, journalists have subsequently uncovered findings suggesting that Desai and his companies have behaved with questionable integrity both in other research and in their cooperation with investigations following the Lancet publication (Davey, Kirchgaessner, and Bosely 2020; Mehra, Ruschitzka, and Patel 2020; Ledford and Van Noorden 2020; Davey 2020). It is important for these emerging findings to be critically interrogated and for conceptual and moral distinctions to be made between commercial interests, competing interests, misconduct, and corruption. It seems very clear, however, that competing interests need to be investigated in the Surgisphere case.

Governing Science at Speed

Modern research governance is part of a complex bureaucratic-administrative state, which, in its ideal form, is devoted to rational and efficient decision-making to maximize the benefit of the research enterprise and minimize the risks. Every bureaucracy develops inefficiencies over time, as systems will always gravitate towards decision-making that benefits the bureaucracy itself rather than its aims. The scientific process can similarly lose its way: while it exists largely to “slow down” thinking and ensure that all ideas are thoroughly critiqued, it can also become caught up in “checks and balances” that are out of line with both epistemic realities and the community’s appetite for risk. There will, therefore, always be ways to improve efficiencies in both science and its governance. But there is also a maximum speed that bureaucracies and knowledge systems may reach before they begin to “warp” and fail in their primary functions.

This means that, while there are excellent reasons for conducting, reviewing, and translating research rapidly during epidemics and for exploring creative methodologies and technologies (e.g., using genetically engineered fragments of a virus’s genetic code rather than an inactivated virus for vaccine development (Komesaroff, Kerridge, and Glibert 2020)), there are also serious risks associated with doing so. Indeed, the potential damage caused by not ensuring effective governance of research during epidemics may be immense. Harmful drugs and devices might go on to injure millions of people, useful drugs and devices might be abandoned, the public’s faith in science and medicine might be undermined, and irrational and ineffective healthcare might proliferate.

The question then becomes: how can we avoid “pandemic research exceptionalism” (London and Kimmelman 2020) with respect to quality and integrity while still facilitating rapid research? Some possibilities—which simultaneously address weaknesses in technical or methodological rigour, lack of peer oversight, and unmanaged conflict of interest—include

  • setting up independent panels to review research before it commences, during its conduct, and prior to publication (including rejecting research that would not meet basic quality standards in non-emergency times or that is not of the utmost priority during an emergency, establishing standards for the review of emerging technologies or methodologies, and ensuring that competing interests are identified and managed);

  • establishing or repurposing dedicated research facilities that specialize in and can coordinate rapid research and its governance (and providing the resources needed for all bona fide researchers to be able to access these facilities);

  • establishing processes by which data both for and from rapid research can be widely shared and collectively critiqued; and

  • increasing the resources available to the agencies charged with rapidly making sense of the results of research and with making decisions under pressure about registration, funding, and clinical practice guidelines so that they can put in place the necessary scientific review and conflict of interest management processes.

There might also need to be stronger penalties (beyond the “mere” shame of retraction) for scientists who deliberately exploit the loopholes that emerge when science is hurried and perhaps even for those who fail to take adequate care.

With such mechanisms in place (and adequately resourced), it should be quite possible to speed up science while remaining attentive to both scientific quality and integrity. At the same time, we need to have a sensible discussion to determine how much quality people are willing to sacrifice in the name of speed. This is a difficult conversation, but one that must be undertaken. After all, this is not the first time that science has been sped up during pandemics with problematic effects, and we will undoubtedly need to speed science up again many times in the future.