During the current outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we have observed a particular phenomenon in medical publications: many observational studies without control groups are being published in major journals. The first series of 1099 patients with coronavirus disease 2019 (COVID-19) is a descriptive study without a control group [1]. Thus, we do not know, for example, whether the frequency of diarrhea in COVID-19 patients (3.8%) is higher or lower than that in patients with influenza. In a pandemic, there is a need for rapid information on how best to treat patients and how best to curtail the spread of the virus; hence, the publication of these types of studies may be justifiable.

Already in medical school, we learn that randomized clinical trials (RCTs) sit at the top of the evidence hierarchy, because they eliminate many sources of bias, such as selection bias, and balance confounders that are otherwise difficult to adjust for. The first RCT in clinical medicine was a 1948 study of 55 patients who received streptomycin for the treatment of tuberculosis, compared with 52 control patients [2]. However, RCTs require considerable advance planning and approval processes and are costly. One systematic review of RCT costs found, among the relatively scant published information, that the median cost per recruited patient was US$409 and that the overall cost per RCT ranged from US$0.2 million to US$612 million [3]. Performing RCTs is thus largely reserved for the pharmaceutical industry and for academics with strong funding. This poses a problem during the COVID-19 pandemic and will likely continue to do so afterwards: the inevitable economic recession may reduce research funding. It is therefore worth reconsidering the role of RCTs as the primary study design of choice.

Equipoise

The decision to perform an RCT must weigh the appropriate use of research resources, the relevance of the question being asked, and adherence to sound ethical principles. Doctors and researchers simply cannot test every idea for a possible treatment that comes to mind by way of an RCT. A principle proposed several decades ago, called equipoise, should be considered by doctors, researchers and ethics committees [4]. This principle requires that a clinical trial be performed only if there is genuine uncertainty about which treatment is beneficial. When the medical community can already predict that a treatment is effective, when its biological plausibility is clear and beyond doubt, or, conversely, when plausibility is entirely absent (for example, the use of Reiki to treat COVID-19), an RCT should normally not be undertaken. While these criteria appear self-explanatory, we are aware of many trials that lacked equipoise yet were published in major medical journals. One example is a trial that compared chlorhexidine–alcohol with povidone-iodine alone for surgical skin antisepsis, published in the clinical journal with the highest impact factor in the world [5]. Infection prevention practitioners could have predicted the result of this RCT at the outset: alcohols have been known for decades to be more potent than aqueous povidone-iodine, and the comparison was unfair in that it pitted two antiseptics against one.

In the search for a treatment for COVID-19, the principle of equipoise should remain applicable to clinical trials, even though the situation is often not clear-cut. One example is the proposal to treat COVID-19 patients with plasma from recovered patients. Based on immunological reasoning and on experience with some other viral diseases, this treatment is clearly plausible. However, the comparator, the current standard of care, is itself the best treatment available, and convalescent plasma carries risks of adverse events, such as transfusion-related acute lung injury and antibody-dependent enhancement of infection.

RCTs are generally slow to provide answers. A search of the ClinicalTrials.gov database performed on April 28, 2020, showed that among 311,349 trials in adults registered up to December 31, 2018, only 39,601 (13%) had published their results; furthermore, only half of the RCTs achieved their recruitment target, and only half of these were completed on time.

Clinical trials ethics during outbreaks

During the Ebola virus outbreak in 2014, an ethics advisory panel to the World Health Organization (WHO) concluded that it would be acceptable to offer unregistered interventions that had shown promising results in the laboratory and in animal models, but had not yet been evaluated for safety and efficacy in humans, provided that certain conditions were met (https://www.who.int/csr/resources/publications/ebola/ethical-considerations/en/). The essence of this position has been applied in several studies of hydroxychloroquine as a treatment for COVID-19. Some early observations indicated that this agent might be effective, but some were later refuted by the scientific community [6]. To us, the unfortunate aspect of this example is not that different studies contradicted one another, which is inherent to how science works, but that those observational studies were not conducted sufficiently well: they included unadjusted analyses, lacked proper control groups, and drew cases and controls from different patient populations [6].
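As a purely illustrative sketch, not drawn from the studies cited above, the kind of adjustment that was missing can be expressed as a multivariable logistic regression in which the exposure effect is estimated conditional on a measured confounder. The data below are simulated and all variable names (exposure, age, outcome) are hypothetical; Python with numpy, pandas and statsmodels is assumed.

# Minimal illustrative sketch with simulated (hypothetical) data:
# crude versus confounder-adjusted exposure-outcome estimates from logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(60, 12, n)                                       # measured confounder
exposure = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 10)))    # exposure depends on age
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.3 * exposure))))
df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "age": age})

crude = smf.logit("outcome ~ exposure", data=df).fit(disp=False)
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)
print("crude odds ratio:   ", round(float(np.exp(crude.params["exposure"])), 2))
print("adjusted odds ratio:", round(float(np.exp(adjusted.params["exposure"])), 2))

In this toy setting the crude estimate is inflated by the confounder, while conditioning on it moves the estimate back toward the simulated truth; an analysis reported without such adjustment, or without a comparable control group, cannot distinguish the two.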

Alternative to RCTs

Observational studies, whether cohort or case–control, can indeed answer clinical questions when they are performed well. Guidelines for reporting observational studies are available, such as the STROBE statement (https://www.strobe-statement.org/index.php?id=strobe-home), which was developed by an international, collaborative initiative of epidemiologists, methodologists, statisticians, researchers and journal editors involved in the conduct and dissemination of observational studies.

Observational studies can answer more, and more diverse, questions than RCTs, including, for example, outcomes observed over varying follow-up durations as well as side effects. RCTs are less suitable for answering such questions, because their follow-up duration is often fixed, it is more difficult to recruit large study populations, and they are usually powered to detect the primary outcome rather than side effects. Unfortunately, the flexibility to 'adjust' the study size, the duration of follow-up and other choices, such as which outcomes to include, is an important threat to observational study designs; it becomes the task of editors, peer reviewers and the research community to detect such problems. Systematic reviews that analyze and summarize observational studies can rate the quality of those studies and detect publication bias, such as when only studies with 'positive' results tend to get published. The advent of journals that focus on publishing methodologically sound studies regardless of whether the results are 'positive' or 'negative' (although many of these, unfortunately, charge publication fees) also helps to address these problems.

Observational studies are also usually much cheaper and easier to plan than RCTs, although researchers do need to maintain comprehensive databases from which the study populations, including cases and controls, are derived. Publication bias may underlie the proposition that observational studies often exaggerate the measured effect size. However, a Cochrane review showed that any lack of agreement between the results of RCTs and observational studies was not due to the study designs per se, and that there were no significant differences in effect size between observational studies and RCTs [7].
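As a purely illustrative sketch, not drawn from the cited Cochrane review [7], the funnel plot is one common tool with which systematic reviews screen for publication bias: when small studies with unimpressive results are less likely to be published, the plot of effect size against standard error becomes asymmetric. The studies below are simulated and all parameters are hypothetical; Python with numpy and matplotlib is assumed.

# Minimal illustrative sketch with simulated (hypothetical) studies:
# a funnel plot of effect size against standard error to screen for publication bias.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_studies = 80
true_log_or = 0.2
se = rng.uniform(0.05, 0.5, n_studies)                     # study precision varies with size
log_or = rng.normal(true_log_or, se)                       # observed study effects
# Simulate selective publication: small 'negative' studies are less likely to appear.
published = (log_or / se > 1.0) | (rng.random(n_studies) < 0.4)

plt.scatter(log_or[published], se[published])
plt.axvline(true_log_or, linestyle="--")
plt.gca().invert_yaxis()                                   # most precise studies at the top
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.title("Simulated funnel plot: asymmetry suggests publication bias")
plt.show()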

Conclusions

In conclusion, the current COVID-19 pandemic reminds us that RCTs should be conducted with the question of equipoise in mind, and that observational studies, when performed and analyzed well, can give valid answers to clinical questions in the absence of RCTs. Under the right conditions, observational studies are not much inferior to RCTs.