Introduction

The 2020 Covid-19 pandemic has forced science advisory institutions and processes into an unusually prominent role, and placed their decisions under intense public, political and media scrutiny. In the UK, a key controversy has centred on whether the government implemented its lockdown policy too late, resulting in thousands of unnecessary deaths, and on the role of advice provided by the government’s Scientific Advisory Group for Emergencies (SAGE), a group of experts drawn from government, academia and industry (SAGE, 2020a). In this article, I highlight how uncertainties in the virus doubling rate—an important input into lockdown decision-making—were downplayed in both the advice provided by SAGE and public comments by SAGE participants. Previous research has shown how knowledge producers perceive high uncertainty, whereas knowledge users perceive less uncertainty (MacKenzie, 1990), an issue particularly problematic for government science advice where differentiating between knowledge production and use may be challenging (Jasanoff, 2003). In the case of UK Covid-19 advice, the conflation of knowledge production and knowledge use has accompanied experts downplaying the uncertainty within their own models as they provided advice to decision makers. The consideration of policy options thus became anchored in a consensus that Covid-19 cases were doubling every five days, as opposed to the results of scientific modelling that showed doubling times as short as three days were also credible. This failure to consider the full range of credible values for doubling rates projected an unwarranted sense of certainty to decision-makers and the public regarding the spread of the virus, and potentially helped to delay the implementation of a lockdown policy.

Knowledge production, knowledge use and uncertainty perception

In June 2020, as the UK started to wind down its lockdown measures in response to Covid-19, some prominent SAGE participants reflected on the timing of the policy’s introduction. Professor Neil Ferguson stated that enforcing lockdown a week earlier could have halved the death rate (Stewart and Sample, 2020), while Professor John Edmunds expressed regret that the delay “cost a lot of lives” (UK Lockdown Delay Cost a Lot of Lives-Scientist, 2020), a position supported by fellow SAGE participant and Royal Society President Sir Venki Ramakrishnan (Today, 2020). Both Ferguson and Edmunds pinpointed a paucity of data as the key reason for the delay to lockdown (Johns, 2020; UK Lockdown Delay Cost a Lot of Lives-Scientist, 2020). However, there is more to the lockdown controversy than the important, but unsurprising, challenges of data availability and validation (Marcus and Oransky, 2020; Rutter et al., 2020). The more fundamental issue for science advisory systems is how to deal with the multiple uncertainties that inevitably arise from poor data, a topic of enduring interest in the science and technology studies (STS) and science advice literature (Cassidy, 2019; Funtowicz and Ravetz, 1993; Landström et al., 2015; Pearce et al., 2018; Raman, 2015; SAPEA, 2019; Stilgoe et al., 2006; Stirling, 2010; Wesselink and Hoppe, 2010). Here, I focus on one aspect of science advice’s ‘uncertainty monster’ (van der Sluijs, 2005): the tensions between knowledge production and knowledge use.

One of the most influential studies of scientific uncertainty was provided three decades ago by Donald MacKenzie (1990) in Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance. MacKenzie outlined three broad positions which formed a ‘certainty trough’ related to the perception of uncertainty in knowledge production and use. First, those directly involved in knowledge production, such as scientific modellers, are keenly aware of the uncertainties in that knowledge. Second, users of said knowledge are likely to perceive less uncertainty, believing, as MacKenzie puts it, ‘what the brochures tell them’. Third, those alienated from knowledge production and use, or committed to a different technology, will have the highest perception of uncertainty. When represented schematically, this relationship resembles a trough, as shown in Fig. 1 (MacKenzie, 1990, p. 372).

This concept has become influential in STS, with the implication that different actors sit at different points along the trough depending on whether they are knowledge producers or users. Sheila Jasanoff proposes that expert reviewers of government science fall within the intermediate zone of reduced scepticism, while also noting the challenge of achieving both familiarity with the subject matter and distance from the scientists involved (2006, p. 34). In a study of climate modellers, Myanna Lahsen (2005) highlights how the boundary between model developer and model user can become blurred, leading to a situation where knowledge producers become ‘seduced’ by their own models. Lahsen uses this insight to argue for a differently shaped ‘trough’, with uncertainty perception actually at its lowest amongst those directly involved in knowledge production (2005, p. 918). However, I argue that in the case of UK Covid-19 advice, the conflation of knowledge production and use does not imply a differently shaped relationship. Instead, key experts occupy different positions on MacKenzie’s trough according to their priorities at any given moment, rather than remaining fixed in one place. These conflated roles point both to the challenge of navigating the ‘hybridity’ that forms an inherent part of science advice (Palmer et al., 2019), and to the importance of effective review procedures being built into science advisory systems (Jasanoff, 2003). In the next section, I provide a brief overview of Covid-19 advice in the UK, and raise questions regarding the membership of advisory groups.

Diversity and scrutiny in the UK science advice system

SAGE has overall responsibility for coordinating and peer reviewing the scientific advice that informs decision-making (Civil Contingencies Secretariat, 2012, p. 2), and is part of a science advice system that takes in multiple sub-groups (see Table 1).

Table 1 Descriptions of SAGE and related sub-groups (Government Office for Science, 2020; HM Government, 2020b).

At the time of writing, the UK Government’s information on SAGE does not detail criteria for selecting the experts who participate within the system, other than that they are drawn from healthcare and academia, and that sub-groups “consider the scientific evidence and feed in their consensus conclusions to SAGE” (HM Government, 2020a). The processes behind both membership and consensus therefore remain somewhat opaque. On membership, the guidance for SAGE states that the experts appointed should be the most appropriate rather than the most accessible, but does not specify how that is to be determined (Civil Contingencies Secretariat, 2012, p. 19). On consensus, no mechanism is discussed for resolving expert disagreement, other than to note that reaching consensus may not always be possible and that a statement should be made on “the extent and sources of uncertainty” (Civil Contingencies Secretariat, 2012, p. 47).

This lack of clarity was exacerbated by an early decision not to disclose the identities of expert advisers, in order to protect them from “lobbying and other forms of unwanted influence” (Vallance, 2020). This caused controversy over a perceived lack of transparency and accountability, leading to pressure from Members of Parliament and the eventual publication of a leaked participant list by The Guardian on April 24th (Carrell et al., 2020; Mason, 2020a). On May 4th, the names of participants within all advisory groups were published by government (Government Office for Science, 2020; Mason, 2020b). Analysis of this list shows a total of 17 experts participating in three or four advisory groups (see Table 2).

Table 2 List of experts participating in three or more groups within the UK science advice system, as of May 4th, 2020.

That some experts appear on more than one committee is perhaps unsurprising, not least because Cabinet Office guidance recommends that at least one member of each sub-group should attend SAGE in order to enable two-way communication (Civil Contingencies Secretariat, 2012, p. 15). However, the guidance also notes that SAGE “should not overly rely on specific experts” (Civil Contingencies Secretariat, 2012, p. 19; original emphasis). Such a principle is important for the consideration of a full range of expert views in the process of consensus-formation, as well as for ensuring that the available evidence can be scrutinised by experts beyond those directly involved in its production (Freedman, 2020b; Jasanoff, 2003). That Professor Ian Boyd joined SAGE in April 2020 to provide an “independent challenge function” suggests an awareness within the science advice system of a need for greater diversity and scrutiny (Freedman, 2020a). In the next section, I highlight the importance of these issues through the representation of uncertainty within advice on the Covid-19 doubling rate.

The doubling rate: from multiple uncertainties to a single number

Two leading groups of epidemiological modellers, from Imperial College London and the London School of Hygiene and Tropical Medicine (LSHTM), were closely involved in the early science advice provided to the UK government for Covid-19. As shown in Table 2, LSHTM’s Edmunds is a member of four groups within the science advice system, including SAGE (Government Office for Science, 2020). While LSHTM’s work has not been as high-profile as that from Ferguson’s Imperial group, it features regularly in the published minutes of SAGE, and its scientists are prominent media contributors.

While the modelling reports provided to SAGE during February and March are yet to be published, a version of the LSHTM group’s work has been peer reviewed and published in The Lancet Public Health (LPH) (Davies et al., 2020). In that article, the group estimates R0 (the number of people infected by a single case, in a situation where everyone is susceptible) at 2.7, but states that it could also credibly be as high as 3.9. This is notable for two reasons. First, these values for R0 are considerably higher than those in the Imperial College report that was reportedly pivotal in the government’s move to more stringent interventions and, ultimately, lockdown policies (Landler and Castle, 2020). Imperial modelled R0 as between 2.0 and 2.6, with a central assumption of 2.4 (Ferguson et al., 2020). In other words, the highest value modelled by the Imperial group was below the central estimate of LSHTM. Second, despite the wide range of possible R0 values identified in the LPH article, and the divergence from the estimates provided by Imperial, public pronouncements by science advisers focused on a single number for the doubling rate of the virus.

Providing a single number for the doubling rate relies not only on having a firm grasp of a figure for R0, but also on the time elapsed between infectiousness in a primary case and a secondary case. This was perhaps an impossible task in the early stages of the virus, when a paucity of data meant societies were “living in a moment of ground-zero empiricism …[with] basic facts yet to be ascertained” (Daston, 2020). Some modelling experts from outside the epidemiological community have countered such perspectives, arguing that government science advisers should have paid more attention to early case data from the UK and other European countries (Annan and Hargreaves, 2020). While such efforts to improve the quality of model simulations are clearly important, there is a broader point that requires attention: the multiple uncertainties inherent in trying to model the virus’s spread should be reflected in the scientific advice given to decision makers. Based on the evidence published in the scientific literature and by government, this does not seem to have happened in the UK case.
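To make concrete how uncertainty in R0 feeds through to the doubling rate, consider a back-of-envelope calculation. Under simple exponential growth with a fixed generation interval, the doubling time is ln(2)·Tg/ln(R0). This is a minimal sketch, not the method used by either modelling group (their models work with full generation-interval distributions), and the 6.5-day generation interval below is an illustrative assumption:

```python
import math

def doubling_time(r0: float, generation_interval: float) -> float:
    """Doubling time in days under simple exponential growth.

    Assumes a fixed generation interval T_g, giving an epidemic growth
    rate r = ln(R0) / T_g and a doubling time of ln(2) / r. Real
    epidemiological models use generation-interval distributions, so
    this is a rough illustration only.
    """
    return math.log(2) * generation_interval / math.log(r0)

T_G = 6.5  # assumed generation interval in days (illustrative)

# 2.4: Imperial's central assumption; 2.7 and 3.9: LSHTM's central
# estimate and credible upper value, as reported in the LPH article.
for r0 in (2.4, 2.7, 3.9):
    print(f"R0 = {r0}: cases double every {doubling_time(r0, T_G):.1f} days")

# R0 = 2.4: cases double every 5.1 days
# R0 = 2.7: cases double every 4.5 days
# R0 = 3.9: cases double every 3.3 days
```

Even on this crude arithmetic, the range of credible R0 values spans doubling times from roughly five days down to close to three: precisely the spread that a single five-day figure conceals.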

Fig. 1

The ‘certainty trough’ shows a non-linear relationship between proximity to knowledge and perception of uncertainty. Adapted from MacKenzie (1990, p. 372; 1998, p. 325).

Defending the number

On March 13th, Edmunds appeared in a feisty Channel 4 News interview with Tomas Pueyo, a commentator with no apparent scientific credentials who had nonetheless written the viral Medium story “Why You Must Act Now”, which recommended immediate implementation of lockdown in order to minimise death rates (Pearce, 2020; Pueyo, 2020). At the end of the interview, Pueyo made the case, based on the rising number of cases in the UK, that cases were more than doubling every three days. Edmunds disputed this, saying (de Pear and Cacace, 2020, 23m17s):

“it’s true that if you look crudely at the numbers, then the cases are doubling every 2.5 days, but that’s because they are doing more contact tracing. The actual underlying rate of doubling is actually about every five days”.

The claim here is that because the UK was conducting more contact tracing and testing, more cases were being detected, making it appear that the virus was spreading more quickly than it actually was. Investigating this claim is beyond the scope of this article, but what is clear is that the influence of testing rates on doubling rate calculations introduces a further, potentially important, source of uncertainty in estimating the rate at which Covid-19 cases were increasing. A responsible way of dealing with this would be to emphasise the range of potential values involved, which, as described in the LPH article, could credibly have been closer to three days than five. Three days after the Channel 4 News programme, on March 16th, the minutes of the SAGE meeting echoed Edmunds’ claim, reporting that UK cases “may be doubling every 5–6 days” (SAGE, 2020c, p. 2). Later that day, the Government Chief Scientific Adviser, Sir Patrick Vallance, also stated that he expected the epidemic to double “every five days or so” (BBC News, 2020, 27m47s). A week later, on March 23rd, the consensus on five days was broken, with SAGE meeting minutes reporting a doubling time of 3–4 days (SAGE, 2020b, p. 2).
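Edmunds’ testing explanation can be given a rough plausibility check. If detected cases are the product of true infections and a case-ascertainment rate, then exponential growth rates add, and detected cases appear to double faster than true infections whenever ascertainment is improving. A minimal sketch under that assumption follows; the numbers are illustrative, not estimates from any modelling group:

```python
def observed_doubling(true_doubling: float, ascertainment_doubling: float) -> float:
    """Apparent doubling time of detected cases when detection is improving.

    Models detected(t) = infections(t) * ascertainment(t), with both
    factors growing exponentially. Growth rates add, so in terms of
    doubling times: 1/T_observed = 1/T_true + 1/T_ascertainment.
    Illustrative only: assumes sustained exponential growth in both.
    """
    return 1 / (1 / true_doubling + 1 / ascertainment_doubling)

# If the true doubling time were five days, how fast would ascertainment
# have to improve to produce the crude 2.5-day doubling in the case data?
for t_asc in (20, 10, 5):
    obs = observed_doubling(5.0, t_asc)
    print(f"ascertainment doubling every {t_asc} days -> "
          f"cases appear to double every {obs:.1f} days")

# ascertainment doubling every 20 days -> cases appear to double every 4.0 days
# ascertainment doubling every 10 days -> cases appear to double every 3.3 days
# ascertainment doubling every 5 days -> cases appear to double every 2.5 days
```

On these assumptions, reconciling a true five-day doubling with the crude 2.5-day figure requires detection rates to have been doubling as fast as the epidemic itself. The sketch does not adjudicate Edmunds’ claim, but it shows that the claim rests on a strong auxiliary assumption about testing that is itself uncertain.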

The presentation of the doubling time as reasonably certain is puzzling. Both Vallance and Edmunds are esteemed scientists who were surely aware of the inherent uncertainties within the models. Returning to MacKenzie’s certainty trough, one would expect Edmunds in particular, as a knowledge producer, to be keenly aware of the uncertainties. Indeed, the LPH article on which Edmunds is a co-author demonstrates exactly such awareness. So, what is going on? My argument is that this is a case of science misreporting that stems from the conflated roles of key experts in the UK science advice system. In the LPH article, Edmunds fulfils his primary role as a knowledge producer through epidemiological models, providing a detailed account of the uncertainties in the LSHTM group’s research. However, his participation in SAGE and other advisory groups also casts him in the role of knowledge user: translating the outputs of his team’s scientific models into advice. This shows not only that it is possible for an expert to occupy different points on the certainty trough at different times (see Fig. 2), but that this is a likely result of the close relationship between knowledge production and knowledge use within science advice.

Fig. 2

A science adviser can occupy different points in the ‘certainty trough’ depending on their role at a given time. Adapted from MacKenzie (1990, p. 372; 1998, p. 325).

The humility shown by Edmunds and other advisers regarding the delayed implementation of lockdown is welcome, and should provide an example to the UK’s political leaders if public trust is to be maintained. However, the point of this article is not to single out individuals, but to open up a dialogue about the structure of the UK’s science advice system and how it came to downplay the multiple uncertainties that seemed obvious to many external observers (Cookson and Mancini, 2020). If data problems in the crucial early days of the pandemic were as acute as science advisers now claim, then the consensus that formed around a doubling rate of five days becomes even less defensible. Instead, poor data availability should have prompted uncertainties to be emphasised rather than downplayed. This in turn could have opened up a wider range of policy options, and at least put on the agenda the rapid lockdown policy which some SAGE participants subsequently wished for (Pielke, 2007).

The future of science advice

This article has analysed the representation of uncertainty regarding the virus doubling rate, a key area of controversy in UK science advice, finding contradictions between the outputs of epidemiological models and their public representation. In particular, I have shown how the doubling rate was represented as relatively certain despite the presence of three significant sources of uncertainty: the value of R0, the time elapsed between infectiousness in a primary case and a secondary case, and the influence of increased testing on doubling rate calculations. This article has been written within six months of the events described, so the analysis is necessarily provisional. However, there are three important findings: first, that the science advice system presented the virus doubling rate with unwarranted certainty; second, that the conflation of knowledge production and knowledge use in the roles of science advisers helps to explain the downplaying of uncertainties; and, third, that these issues highlight the need for diversity and clarity in the selection of experts and in the means by which consensus in advice is achieved. None of this is straightforward to navigate, with trite slogans such as “follow the science” telling us nothing about the tricky business of producing and using scientific knowledge to inform decision-making (Bacevic, 2020). Rather, the unprecedented stress test of Covid-19 provides an important opportunity to learn lessons and strengthen the science advice system in preparation for future emergencies (Obermeister, 2020).

Two US-based projects, CompCoRe and EScAPE, are starting to build an evidence base, comparing science advice and policy responses across multiple countries. However, more detailed national-level research is urgently required in the UK and elsewhere. This article indicates numerous potential lines of enquiry, including: the relationship between medical classification and political imaginaries (Liddiard, 2020; Perego et al., 2020), cultural influences on the representation of uncertainty (Douglas, 2016; Stirling and Scoones, 2020), diversity and consensus within science advice (Leach and Scoones, 2013; Smallman, 2020a), the staging of science advice at press conferences (Hilgartner, 2000; Hollin and Pearce, 2015), the role of blogs and social media in challenges to established science (Raman and Pearce, 2020; Turner, 2013), the emerging demand for alternative science advice such as “Independent SAGE” (Smallman, 2020b; Wise, 2020), and how scientific emergencies such as Covid-19 affect public trust in experts (Dommett and Pearce, 2019).

Such a research agenda is wide-ranging and challenging. Yet the analysis presented here reminds us that science advice is, quite literally, a matter of life and death. The ultimate responsibility for decisions remains with politicians, but experts too have a responsibility to reflect on whether the scientific advice provided on Covid-19 served the public good. The colossal toll of death, damage and despair left behind by the pandemic demands nothing less.