Partisan reasoning in a high stakes environment: Assessing partisan informational gaps on COVID-19


Using a survey conducted in July 2020, we establish a divide in the news sources partisans prefer for information about the COVID-19 pandemic and observe partisan disagreements in beliefs about the virus. These divides persist when respondents face financial costs for incorrectly answering questions. This supports a view in which the informational divisions revealed in surveys on COVID-19 are genuine differences of opinion, not artifacts of insincere cheerleading. The implication is that efforts to correct misinformation about the virus should focus on changing sincere beliefs while also accounting for information search preferences that impede exposure to correctives among those holding misinformed views.


Research Questions

  • Do partisan differences exist in the public’s information about COVID-19?
  • Are inaccurate beliefs about COVID-19 sincere or due to partisan cheerleading?
  • How does partisan information processing on COVID-19 compare to issues with less immediate personal impact?

Essay Summary

  • We fielded a survey experiment in July 2020 to examine the public’s interest in different sources of information about COVID-19 and the prevalence of misinformation about the virus.
  • We find that partisan media are appealing as health-related news sources and that there is a partisan divide in COVID-related beliefs. Both information source preferences and beliefs about COVID-19 change little in response to financial incentives for correctly answering knowledge questions, suggesting that these views are sincere rather than a result of partisan cheerleading in which respondents knowingly express inaccurate views to support their political party.
  • We find partisan reasoning during a public health crisis resembles the process for polarized issues with less immediate personal impact. The partisan divide we saw on questions about COVID-19 closely resembles those that occur on a set of unrelated questions on topics such as immigration and unemployment included later in the same survey. 

In the current era of polarization, divides between Democrats and Republicans have spilled from political issues and cultural values to factual beliefs (Berinsky, 2017; Flynn et al., 2017; Hochschild & Einstein, 2015). The erosion of widely accepted areas of “common knowledge” raises important normative questions about the importance of truth in the democratic process (Schwartzberg, 2015). However, despite the sometimes sizeable partisan factual disagreements that appear in surveys (Bullock et al., 2015; Prior et al., 2015; Schaffner & Luks, 2018), questions about the political relevance of these disagreements remain. Are informational gaps between members of different parties genuine or do they instead stem from rhetorical “cheerleading” in which people knowingly express inaccurate views to support their party? Scholarship casting doubt on the sincerity of these partisan divides finds that when respondents are incentivized to accurately answer factual questions, the levels of misinformation and partisan division in surveys often shrink (Bullock et al., 2015; Prior et al., 2015; Schaffner & Luks, 2018). 

Previous work on this topic examines issues that can be evaluated with some degree of personal remove or that concern events taking place several years before the survey was fielded. In this study, we instead address an ongoing event with important policy implications by considering partisan divides in information about the COVID-19 pandemic using a survey conducted in July 2020.

The dominant explanation for informational divides between opposing partisans is motivated reasoning (Lodge & Taber, 2013), in which partisans associate their party with a particular belief and then adopt that belief as their own. In the case of COVID-19, the pandemic occurred under President Trump’s watch. The Trump Administration consistently minimized the threat posed by the virus and discounted the need for economic shutdowns, social distancing, and mask-wearing mandates. Through the theoretical lens of motivated reasoning, we would expect Republicans to downplay the severity of the pandemic. We would similarly anticipate that Republicans would prefer to obtain COVID-relevant information from news outlets aligned with President Trump, while ignoring sources that presented health-related expertise that contradicted messaging from the Administration. For their part, Democrats would be inclined to view the virus as a serious threat and favor measures recommended by health officials. These expectations align with the partisan divisions in attitudes towards COVID-19 that have emerged in surveys conducted throughout the pandemic (Clinton et al., 2021; Druckman et al., 2021; Gadarian et al., 2021).

In contrast, a different perspective comes from theories of evolutionary psychology, which argue that humans process information more carefully in high-risk situations (Barkow et al., 1992), a phenomenon known as survival processing (e.g., Nairne et al., 2007). Beyond this mechanism, scholars in other fields hypothesize that substantial stakes, operationalized as the presence of material or psychological costs for incorrect choices, reduce the role of bias in decisions (Lerman & McCabe, 2017; Tetlock, 1985). The COVID-19 virus certainly qualifies as a high-risk threat. In these circumstances, the “careful processing” suggested by evolutionary psychology implies outcomes at odds with motivated reasoning: relative to topics that pose less salient health threats, the partisan divisions that appear on COVID-19 should be more likely to reflect insincere cheerleading than genuine disagreement.

To distinguish these opposing accounts of whether partisan divides over information are sincere, we conducted a survey experiment in July 2020. We offered some respondents a financial incentive of $0.25 or $1.00 per question for accurate answers to information questions (survey questions that ask respondents about factual matters with correct and incorrect answers), an inducement that in past work increased the accuracy of responses and reduced partisan divides on such questions (Bullock et al., 2015; Prior et al., 2015). We implemented this manipulation for COVID-related questions as well as questions that, for most respondents, would have less immediate personal impact, such as perceptions of unemployment and climate change. On each question, respondents were informed of the general subject area covered by the question and selected a brief report, drawn from real coverage provided by one of five news sources, to read. These sources varied by topic but included partisan media outlets, mainstream news sources, and public health experts. After reading each article, respondents answered a knowledge question. This approach allowed us to assess the public’s preferences for information providers and beliefs about COVID-19, probe the sincerity of these preferences, and compare the process of partisan reasoning on public health to the reasoning process that occurs on other issues.

Our three key findings demonstrate that partisan reasoning during a public health crisis resembles the partisan reasoning process that occurs on polarized issues with less immediate personal impact. First, our study aligns with others in revealing partisan divisions in the public’s beliefs and preferred sources of information about COVID-19 (Clinton et al., 2021; Druckman et al., 2021; Gadarian et al., 2021). In particular, Republicans are more likely to understate the significance of the pandemic and less likely to seek out health care experts as sources of information about the virus, an extension of partisan selective exposure into the public’s searches for health information. Second, we offer evidence for the sincerity of these divides in beliefs and preferred sources of health information by showing they persist in the face of monetary incentives that penalize people for knowingly providing inaccurate answers to political information questions. Third, in taking a similar approach to factual questions on other topics, we find the partisan divide on public health resembles the divisions present on a separate set of polarized political issues considered in the survey. Perhaps surprisingly, the public health domain now appears just as politicized as immigration, unemployment, or any other “mediated” issue largely experienced indirectly through news coverage or word of mouth, despite the large personal health stakes present on this topic.

Our findings are relevant for approaches focused on correcting misinformed beliefs regarding COVID-19. Our evidence suggests misinformed beliefs on these topics are not an artifact of how public opinion is measured in surveys, contrary to earlier findings (Bullock et al., 2015; Prior et al., 2015; Schaffner & Luks, 2018). Rather than recommending improvements in ways of eliciting the public’s views, our findings instead suggest practitioners should develop messaging strategies aimed at correcting misinformed, but genuinely held, beliefs (Jerit & Zhao, 2020). We also find large differences in the information sources that partisans prefer to use when obtaining public health information about COVID-19. Thus, an important consideration for attempting to reduce misinformation on public health matters is overcoming the selective patterns of information consumption that can impede the reach of corrective information among those who already hold misinformed beliefs. 

These results have dual implications for the communicators and channels of communication that may be most relevant for public health. First, in terms of who should communicate, we suggest that messengers at a greater distance from Washington may have greater credibility with the public, given the relatively sharp partisan divide in who is deemed trustworthy among the various national political and media actors our study considered. Regional and local health authorities, as well as people’s personal physicians, may be more effective as messengers. Second, the misinformed might be reached more effectively through inadvertent or incidental exposure to expert health sources, since they may avoid information from health experts when given the opportunity to do so. Broadcasts of major sporting events and other popular entertainment programs could be potential media platforms for delivering public service messages with corrective information.

Findings

Finding 1: Partisan divides exist in preferred news providers and information about COVID-19.

We measured large partisan divides in preferred news providers and information about COVID-19 in the control conditions of our survey, where respondents were not provided with financial incentives to answer the information questions correctly. In the survey, respondents selected a COVID-19 news article to read from a menu of five news options before answering each information question. Figure 1 displays the choices made by Democrats and Republicans, showing the share of selections each party devoted to the different news categories. There is significant partisan divergence in information preferences. Republicans were 34 percentage points more likely than Democrats to select right-leaning media sources. Republicans were also noticeably less likely than Democrats (by 23 percentage points) to consult mainstream media. There was a further divide in expert source use, where Democrats were 9 percentage points more likely to select the health expert option.

Figure 1. Large partisan divides exist in preferred COVID-19 news sources.

Table 1 extends the analysis to factual beliefs about COVID-19. Again, this analysis is confined to the control condition of our survey experiment, where respondents faced no financial penalty for knowingly providing incorrect answers. For each item, the table displays the share of respondents in each party with correct answers and the difference between the parties, along with 95% confidence intervals for these quantities. Democrats were much more likely than Republicans to acknowledge that the CDC mortality count did not exaggerate deaths from COVID-19, that the virus was not man-made in a Chinese laboratory, and that there was no “general scientific agreement” that hydroxychloroquine could serve as a virus treatment. (We note the answer to the virus origin question represented the scientific consensus at the time we fielded our study.) In contrast, Republicans were more likely than Democrats to answer correctly that, on a per capita basis, deaths due to COVID-19 in several Western European countries exceeded those in the United States at the time the survey was conducted. In the lone exception to this polarized pattern, a large majority of partisans on both sides (81%) recognized that a disproportionate number of deaths from COVID-19 occurred among those age 65 and over. This went against our expectation that Democrats would emphasize the health risks of COVID-19 to the entire population rather than focusing on the consequences the virus posed for seniors in particular.

Table 1. Average share of respondents in each party that correctly answered the COVID-19 information questions in the unincentivized conditions of the survey.
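
To make the quantities in Table 1 concrete, here is a minimal sketch of how the share of correct answers and the partisan gap, each with a 95% confidence interval, can be computed from binary response vectors. It uses a normal approximation and fabricated data; the variable names and interval method are our illustrative assumptions, not the authors’ replication code.

```python
import numpy as np

def share_with_ci(correct, z=1.96):
    """Share answering correctly, with a 95% CI (normal approximation)."""
    p = np.mean(correct)
    se = np.sqrt(p * (1 - p) / len(correct))
    return p, (p - z * se, p + z * se)

def party_gap(dem, rep, z=1.96):
    """Democrat-Republican difference in shares correct, with a 95% CI."""
    p_d, p_r = np.mean(dem), np.mean(rep)
    se = np.sqrt(p_d * (1 - p_d) / len(dem) + p_r * (1 - p_r) / len(rep))
    gap = p_d - p_r
    return gap, (gap - z * se, gap + z * se)

# Fabricated 0/1 answers to one item (1 = correct answer):
rng = np.random.default_rng(0)
dems = rng.binomial(1, 0.75, size=300)
reps = rng.binomial(1, 0.45, size=300)
print(share_with_ci(dems), share_with_ci(reps))
print(party_gap(dems, reps))
```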

Finding 2: Inaccurate beliefs about COVID-19 were not reduced by financial accuracy incentives.

These information preferences and factual beliefs do not appear to be inflated by partisan cheerleading. We assessed this by comparing those in the control group of our experiment to respondents who were randomly assigned financial incentives to correctly answer the political information questions ($0.25 per correct answer in the low incentive treatment group, $1.00 per correct answer in the high incentive treatment group). These amounts were selected to help us detect any gradation in response to changing incentive levels among the public, and they resemble the incentives used in earlier studies with a similar design, which enables a close comparison with other work on political misinformation. If these differences occurred due to partisan cheerleading (respondents providing an answer they know to be incorrect to support their party), then we would expect a decrease in the number of incorrect answers to the political knowledge questions in the conditions where incentives were available for correct answers (e.g., Bullock et al., 2015; Prior et al., 2015).

First, we removed respondents who identified as political independents in order to focus only on those with a partisan affiliation. Then, to evaluate information source selection, we stacked the five news selections about COVID-19 that each respondent made over the course of the survey and examined the effects of the incentives on their choices. If these apparent divisions were attributable to cheerleading, we would anticipate a reduced reliance on biased copartisan sources in the incentive conditions. 

Table 2. Effects of financial accuracy incentives on the probability survey respondents selected different news source options. The treatment effects are estimated using ordinary least squares regression.

Table 2 shows the extent to which the incentive treatments altered news selection. The table presents the results of a regression analysis in which the outcome is whether a respondent selected information from a particular news source category (e.g., expert or partisan). The regression coefficients corresponding to the two incentive conditions indicate that the availability of incentives did not substantially affect news choice, as the treatment effects do not reach statistical significance across these options and their magnitude is substantively small. In short, the information preferences expressed in the control condition of the survey appear uncontaminated by partisan cheerleading.
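
As a sketch of the stacked specification behind Tables 2 and 3, the snippet below estimates a linear probability model on respondent-item observations, with incentive-condition dummies and the control group as the omitted baseline. The column names and the choice to cluster standard errors by respondent are our assumptions for illustration, not the authors’ published code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: two stacked items per respondent.
df = pd.DataFrame({
    "resp_id":      [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "chose_expert": [1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "condition":    ["control"] * 4 + ["low"] * 4 + ["high"] * 4,
})

# OLS on a 0/1 outcome (linear probability model); coefficients on the
# condition dummies are the treatment effects relative to the control group.
fit = smf.ols(
    "chose_expert ~ C(condition, Treatment('control'))", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(fit.params)      # treatment effects relative to control
print(fit.conf_int())  # 95% confidence intervals
```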

Results are largely similar when we examine beliefs about COVID-19. In Table 3, we stacked the separate factual items covering COVID-19 together and regressed an indicator for whether a respondent provided a correct answer on the treatment condition to which they were assigned. If cheerleading were behind the response patterns present in the control group, we would expect to see large increases in the probability respondents correctly answer the information questions when incentives for correct answers become available.

Table 3. Effects of financial accuracy incentives on the probability survey respondents correctly answered COVID-19 information questions. The treatment effects are estimated using ordinary least squares regression.

Table 3 provides limited evidence that partisan cheerleading underlies the divisions over factual beliefs. The point estimates for both the low and high incentive conditions are positive, but these effects are small. Respondents in the low incentive condition were 1 percentage point more likely to provide a correct answer (95% CI [-.01, .03]), although this difference does not reach statistical significance. Respondents in the high incentive condition were 3 percentage points more likely to provide a correct answer (95% CI [.01, .05]), evidence that cheerleading explains some portion of the incorrect answers in the study’s control condition, although this difference is substantively small. (Appendix Table C6 reports similar findings when instead considering the partisan divide on answers to the information questions.)

These results suggest that partisan cheerleading is not behind the partisan divides in information search and beliefs measured in our surveys. However, there is still work to be done in isolating the specific mechanisms behind their appearance. The patterns we see here could be explained by motivated reasoning in which partisans actively distort the information they encounter to maintain party-congenial beliefs (Flynn et al., 2017). These patterns are also consistent with theoretical perspectives in which partisans strive for accuracy (Lupia & McCubbins, 1998) or do not exhibit any particular information processing motivations at all (Pennycook & Rand, 2019), but still ultimately arrive at inaccurate beliefs based on the information they encounter from trusted co-partisan elites and media sources.

Finding 3: Partisan divides over COVID-19 resemble polarized political issues.

We find the partisan information divides on COVID-19 resemble those on other polarized political issues. To make this comparison, a later portion of our survey repeated the same information search-factual response process using questions focused on the unemployment rate, immigrant crime, climate change, gun control, and voter fraud in the 2016 presidential election.

In terms of the baseline partisan divisions in the control conditions, the average divide on the COVID-19 items among those in the unincentivized condition is a 29 percentage point difference between Democrats and Republicans. For the other political items, this difference is only slightly larger, with an average 32 percentage point difference between Democrats and Republicans. In other words, the informational divides among partisans on these topics, when incentives are unavailable, are roughly similar in magnitude.
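
For concreteness, this comparison amounts to averaging per-item partisan gaps within each topic set. The sketch below does so from a small table of hypothetical shares correct; the data layout, column names, and numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical shares correct by item, topic set, and party.
df = pd.DataFrame({
    "item":    ["origins", "origins", "mortality", "mortality",
                "fraud", "fraud", "unemployment", "unemployment"],
    "topic":   ["covid"] * 4 + ["other"] * 4,
    "party":   ["D", "R"] * 4,
    "correct": [0.80, 0.50, 0.75, 0.50, 0.70, 0.40, 0.65, 0.35],
})

shares = df.pivot_table(index=["topic", "item"], columns="party",
                        values="correct")
shares["gap"] = (shares["D"] - shares["R"]).abs()
# The study reports average gaps of roughly 29 points (COVID-19 items)
# and 32 points (other political items); this toy data is arbitrary.
print(shares.groupby(level="topic")["gap"].mean())
```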

Turning to the experimental evidence, results also remain similar. First, as with the items related to COVID-19, incentive availability did not meaningfully change information search patterns (Appendix Table D4). Second, the availability of incentives also had only modest consequences for correct answers to the other political information items (Appendix Table D3). Using the same specification pooling together the different items employed in the previous section, accuracy increased by five percentage points relative to the control group (95% CI [.04, .06]) among those in the low incentive treatment group, while accuracy in the high incentive condition increased by four percentage points compared to the study’s control group that answered without incentives for correct answers (95% CI [.03, .05]).

Altogether, this comparison shows partisan informational divides on COVID-19 are similarly sizeable, and similarly resilient to the expressive responding concern, as those on other issues chosen to represent highly divisive topics in contemporary politics. These patterns are not consistent with what has been observed in earlier studies that emphasize the “cheerleading” perspective on misinformation in political surveys (Bullock et al., 2015; Prior et al., 2015). However, we do note the resilience of the divides in beliefs about COVID-19 is not entirely unprecedented and aligns with other studies that consider misinformation on divisive and ongoing political controversies (Berinsky, 2018; Peterson & Iyengar, 2021).

Methods

We fielded a survey experiment in July 2020 to examine the public’s interest in different sources of information about COVID-19 and the prevalence of misinformation about the virus. We recruited 1,700 participants from YouGov’s online panel, using the firm’s standard procedure of drawing a sample that is statistically matched to the voting-age population on key demographic characteristics. Because of our interest in partisan differences, our analysis focuses on the 1,447 respondents who identified with or “leaned” towards one of the political parties and excludes pure independents.

The survey included five information questions about COVID-19. These items reflected issues that were salient at the time we conducted the survey and were selected to incorporate some items where directional motivations would lead Democrats to answer incorrectly and others where these motivations were expected to lead Republicans to provide incorrect answers. On each item, the survey followed the same multi-step process. First, we informed participants of the general subject area covered by the question. They then selected a brief news report to read from one of five news sources. The sources varied across each item, but the choice set remained consistent. Respondents could always select from one left-leaning media source (i.e., Huffington Post), one right-leaning media source (either Fox News or Breitbart depending on the topic), one source with public health expertise (e.g., excerpts from a statement by Dr. Anthony Fauci), and a pair of mainstream sources (i.e., CNN and The New York Times). In terms of content, we selected stories that originally appeared in the source they were attributed to. This ensured the choice menu and headlines reflected the actual coverage these sources provided and that an article could be realistically viewed as coming from its source. If we had held information constant, the results would be less ecologically valid as respondents might disregard source labels if they did not perceive the articles in the study as genuine.

After reading the report, respondents answered a factual question concerning some aspect of the COVID-19 pandemic and expressed their confidence in this answer. As these stories represented real coverage from these sources, the pattern of information resembled what was present in the real world. While the coverage generally mentioned, at least briefly, the fact the respondent would later be asked about, coverage from the partisan sources sometimes omitted this information, as did real coverage of these events (Appendix Table A4).

The five factual questions covered the origins of the coronavirus, potential treatments, the demographic profile of those most at risk from the virus, mortality statistics, and a comparison of health outcomes in the United States and several Western European countries. The items were asked in the same order for all respondents. Appendix A contains question wording. A “don’t know” option was not available, though we measured response certainty afterwards and found respondents were generally confident in their answers to these questions (see Appendix Table C3). We reviewed news coverage and public opinion polling to develop these items and expected Democrats to answer two items incorrectly: the comparison of the United States and other countries (left-leaning media coverage we reviewed consistently emphasized the poor performance of the United States) and COVID-19 mortality (left-leaning media emphasized the health risks to people of all ages). For the other three items, we anticipated that Republicans would answer incorrectly more frequently than Democrats.

The experimental manipulation in the survey assessed the sincerity of factual beliefs. To detect any gradation in response to incentives, we assigned respondents to three conditions with varying incentives for correctly answering the information questions: 1) a “control” condition that provided no financial incentives, 2) a “low incentive” treatment condition ($0.25 per correct response), and 3) a “high incentive” treatment condition ($1.00 per correct response). The “high incentive” amount resembles the incentive levels used in previous work with this research design; for example, Study 1 in Prior et al. (2015) also offered respondents $1.00 per correct answer, while incentive levels in Bullock et al. (2015) varied from $0.10 to $1.00. Respondents were informed about the condition they were placed in at the beginning of the information battery, before they answered any questions, and remained in the same condition for all the information questions.
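
The following is an illustrative sketch of this three-arm assignment and bonus rule. Equal assignment probabilities and the function names are our assumptions; the article does not describe its implementation at this level.

```python
import numpy as np

rng = np.random.default_rng(2020)
CONDITIONS = {"control": 0.00, "low": 0.25, "high": 1.00}  # $ per correct answer

def assign_condition():
    """Assign a respondent to one arm, here with equal probability (assumed)."""
    return rng.choice(list(CONDITIONS))

def bonus(condition, n_correct):
    """Total incentive payment given a respondent's number of correct answers."""
    return CONDITIONS[condition] * n_correct

cond = assign_condition()
print(cond, bonus(cond, n_correct=4))  # e.g., 'high' -> $4.00 for 4 correct
```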

Since the incentives imposed a cost on knowingly providing incorrect, but party-congenial, answers, we expected accuracy to increase among incentivized respondents (Bullock et al., 2015; Prior et al., 2015). If measured knowledge did not increase in the presence of incentives, it would help rule out the alternative explanations that 1) people knowingly answer such questions incorrectly to display their party loyalty (i.e., cheerleading) or 2) people are not motivated to think deeply about survey items absent financial incentives (Bullock et al., 2015; Bullock & Lenz, 2019). We also assessed whether this treatment encouraged the use of different information sources at the experiment’s news selection stage. Here, information-seeking behavior may also reflect partisan cheerleading if, to support their party, respondents in the unincentivized conditions indicated an interest in co-partisan sources they did not ordinarily use (Prior, 2013).

Finally, to compare the partisan information divides on COVID-19 with other matters, after a washout period of unrelated material, a later portion of the survey repeated this same information search-factual response process using questions from an earlier study of political misinformation. These questions focused on the unemployment rate, immigrant crime, climate change, gun control, and voter fraud in the 2016 presidential election. 

Cite this Essay

Peterson, E. & Iyengar, S. (2022). Partisan reasoning in a high stakes environment: Assessing partisan informational gaps on COVID-19. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-96

Bibliography

Barkow, J., Cosmides, L., & Tooby, J. (Eds.). (1992). The adapted mind: Evolutionary psychology and the generation of culture. Oxford University Press.

Berinsky, A. (2018). Telling the truth about believing the lies? Evidence for the limited prevalence of expressive survey responding. Journal of Politics, 81(1), 211–224. https://doi.org/10.1086/694258

Berinsky, A. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241–262. https://doi.org/10.1017/S0007123415000186

Bullock, J., & Lenz, G. (2019). Partisan bias in surveys. Annual Review of Political Science, 22, 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904

Bullock, J., Gerber, A., Hill, S., & Huber, G. (2015). Partisan bias in factual beliefs about politics. Quarterly Journal of Political Science, 10(4), 519–578. https://dx.doi.org/10.1561/100.00014074

Clinton, J., Cohen, J., Lapinski, J., & Trussler, M. (2021). Partisan pandemic: How partisanship and public health concerns affect individuals’ social mobility during COVID-19. Science Advances, 7(2), eabd7204. https://doi.org/10.1126/sciadv.abd7204

Druckman, J., Klar, S., Krupnikov, Y., Levendusky, M., & Ryan, J. (2021). Affective polarization, local contexts and public opinion in America. Nature Human Behavior, 5, 28–38. https://doi.org/10.1038/s41562-020-01012-5

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150. https://doi.org/10.1111/pops.12394

Gadarian, S., Wallace Goodman, S., & Pepinsky, T. (2021). Partisanship, health behavior, and policy attitudes in the early stages of the COVID-19 pandemic. PLOS ONE, 16(4), e0249596. https://doi.org/10.1371/journal.pone.0249596

Hochschild, J., & Einstein, K. (2015). Do facts matter? Information and misinformation in American politics. Political Science Quarterly, 130(4), 585–624. https://doi.org/10.1002/polq.1239

Jerit, J., & Zhao, Y. (2020). Political misinformation. Annual Review of Political Science, 23, 77–94. https://doi.org/10.1146/annurev-polisci-050718-032814

Lerman, A., & McCabe, K. (2017). Personal experience and public opinion. Journal of Politics, 79(2), 624–641. https://doi.org/10.1086/689286

Lodge, M., & Taber, C. (2013). The rationalizing voter. Cambridge University Press.

Lupia, A., & McCubbins, M. (1998). The democratic dilemma: Can citizens learn what they need to know? Cambridge University Press.

Nairne, J., Thompson, S., & Pandeirada, J. (2007). Adaptive memory: Survival processing enhances retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(2), 263–273. https://doi.org/10.1037/0278-7393.33.2.263

Pennycook, G., & Rand, D. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011

Peterson, E., & Iyengar, S. (2021). Partisan gaps in political information and information-seeking behavior: Motivated reasoning or cheerleading? American Journal of Political Science, 65(1), 133–147. https://doi.org/10.1111/ajps.12535

Prior, M., Sood, G., & Khanna, K. (2015). You cannot be serious: The impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science, 10(4), 489–518. https://dx.doi.org/10.1561/100.00014127

Prior, M. (2013). Media and political polarization. Annual Review of Political Science, 16, 101–127. https://doi.org/10.1146/annurev-polisci-100711-135242

Schaffner, B., & Luks, S. (2018). Misinformation or expressive responding? What an inauguration crowd can tell us about the source of political misinformation in surveys. Public Opinion Quarterly, 82(1), 135–147. https://doi.org/10.1093/poq/nfx042

Schwartzberg, M. (2015). Epistemic democracy and its challenges. Annual Review of Political Science, 18, 187–203. https://doi.org/10.1146/annurev-polisci-110113-121908

Tetlock, P. (1985). Accountability: A social check on the fundamental attribution error. Social Psychology Quarterly, 48(3), 227–236. https://doi.org/10.2307/3033683

Funding

The authors thank the William and Flora Hewlett Foundation for research support.

Competing Interests

The authors declare no competing interests.

Ethics

This research protocol was approved by institutional review boards at Stanford University and Texas A&M University. Subjects provided informed consent before participating in the survey.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/X3OWT0