
Design and implementation of a digital site-less clinical study of serial rapid antigen testing to identify asymptomatic SARS-CoV-2 infection

Published online by Cambridge University Press: 10 May 2023

Apurv Soni*
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Carly Herbert
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Caitlin Pretz
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Pamela Stamegna
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Andreas Filippaios
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Qiming Shi
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA University of Massachusetts Center for Clinical and Translational Science, University of Massachusetts Chan Medical School, Worcester, MA, USA
Thejas Suvarna
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Emma Harman
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Summer Schrader
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Chris Nowak
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Eric Schramm
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Vik Kheterpal
Affiliation:
CareEvolution, LLC, Ann Arbor, MI, USA
Stephanie Behar
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Seanan Tarrant
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Julia Ferranto
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Nathaniel Hafer
Affiliation:
University of Massachusetts Center for Clinical and Translational Science, University of Massachusetts Chan Medical School, Worcester, MA, USA
Matthew Robinson
Affiliation:
Division of Infectious Disease, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA
Chad Achenbach
Affiliation:
Division of Infectious Disease, Department of Medicine, Havey Institute for Global Health, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
Robert L. Murphy
Affiliation:
Division of Infectious Disease, Department of Medicine, Havey Institute for Global Health, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
Yukari C. Manabe
Affiliation:
Division of Infectious Disease, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA
Laura Gibson
Affiliation:
Division of Infectious Disease, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Bruce Barton
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Laurel O’Connor
Affiliation:
Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
Nisha Fahey
Affiliation:
Department of Pediatrics, University of Massachusetts Chan Medical School, Worcester, MA, USA
Elizabeth Orvek
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Peter Lazar
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Didem Ayturk
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Steven Wong
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Adrian Zai
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Lisa Cashman
Affiliation:
Quest Diagnostics, Marlborough, MA, USA
Lokinendi V. Rao
Affiliation:
Quest Diagnostics, Marlborough, MA, USA
Katherine Luzuriaga
Affiliation:
University of Massachusetts Center for Clinical and Translational Science, University of Massachusetts Chan Medical School, Worcester, MA, USA
Stephenie Lemon
Affiliation:
Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, USA
Allison Blodgett
Affiliation:
University of Massachusetts Center for Clinical and Translational Science, University of Massachusetts Chan Medical School, Worcester, MA, USA
Elizabeth Trippe
Affiliation:
Division of Microbiology, OHT7 Office of Product Evaluation and Quality, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
Mary Barcus
Affiliation:
Division of Microbiology, OHT7 Office of Product Evaluation and Quality, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
Brittany Goldberg
Affiliation:
Division of Microbiology, OHT7 Office of Product Evaluation and Quality, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
Kristian Roth
Affiliation:
Division of Microbiology, OHT7 Office of Product Evaluation and Quality, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
Timothy Stenzel
Affiliation:
OHT7 Office of Product Evaluation and Quality, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
William Heetderks
Affiliation:
National Institute of Biomedical Imaging and Bioengineering, NIH, Via Contract with Kelly Services, Bethesda, MD, USA
John Broach
Affiliation:
Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
David McManus
Affiliation:
Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA Division of Cardiology, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
*
Corresponding author: A. Soni; Email: Apurv.soni@umassmed.edu

Abstract

Background:

Rapid antigen detection tests (Ag-RDT) for SARS-CoV-2 with emergency use authorization generally include a condition of authorization to evaluate the test’s performance in asymptomatic individuals when used serially. We aim to describe a novel study design that was used to generate regulatory-quality data to evaluate the serial use of Ag-RDT in detecting SARS-CoV-2 infection among asymptomatic individuals.

Methods:

This prospective cohort study used a site-less, digital approach to assess the longitudinal performance of Ag-RDT. Individuals over 2 years old with no reported COVID-19 symptoms in the 14 days prior to study enrollment were eligible to enroll. Participants throughout the mainland USA were enrolled through a digital platform between October 18, 2021 and February 15, 2022. Participants were asked to test using Ag-RDT and molecular comparators every 48 hours for 15 days. Enrollment demographics, geographic distribution, and SARS-CoV-2 infection rates are reported.

Key Results:

A total of 7361 participants enrolled in the study, and 492 participants tested positive for SARS-CoV-2, including 154 who were asymptomatic and tested negative at the start of the study. This exceeded the initial enrollment goal of 60 positive participants. We enrolled participants from 44 US states, and the geographic distribution of participants shifted in accordance with the changing COVID-19 prevalence nationwide.

Conclusions:

The digital site-less approach employed in the “Test Us At Home” study enabled rapid, efficient, and rigorous evaluation of rapid diagnostics for COVID-19 and can be adapted across research disciplines to optimize study enrollment and accessibility.

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

Over-the-counter rapid antigen SARS-CoV-2 tests (Ag-RDT) present unique opportunities for widespread COVID-19 testing. However, the authorization of these tests under the Food and Drug Administration’s (FDA) Emergency Use Authorization (EUA) generally imposes a requirement on companies to evaluate their test’s performance for detecting asymptomatic infections with serial screening [1]. To satisfy this requirement, it was necessary to conduct a longitudinal study to observe the onset of SARS-CoV-2 infection and the performance of Ag-RDT across the course of infection [2]. This required an innovative research design to generate regulatory-quality, representative data, capture new-onset infections amidst a changing pandemic, and understand asymptomatic infections, which represent a small proportion of all SARS-CoV-2 cases.

This paper describes the design, implementation, and completion of a novel, digital, site-less study called “Test Us At Home,” funded by the National Institutes of Health (NIH) Rapid Acceleration of Diagnostics (RADx) program. We outline the rationale for key decisions and describe the operational effort needed to successfully collect data under this paradigm, which enabled rapid, rigorous evaluation of over-the-counter diagnostics across diverse participants of all ages in community-based settings.

Methods

Study Design

This prospective cohort study recruited participants over 2 years old who were asymptomatic and self-reported absence of SARS-CoV-2 infection in the 3 months prior to enrollment. Participants were asked to test for SARS-CoV-2 via Ag-RDT and molecular comparator every 48 hours for 15 days. This study was approved by the Institutional Review Board of the WIRB-Copernicus Group.

Study Objectives

The primary objective of this collaboration between the FDA, NIH, and the RADx Tech Clinical Studies Core at UMass Chan Medical School was to generate right-of-reference data that could be used to support FDA EUA submissions. To accomplish this, we aimed to evaluate the performance of serial use of three different Ag-RDT in detecting new-onset SARS-CoV-2 infection, as determined by a laboratory-based molecular test.

Study Population and Recruitment: To approximate the performance of Ag-RDT in the general public, the FDA advised against preferentially enrolling participants with a known SARS-CoV-2 exposure (oral communication, August 3, 2021). The goal of the study was to recruit at least 60 participants who started the study without any symptoms and with a negative SARS-CoV-2 molecular test and subsequently tested positive during the study period. Due to fluctuating rates of COVID-19 positivity and the importance of recruiting asymptomatic participants, we employed a decentralized study design whereby eligible participants from anywhere in the mainland USA, except Arizona, could participate. Residents of Arizona were excluded because Quest Diagnostics, the laboratory performing molecular tests, did not provide direct-to-consumer tests in that state. Additional inclusion and exclusion criteria are listed in Table 1. Recruitment relied on the engagement of community stakeholders to advertise the study and reach communities nationwide. Periodic analyses were conducted to inform enrollment goals based on the observed precision of the positive percent agreement estimate.
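As an illustration of how such interim precision checks might be run, the sketch below computes positive percent agreement (PPA) and a two-sided 95% Wilson score confidence interval from hypothetical counts; the interval method, function names, and example numbers are assumptions, not the study’s actual interim-analysis code.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# Hypothetical interim counts: 48 of 60 molecular-positive participants
# also detected by serial Ag-RDT (not actual study results).
tp, molecular_positives = 48, 60
ppa = tp / molecular_positives
lo, hi = wilson_ci(tp, molecular_positives)
print(f"PPA = {ppa:.1%}, 95% CI = ({lo:.1%}, {hi:.1%}), width = {hi - lo:.1%}")
```

Tracking the width of this interval as positive cases accrue gives a simple way to judge whether the current enrollment target remains sufficient.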

Table 1. Inclusion and exclusion criteria

Enrollment: Participants who expressed interest in the study received an email or flyer with instructions for downloading the study app and performing eligibility screening. The study app was a custom interface designed through the MyDataHelps platform. All study surveys and instructions were provided in both English and Spanish. To enroll, participants answered a series of questions to determine their eligibility and prioritization status. All eligible participants underwent automated prioritization based on a priori criteria that were adjusted on a daily or weekly basis (Supplemental Figure 1, Supplemental Figure 2). We adjusted prioritization criteria to preferentially enroll populations based on their region’s SARS-CoV-2 transmission rates, community-level vaccination, and sociodemographic characteristics. This step occurred prior to consenting, and data were discarded if the participant did not enroll in the study before the data collection period ended; these data were never accessible to the study team unless the participant consented to take part in the study. Eligible participants who did not match the preferred criteria for enrollment received a note stating that they could not enroll in the study at that time but would receive communication from the study team when/if they became eligible (Supplemental Figure 3). These participants were placed on a “waiting list,” which was reviewed against the changing priority criteria throughout the study.
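A minimal sketch of how automated prioritization and waitlisting of eligible participants might be implemented is shown below; the specific criteria, thresholds, and field names are hypothetical, since the study’s actual a priori rules (Supplemental Figures 1 and 2) are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    state: str
    vaccinated: bool
    age: int

# Hypothetical priority settings; the study adjusted its criteria daily or weekly
# based on regional transmission, community vaccination, and sociodemographics.
PRIORITY_STATES = {"TX", "FL", "OH"}
PRIORITIZE_UNVACCINATED = True
PRIORITIZE_AGE_65_PLUS = True

waitlist: list[Applicant] = []

def prioritize(applicant: Applicant) -> bool:
    """Return True to enroll now, False to defer."""
    if PRIORITIZE_UNVACCINATED and not applicant.vaccinated:
        return True
    if PRIORITIZE_AGE_65_PLUS and applicant.age >= 65:
        return True
    return applicant.state in PRIORITY_STATES

def screen(applicant: Applicant) -> str:
    """Enroll immediately or place on the waiting list for later review."""
    if prioritize(applicant):
        return "enroll"
    waitlist.append(applicant)  # revisited whenever priority criteria change
    return "waitlist"
```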

If eligible for enrollment, participants 18 years of age or older were asked to read the e-consent form and provide their signature through the study app (Supplemental Figure 4). Participants younger than 18 years were asked to review the e-assent and e-consent form with the support of their parent(s) or guardian(s) and provide assent for participation, while the parent/guardian provided written consent for participation in the study. Participants were eligible to receive up to $250 for timely completion of all sample collections (Supplemental Figure 5).

Test Distribution: On enrollment, participants were asked to provide their shipping information through the study app. Participants were assigned to one of three Ag-RDT (Quidel QuickVue At-Home OTC COVID-19 Test, Abbott BinaxNOW COVID-19 Antigen Self Test, or BD Veritor™ At-Home COVID-19 Test) using an automated algorithm informed by investigator discretion based on enrollment numbers and participants’ geographic location (Supplemental Figure 6). The three Ag-RDT used in the study are authorized for emergency use by the FDA and were selected to provide multiple assessments and to facilitate generalizing results to nonparticipating companies. These tests are hereafter referred to as test brands A, B, and C (labels randomly assigned), as the purpose of the study was not to directly compare the performance of different tests. Participants residing together and enrolling with the same address, or enrolled as a group (such as a cohort of classmates), were assigned the same Ag-RDT to avoid mixing of test types between participants. Groups were given unique codes to enter on enrollment so that members were easily identifiable. As a result of this strategy, the assignment of tests to participants was nonrandom, and we intentionally did not pursue a strategy to compare performance between different tests.
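The sketch below shows one way to keep test-brand assignment consistent within a household or enrollment group while leaving the brand choice to investigator-adjustable inputs; the group key, the random fallback, and the function name are illustrative assumptions rather than the study’s actual algorithm.

```python
import random

TEST_BRANDS = ["A", "B", "C"]            # blinded brand labels
group_assignments: dict[str, str] = {}   # group code or shared address -> brand

def assign_brand(group_key: str, open_brands: list[str]) -> str:
    """Reuse a group's brand so household or cohort members match;
    `open_brands` stands in for investigator discretion (enrollment
    counts, geography), and the random choice is a placeholder."""
    if group_key in group_assignments:
        return group_assignments[group_key]
    brand = random.choice(open_brands)
    group_assignments[group_key] = brand
    return brand

# Two participants enrolling with the same group code receive the same brand.
print(assign_brand("group-1234", ["A", "B"]))
print(assign_brand("group-1234", ["A", "B", "C"]))
```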

Because molecular assays require a prescription by a licensed physician, the study team contracted with PWNHealth, a national clinician network, and developed an automated order filing system with Quest Diagnostics to place the requisite orders (Supplemental Figure 7). A total of 10 Ag-RDT and 7 home-collection kits for molecular comparators were provided to each participant. Participants received Ag-RDT and PCR home-collection kits by mail through separate shipments. The study team provided additional tests to participants if needed (e.g. tests were damaged upon delivery or lost in the mail).

Testing Schedule: Participants were asked to perform the Ag-RDT and collect the specimen for molecular comparator testing on the same day, every 48 hours, during the 15-day testing period (Supplemental Figure 8). Immediately prior to testing, participants were asked to record any symptoms they were experiencing that day. The study app provided a push-notification testing reminder at the 44-hour mark and additional notifications every 2 hours until the 52-hour mark if the participant had not completed their tests. Participants were instructed via the app to wait at least 15 minutes between the Ag-RDT and the sample collection for molecular testing. Participants were asked to record the results of the Ag-RDT when available by selecting one of the following options: “Negative,” “Positive,” “Invalid,” or “Don’t Know,” and to upload a photo of the test strip through the app (Supplemental Figure 9). Participants assigned to Test C were also asked to download the Test C company-specific app, which contained a test reader. The company test reader asked participants to upload a photo of the test strip and provided Ag-RDT results in real time. Test C users were asked to report the result given by the test reader, rather than their own interpretation, to the study app. If a participant tested positive by Ag-RDT, they were told to contact their healthcare provider for any medical questions. In the event of an invalid Ag-RDT result, the participant was asked to perform a second Ag-RDT. If the second Ag-RDT result was also invalid, the participant was not asked to perform additional Ag-RDT that day and was instructed to continue to the PCR sample collection (at least 15 minutes after the Ag-RDT sample collection). Participants were responsible for shipping the PCR collection kit containing their sample in the pre-paid FedEx envelope, following instructions provided by Quest Diagnostics, which were authorized by the FDA for emergency use.
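For illustration, the snippet below generates the reminder schedule described above (a first push notification at the 44-hour mark, then every 2 hours through the 52-hour mark); it is a simplified sketch and not the study app’s actual notification logic.

```python
from datetime import datetime, timedelta

def reminder_times(last_test: datetime) -> list[datetime]:
    """Reminders at 44, 46, 48, 50, and 52 hours after the last completed
    testing day, sent only while testing remains incomplete."""
    return [last_test + timedelta(hours=h) for h in range(44, 53, 2)]

last_test = datetime(2021, 11, 1, 9, 0)  # hypothetical completion time
for t in reminder_times(last_test):
    print(t.strftime("%Y-%m-%d %H:%M"))
```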

Molecular Testing Procedures: Due to concerns about potential false-positive molecular tests, the FDA recommended the use of two types of molecular assays and, potentially, a third assay if the first two assays were discordant (oral communication, August 3, 2021) [3–5]. Once participant samples were received at Quest Diagnostics, each sample was divided into aliquots to perform the Roche Cobas 6800 SARS-CoV-2 PCR and the Quest RC COVID-19 PCR DTC for all participants (Supplemental Figure 10). An additional aliquot was preserved by the laboratory to allow for testing on the Hologic Aptima SARS-CoV-2 transcription-mediated amplification (TMA) assay if the Roche Cobas and Quest LDT results were discordant or inconclusive. All three molecular assays used were highly sensitive, authorized under EUA, and run per instructions for use. Remnant sample fluid from home-collection kits was frozen at −80 °C and shipped to UMass Chan Medical School for future testing. Results of the molecular PCR assays were available to participants through the Quest results portal and through communication from the study team. All participants who tested positive by molecular test received a phone call from a study team member. In the event of discordant results, the participant was contacted by study staff to explain the results.
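The decision rule for invoking the tiebreaker assay can be expressed compactly. The sketch below mirrors the protocol described above (the Hologic Aptima TMA aliquot is tested only when the two primary PCR results are discordant or inconclusive); the result labels and the final-adjudication step are simplifying assumptions.

```python
from typing import Optional

INCONCLUSIVE = {"inconclusive", "invalid"}

def needs_tiebreaker(roche: str, quest: str) -> bool:
    """Tiebreaker TMA is run when the primary PCR results disagree
    or either result is inconclusive."""
    if roche in INCONCLUSIVE or quest in INCONCLUSIVE:
        return True
    return roche != quest

def final_molecular_result(roche: str, quest: str, tma: Optional[str] = None) -> str:
    """Hypothetical adjudication: concordant primary results stand;
    otherwise the TMA result, once available, is used."""
    if not needs_tiebreaker(roche, quest):
        return roche
    return tma if tma is not None else "pending tiebreaker"

print(final_molecular_result("positive", "positive"))              # positive
print(final_molecular_result("positive", "negative", "positive"))  # positive
```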

Communication with the Study Team: Participants were provided a study hotline staffed by research coordinators, offering support during extended hours on weekdays from 8 AM to 9 PM EST. Participants were also able to contact the study team via email at any time. All calls between coordinators and participants were documented in a call log containing the length of the call, the reason for the call, a narrative summary of the call, and whether the issue was resolved.

Data Management: There were three primary sources of data (Supplemental Figure 11). All participant-reported data were collected through the study app and downloaded incrementally through secure file transfer protocol to UMass Chan servers. The molecular testing data were shared by Quest Diagnostics through an established data feed between Quest and UMass Chan servers. However, the PCR cycle threshold (CT) values and the results of the tiebreaker assays were not part of the routinely abstracted data and required manual abstraction. Finally, tracking reports were used to document all communications between participants and the study staff. All three sources of data were combined using a participant’s unique identifier assigned by the study app. Paired Ag-RDT and PCR data were merged by matching on participant identifier and date of testing. When test dates were not aligned or there were unmatched results, a member of the study team reviewed all testing data from the participant to adjudicate paired findings. The most common reason for mismatched dates was missing or mistranscribed dates of collection, as the date of collection for the PCR sample was handwritten by the participant on the Quest requisition form. Study staff reconciled possible mismatched test results by reviewing requisition numbers entered in the app, shipment tracking data, and contact reports. If mismatched data could not be adequately aligned, those data were considered ineligible due to failed quality checks. Additional adjudication undertaken by the study team included a manual review of all Ag-RDT images for participants who self-reported a positive, “Don’t Know,” or invalid Ag-RDT result or who tested positive on a molecular test at any point during the study. All data were de-identified and shared with the FDA throughout the study. At the end of the study, the final, cleaned dataset was shared with the FDA, and company-specific data were shared with the respective companies. All data will be made available on the NIH Data Hub once the main findings from this study are published.
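As an illustration of the pairing step, the sketch below merges hypothetical Ag-RDT and PCR records on participant identifier and test date and flags unmatched rows for manual adjudication; the column names and the use of pandas are assumptions, not the study’s actual data pipeline.

```python
import pandas as pd

# Hypothetical records; real schemas and values are not shown here.
ag_rdt = pd.DataFrame({
    "participant_id": ["P001", "P001", "P002"],
    "test_date": ["2021-11-01", "2021-11-03", "2021-11-01"],
    "ag_result": ["Negative", "Positive", "Negative"],
})
pcr = pd.DataFrame({
    "participant_id": ["P001", "P001", "P002"],
    "collection_date": ["2021-11-01", "2021-11-03", "2021-11-02"],  # handwritten, error-prone
    "pcr_result": ["Negative", "Positive", "Negative"],
})

# Pair results on participant ID and date; an outer merge with an indicator
# column keeps unmatched rows so study staff can review them manually.
paired = ag_rdt.merge(
    pcr,
    left_on=["participant_id", "test_date"],
    right_on=["participant_id", "collection_date"],
    how="outer",
    indicator=True,
)
needs_review = paired[paired["_merge"] != "both"]
print(paired[["participant_id", "ag_result", "pcr_result", "_merge"]])
print(f"{len(needs_review)} record(s) flagged for manual adjudication")
```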

Results

Enrollment: In total, 7361 participants enrolled in the study between October 18, 2021 and February 15, 2022 (Fig. 1a). Due to complications with the company app, Test C enrollment began on November 4, 2021, 3 weeks after Test A and B enrollment started (Fig. 1b). Following initial surges in recruitment, the enrollment waitlist was implemented on November 24, 2021 to allow for refinement of enrollment from geographic hotspots. In total, 492 participants tested positive on either molecular or Ag-RDT assays during the study, and 154 were eligible for analyses (Fig. 2).

Figure 1. Weekly Test Us At Home study enrollment.

Figure 2. CONSORT diagram of Test Us At Home.

Geographic and Sociodemographic Participant Characteristics: The geographic and sociodemographic characteristics of study participants changed over the course of the study, reflecting shifts in the enrollment approach. In total, participants from 44 of the 48 mainland US states enrolled in the study (Fig. 3). In October, the majority of initial study recruitment occurred on a college campus in Wisconsin, resulting in a young, predominantly White, student population (Fig. 3; Table 2). Additionally, only 3.1% of participants enrolled in October were unvaccinated for SARS-CoV-2. In November, study recruitment was expanded throughout the USA, with enrollment from more than 20 different states. Participants from rural populations comprised 14% of the participants recruited in November. In December and January, study enrollment shifted to prioritize unvaccinated participants, preferentially pulling these participants from the study waitlist, resulting in 24.6% and 21.4% of monthly enrollment, respectively, being unvaccinated (Table 2). In January, recruitment efforts also focused on increasing racial and ethnic diversity in the study population, and fewer than half of the participants enrolled in January identified as White (42.6%). Additionally, 10.5% of participants enrolled in January were of Hispanic origin.

Figure 3. Test Us At Home enrollment by state and month, October 2021–January 2022.

Table 2. Test Us At Home participant demographics by month of enrollment

Operational Support and Logistics: Throughout the study, there were 11,646 contacts between coordinators and 4,389 distinct participants (Table 3), accounting for 33,356 minutes of coordination time. Coordinators and participants communicated by phone call (39.7%), email (30.6%), voicemail (29.7%), and push notification within the study app (4.2%). The most common reason for contact was to share COVID-19 test results from molecular tests, which accounted for 24.3% of all calls. Other common reasons for contact included testing reminders (19.1%), shipping and receipt of test kits (16.7%), compensation (16.3%), and clarifying the testing schedule (10.4%). The company-generated app for Test C was initially compatible only with certain smartphones; therefore, we delayed enrollment for Test C by 2 weeks and troubleshot the app with the company to ensure participants were able to use it appropriately (Fig. 1a). Further, the decentralized design necessitated an increased reliance on shipping vendors, allowing us to reach participants across the mainland US states; however, this also required a variety of methods to track the 58,888 kits shipped. This study spanned the holiday season, which resulted in additional shipping delays; 1.2% (n = 728) of packages were affected by logistical or participant errors that led to an unviable or missing RT-PCR test result.

Table 3. Type of contact and reasons for participant–coordinator contact

a Reasons for contact were not mutually exclusive.

Discussion

Using an innovative digital, site-less study approach, we developed recruitment and enrollment strategies to capture new-onset COVID-19 infections most effectively throughout the country, resulting in the detection of nearly three times as many new-onset COVID-19 cases as originally specified. The digital site-less approach allowed us to seamlessly and dynamically change enrollment patterns and sample different populations based on the evolving nature of the pandemic, to assess rapid diagnostics with agility, efficiency, and rigor among asymptomatic individuals.

The digital site-less approach offered great agility to alter the recruitment strategy to fit our needs throughout the evolution of the study. Initially, to iron out the study logistics, we opened study recruitment in October 2021 to members of a Midwestern university marching band who were traveling together and among whom we anticipated a high degree of unmasked, close-contact interaction. This group enrollment is reflected in the demographics of the October participants, who were younger and included a higher proportion of White participants than those recruited in November through January. Later, we were able to geographically alter the sampling to enroll participants from throughout the USA and to use a waitlist to selectively sample certain populations by zip code, determined by community prevalence of COVID-19 and demographics, to ensure we were curating a representative sample. For example, after seeing low enrollment of participants over age 65 years in October through December, in January we selectively pulled those over 65 years of age off the waitlist, increasing the proportion of participants in this group from 1.5% in October to 13.4% in January. Additionally, the recruitment strategy resulted in 16.6% of all enrolled participants being unvaccinated for SARS-CoV-2, matching nationwide estimates for those over 5 years of age [6].

The waitlist approach also allowed us to try to ensure that enrollment was not solely clustered among those with the greatest access to recruitment channels, but rather gave time for information to disseminate to and within various communities, including populations underrepresented in research and device studies [7]. Approximately 12% of our sample came from rural populations, compared with an estimated 14% of the US population living in rural areas nationally [8]. Previous research has found that rural groups are particularly hesitant about the time commitment involved in research studies; therefore, the site-less design may have helped to alleviate this concern [9]. Also, 1.8% of our sample identified as Native American or Alaska Native, alone or in addition to another race, which is reflective of national census rates [10]. Lastly, while many population-level diagnostic studies include only individuals over 18 years old, over 10% of our sample was under age 18 years, increasing our understanding of these devices among both children and adults.

While we intentionally scheduled our study to overlap with the winter months due to the predicted seasonal increase of SARS-CoV-2, the nationwide launch of our study also coincided with the beginning of the Omicron COVID-19 surge throughout the USA, as well as a nationwide shortage of rapid antigen tests. Originally, we planned to prioritize enrollment in select communities that met a threshold for high prevalence of COVID-19; however, during the Omicron surge, nearly all communities throughout the USA quickly met that threshold. Furthermore, health departments and community health workers were very receptive to sharing our study information with their communities, as our ability to supply rapid antigen tests filled a clear, immediate need. The overwhelming interest in the study made the waitlist approach particularly effective.

The remote design also resulted in gains in study efficiency, in terms of both coordinator time and cost. Instead of requiring coordinators to schedule and supervise serial sample collections for each participant, the virtual, site-less design allowed participants to test in their homes and on their own time, increasing accessibility for individuals with varied employment and school schedules, as well as living situations. According to the study coordination logs, direct participant–coordinator phone calls averaged just over 1 hour per day (82.3 minutes per day over the 161 days of the study) and consisted primarily of testing reminders and returning molecular test results to participants. As clinical research coordinators on average support upwards of seven studies concurrently, and approximately two-thirds of coordinators report being expected to work more time on studies than allotted, it is important to optimize study coordination time while maintaining quality and consistency [11]. Here, we showed the ability of site-less digital studies to run with minimal participant–coordinator interaction, while maintaining high enrollment and adherence to the study protocol.

Site-less cohort studies have become increasingly common, as advances in technology have made virtual recruitment and engagement more feasible [9]. However, digital products and solutions in clinical research remain underutilized [10]. Site-less studies can not only facilitate recruitment and participant engagement, but they can also enhance scientific rigor and design. Through coordination with shipping services, home-test distributors, and clinical labs, we organized shipments of study supplies to participants on enrollment, which allowed participants to start testing at home within 24–48 hours of enrollment. Further, we coordinated the pick-up and shipment of biological samples to the central laboratory, which has seldom been done within the framework of a digital study. The success of these workflows opens the door for the adaptation of site-less studies to answer many complex research questions by integrating survey responses, biological samples, and home health monitoring.

There are potential limitations to the study. We did not require participants to be under video observation while performing the Ag-RDT or collecting samples for molecular tests. However, our study facilitated use of these tests “as intended” and “as authorized” by the FDA and therefore may represent a better approximation of the real-world performance of the tests than traditional clinical studies. Additionally, the smartphone app required participants to take images of the Ag-RDT; therefore, we could verify the Ag-RDT results for all participants. Due to the scale and nature of this study, we used commercial vendors for assembly and distribution of the testing kits, which precluded our ability to provide study-specific written instructions enclosed within the kit; we instead relied on providing all study-specific instructions through the smartphone app. This resulted in confusion for some participants who did not start testing immediately after receiving the test kits or did not perform both Ag-RDT and PCR tests concurrently. However, our study team was able to detect these instances and intervene accordingly. Because of logistical constraints, we could not facilitate cold-chain transportation of the collected specimens, which precluded viral culture studies. Sample collection for molecular tests required participants to handwrite the date of sample collection, and laboratory data had to be linked with Ag-RDT data based on participant ID and testing date, which resulted in mismatches and required substantial manual effort to reconcile differences. Finally, the requirement of smartphone ownership to participate in the study disadvantaged people without smartphones, as well as those living in areas with poor cellular or wireless access. However, smartphone use is ubiquitous and has been accelerated by the SARS-CoV-2 pandemic [12]. Additionally, we submit that the barrier to participation created by requiring a smartphone is smaller than that created by requiring participants to travel to a clinical study site multiple times during the study.

These limitations notwithstanding, this study represents the most robust attempt at understanding the performance of Ag-RDT for SARS-CoV-2 detection among asymptomatic participants. We summarize the advantages of our digital site-less approach in Table 4 and outline recommendations for future digital studies. The collaboration between the NIH, FDA, and the RADx Clinical Studies Core allowed for the development of an innovative protocol and a study that will provide tremendous value to federal agencies, academic researchers, and companies for elucidating key questions related to rapid antigen test performance. Future collaborative efforts to develop best-practice guidelines and infrastructure for performing digital clinical studies are needed to advance the scientific community’s ability to perform rigorous clinical research and answer vexing questions.

Table 4. Summary of features of this study and opportunity for improvement

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/cts.2023.540.

Acknowledgments

This study was funded by the NIH RADx Tech program under 3U54HL143541-02S2 and NIH CTSA grant UL1TR001453. The views expressed in this manuscript are those of the authors and do not necessarily represent the views of the National Institute of Biomedical Imaging and Bioengineering; the National Heart, Lung, and Blood Institute; the National Institutes of Health; or the U.S. Department of Health and Human Services. Salary support was provided by National Institutes of Health grants U54HL143541, R01HL141434, R01HL137794, R61HL158541, R01HL137734, and U01HL146382 (AS, DDM); U54EB007958-13 (YCM, MLR); and AI272201400007C and UM1AI068613 (YCM). We are grateful to our study participants; to our collaborators from the National Institutes of Health (NIBIB and NHLBI), who provided scientific input into the design of this study and the interpretation of our results but could not formally join as coauthors due to institutional policies; and to the Food and Drug Administration (Office of In Vitro Diagnostics and Radiological Health) for their involvement in the primary TUAH study. We received meaningful contributions from Drs. Bruce Tromberg, Jill Heemskerk, Felicia Qashu, Dennis Buxton, Erin Iturriaga, Jue Chen, Andrew Weitz, and Krishna Juluru. We are thankful to the developers of the rapid antigen tests, who donated their tests for research use in this study. We are greatly appreciative of the contributions to this study by the numerous staff at UMass Chan, including critical support from Karen Gilliam, Mary Janet McCarthy, Amber Showers, Cynthia Kinahan, Kimberly Cantin, and Danielle Howard. We would also like to acknowledge the support provided by clinical coordinators from Threewire, Inc. We are thankful to the county health departments across the country that helped with recruitment for this site-less study by spreading the word in their networks.

Data Availability

The datasets generated during the current study are available through the RADx Datahub (https://www.radxdatahub.info/).

Disclosures

VK is a principal of, and TS, SS, CN, ES, and EH are employees of, the health care technology company CareEvolution, which was contracted to configure the smartphone study app, provide operational and logistical support, and collaborate on the overall research approach. LS and LR are employees of Quest Diagnostics LLC, which was contracted to provide direct-to-consumer kits, logistical support for nationwide RT-PCR testing, and operational support for producing molecular testing results. DDM reports consulting and research grants from Bristol-Myers Squibb and Pfizer, consulting and research support from Fitbit, consulting and research support from Flexcon, a research grant from Boehringer Ingelheim, consulting for Avania, non-financial research support from Apple Computer, and consulting/other support from the Heart Rhythm Society. YCM has received tests from Quanterix, Becton-Dickinson, Ceres, and Hologic for research-related purposes, consults for Abbott on subjects unrelated to SARS-CoV-2, and receives funding support to Johns Hopkins University from miDiagnostics. LG is on a scientific advisory board for Moderna on projects unrelated to SARS-CoV-2. AS receives non-financial support from CareEvolution for collaborative research activities. Additional authors declare no financial or nonfinancial competing interests.

Footnotes

AS and CH contributed equally to this work.

References

Policy for Coronavirus Disease-2019 Tests During the Public Health Emergency (Revised). US Food and Drug Administration. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/policy-coronavirus-disease-2019-tests-during-public-health-emergency-revised. Accessed July 4, 2022.
Smith RL, Gibson LL, Martinez PP, et al. Longitudinal assessment of diagnostic test performance over the course of acute SARS-CoV-2 infection. J Infect Dis. 2021;224(6):976–982. doi: 10.1093/infdis/jiab337.
Kanji JN, Zelyas N, MacDonald C, et al. False negative rate of COVID-19 PCR testing: a discordant testing analysis. Virol J. 2021;18(1):13. doi: 10.1186/s12985-021-01489-0.
Zhen W, Manji R, Smith E, Berry GJ. Comparison of four molecular in vitro diagnostic assays for the detection of SARS-CoV-2 in nasopharyngeal specimens. J Clin Microbiol. 2020;58(8):e00743-20. doi: 10.1128/JCM.00743-20.
Lowe CF, Matic N, Ritchie G, et al. Detection of low levels of SARS-CoV-2 RNA from nasopharyngeal swabs using three commercial molecular assays. J Clin Virol. 2020;128:104387. doi: 10.1016/j.jcv.2020.104387.
Clark LT, Watkins L, Piña IL, et al. Increasing diversity in clinical trials: overcoming critical barriers. Curr Probl Cardiol. 2019;44(5):148–172. doi: 10.1016/j.cpcardiol.2018.11.002.
Demographic and economic trends in urban, suburban and rural communities. Pew Research Center. https://www.pewresearch.org/social-trends/2018/05/22/demographic-and-economic-trends-in-urban-suburban-and-rural-communities/. Accessed April 1, 2023.
Friedman DB, Foster C, Bergeron CD, Tanner A, Kim SH. A qualitative study of recruitment barriers, motivators, and community-based strategies for increasing clinical trials participation among rural and urban populations. Am J Health Promot. 2015;29(5):332–338. doi: 10.4278/AJHP.130514-QUAL-247.
American Indian/Alaska Native. The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=3&lvlid=62. Accessed April 1, 2023.
Speicher LA, Fromell G, Avery S, et al. The critical need for academic health centers to assess the training, support, and career development requirements of clinical research coordinators: recommendations from the Clinical and Translational Science Award Research Coordinator Taskforce. Clin Transl Sci. 2012;5(6):470–475. doi: 10.1111/j.1752-8062.2012.00423.x.
Demographics of Mobile Device Ownership and Adoption in the United States. Pew Research Center. https://www.pewresearch.org/internet/fact-sheet/mobile/. Accessed August 14, 2022.