Addressing Inequities in COVID-19 Testing

June 25, 2020

By Eric Perakslis, PhD & Erich S. Huang, MD, PhD

As states try to reopen their economies against a background of surging COVID cases, we’ve been reminded why we are all taught as children not to run into a burning building, but instead to call the fire department and let the experts handle things. While testing capacity has improved in most areas, experts estimate that we still require roughly 10 times the COVID-19 diagnostic testing capacity available today, and proper management of COVID testing data remains an unsolved, multidimensional problem. But amid the general rush to fill this gap, our team, one highly experienced with pandemic response and outbreak informatics, is seeing all the usual “tech comorbidities” that occur when technologies must be rapidly scaled during emergency situations.

Specific issues affecting the scaling of COVID testing include rapid deployment of tests with questionable accuracy, supply chain shortages, and waste, fraud, and abuse. But even where available testing capacity is adequate to meet needs, the data is not yet flowing properly from testing sites to local health authorities and onward to the CDC. In this post, we’ll dissect the specific challenges of COVID-19 testing data and knowledge management, and provide an update on our efforts to fix this problem on a national level.


More Testing, More Challenges

Over the last few months, we’ve been working with the World Economic Forum’s (WEF) COVID Action Platform, the Council of State and Territorial Epidemiologists (CSTE), the Rockefeller Foundation, and multiple manufacturers to assist and advise on all aspects of COVID testing deployment and data management. Digging into specific data challenges, we see three distinct scenarios emerging based on location: testing in established healthcare settings (clinics, commercial labs, pharmacies), testing in retail settings, and testing at “pop-up” sites (parking lots, prisons, homeless shelters, and other isolated locations).

If healthcare data continues to be about haves and have-nots, health disparities themselves will continue to exist. 

Testing in traditional clinical settings presents fewer challenges with data management. For the most part, these sites already had processes and systems for passing data on known pathogen tests to their local public health offices via electronic health record (EHR) and/or laboratory information management systems (LIMS). This is also true for most commercial labs, pharmacies, Walmart locations, and other similar facilities: these sites either had the necessary infrastructure already in place or had the resources to stand it up quickly.


Things Are Not Equal

Unfortunately, the same cannot be said for “pop-up” testing sites. Often located in poorly resourced inner city or deeply rural areas, these sites are struggling to deliver adequate testing and to properly aggregate and report testing data, despite the fact that a steady flow of real-time COVID testing data is essential for public health officials to prioritize help. COVID stories, such as those emerging from the Navajo Nation, are heartbreaking and infuriating in their raw injustice. In many ways, these public-health tragedies reflect the inequity that has boiled over onto our streets over the last few weeks – and it is this exact problem that we seek to solve. But how?

[Image: Close-up photo of an American 110-volt wall outlet with a two-pronged plug being inserted into one of the sockets. Image credit: Clint Patterson via Unsplash]

A critical first step involves enabling electronic data capture and transmission to local health authorities. While this may sound simple, actually accomplishing it is a nontrivial task. Many COVID diagnostic testing instruments, such as the Abbott ID Now, were designed for clinic/laboratory settings that are already equipped with data management capabilities. In fact, the ID Now machine has only a keypad similar to the one on an ATM, and the device itself is incapable of storing patient information. In fairness, current testing environments could not have been predicted even a year ago, and these design attributes would normally count as features, not bugs. But the result is that what these machines do “know” about any given test comprises a patient or sample ID and the test result, which falls far short of the minimal data set needed by public health officials.

After studying this problem, we decided that what is needed is a simple, manufacturer- and technology-agnostic “Ask at Order” system. Under this approach, a basic set of questions would be asked and answered at the point of COVID testing, then merged with the sample ID, patient ID, and test results. This information would be stored in a database and automatically transmitted to the governing local health authority for case creation, contact tracing, and aggregation for reporting to the CDC. We have built a prototype mobile app that can do this, and one of us (Eric P.) travels to Utah this week to test the approach and software in three under-resourced COVID testing sites: a rural clinic, an organization that cares for the homeless, and a rural hospital that serves local tribal nations, including the Navajo Nation.
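
To make the idea concrete, here is a minimal sketch in Python of what an “Ask at Order” record might look like, assuming a hypothetical set of point-of-testing questions. The field names are illustrative, not the actual schema used in our prototype app.

    # Minimal "Ask at Order" record: questions answered at the point of
    # testing, merged with the sample ID and result for onward transmission.
    # Field names here are hypothetical and illustrative only.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class AskAtOrderRecord:
        sample_id: str        # ID on the specimen/cartridge
        patient_name: str     # asked and answered at the testing site
        date_of_birth: str    # ISO 8601 date, e.g. "1984-07-02"
        zip_code: str         # lets the health authority route the case
        phone: str            # needed for contact-tracing follow-up
        test_device: str      # e.g. "Abbott ID NOW"
        result: str           # "positive" / "negative" / "invalid"

    record = AskAtOrderRecord(
        sample_id="UT-000123",
        patient_name="Jane Doe",
        date_of_birth="1984-07-02",
        zip_code="84511",
        phone="555-0100",
        test_device="Abbott ID NOW",
        result="negative",
    )

    # Serialize for storage and transmission to the local health authority.
    payload = json.dumps(asdict(record), indent=2)
    print(payload)

The point of the structure is that everything the instrument cannot store, the app captures once, at the moment of the order, and keeps attached to the sample ID from then on.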


APIs, Infrastructure & the Need for a “Plug and Play” Approach to Data

Across the US healthcare enterprise, we need to build the health data equivalent of the 110V power socket. In your house, this socket provides power regardless of whether you want to plug in a desk lamp, a laptop, or a refrigerator. Wouldn’t it be great if, just as you plug an Abbott ID Now machine into the wall, there were a standardized equivalent of a plug for data? And if that plug worked identically for a Becton Dickinson device?

Outside the world of healthcare, this kind of “plug-in” data technology is ubiquitous in our daily lives. When you ask for directions from Google Maps, it treats all requests identically, whether they come from an Apple iPhone, an Android device, or a desktop machine running any number of operating systems. This “plug-in” architecture for data, also known as an “application programming interface” (API), makes it very easy for developers to write applications that use the Google Maps API. The fact that your favorite local restaurant’s website can provide you directions shows how easy Google has made it to build on their plug-in data architecture for maps.
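
As a concrete illustration, here is a hedged sketch of how any client, on any platform, plugs into the same Google Maps Directions endpoint in the same way. The API key is a placeholder, and the origin and destination are arbitrary examples.

    # Every client "plugs into" the same socket: one endpoint, one request
    # shape, regardless of who is asking. YOUR_API_KEY is a placeholder for
    # a real key issued by Google.
    import requests

    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={
            "origin": "Duke University, Durham, NC",
            "destination": "Durham County Department of Public Health",
            "key": "YOUR_API_KEY",
        },
    )

    # The response shape is identical whether this request came from an
    # iPhone app, an Android app, or a restaurant's website.
    print(resp.json().get("status"))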

We need the same approaches in healthcare broadly, and in managing the SARS-CoV-2 pandemic specifically. This would have the salutary effects of improving the standardization of testing data and democratizing data flows by clearly documenting the “plug.”


Connecting the Wiring

[Image: Traffic signal with all three lights illuminated. Image credit: Pixabay]

So, how specifically would we go about this? We see two key steps to this process:

  1. First, we create open, standardized, and publicly accessible templates for reporting testing data. One such template would align with the set of simple questions (like the ones we described above) to be asked and answered at the point of COVID testing. This particular template would itself be just one of a comprehensive set of interlocking templates that represent a full set of standardized data elements for testing data that everyone at a national level can understand and work with.
  2. Second, these templates could be formatted in a way that makes them easy to transmit across a network using appropriately secure protocols. Because these templates would embrace a data standard such as the Fast Healthcare Interoperability Resources (FHIR) standard hosted by HL7, the organization responsible for many healthcare information standards, they would be easy for local, state, and national public health authorities to use (see the sketch after this list).
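
As a minimal sketch under these assumptions, here is roughly what a FHIR-formatted test result might look like in transit. The endpoint URL is a placeholder, and a production Observation resource would carry more elements (specimen, performer, identifiers) than shown here.

    # A pared-down FHIR Observation for a SARS-CoV-2 test result.
    # The receiving endpoint URL below is hypothetical.
    import requests

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "94500-6",  # SARS-CoV-2 RNA, respiratory specimen
                "display": "SARS-CoV-2 (COVID-19) RNA [Presence] in Respiratory specimen",
            }]
        },
        "subject": {"reference": "Patient/UT-000123"},
        "effectiveDateTime": "2020-06-25T14:30:00Z",
        "valueCodeableConcept": {
            "coding": [{
                "system": "http://snomed.info/sct",
                "code": "260415000",  # "Not detected"
                "display": "Not detected",
            }]
        },
    }

    # Transmit to a (hypothetical) public-health FHIR endpoint over TLS.
    resp = requests.post(
        "https://fhir.health.example.gov/Observation",  # placeholder URL
        json=observation,
        headers={"Content-Type": "application/fhir+json"},
    )
    print(resp.status_code)

Because the resource shape is standardized, every health authority that documents this “plug” can accept the same payload from any testing site, regardless of which manufacturer’s instrument produced the result.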

Ultimately, just as electricity, sewage, and road systems all embrace common standards, we need to adopt a mindset that understands healthcare data as a utility. (After all, the colors of traffic lights don’t change from municipality to municipality.) And, similar to other utilities, we need to see health data as a necessary part of the infrastructure whose provision to our most vulnerable communities allows us to begin to address fundamental disparities, in this case in healthcare access and effectiveness. If healthcare data continues to be about haves and have-nots, health disparities themselves will continue to exist. This is an especially poignant point when we remember that many underserved communities suffer from inadequate infrastructure, and some tribal nations lack access to basic services such as running water or proper sanitation. In truth, we are already a nation of haves and have-nots. But there’s a lot we can do to improve on this.

Getting back to the current pilots, we hope and expect that we’ll be able to establish active data feeds from remote test sites directly to their local public health departments. This will enable rapid case creation/assessment and contact tracing, as well as allowing essential services to be provided when and where they are needed, instead of weeks or months late. The work and data flows for each site will be studied and optimized to reduce redundant data entry, accelerate processing, improve accuracy, and eliminate any other undue burdens. Further, it is clear that COVID-19 viral diagnostic testing is just the beginning of what our new world will look like. Home testing, antibody testing, and employer testing are all imminent, and each will require a rapid, manufacturer-agnostic data management solution.

Those of us at Duke and our partners at WEF and CSTE hope that this lightweight model will be quickly piloted elsewhere and take hold as a national model, ensuring that COVID testing data is timely, accurate, comprehensive and, most importantly, directly helpful to the community from which it originates.

Wish us luck.

Eric & Erich
