With every new international health crisis—be it Ebola, Zika or COVID-19—comes an infodemic. The World Health Organization defines an infodemic as an “overabundance of information, some accurate and some not, that occurs during an epidemic. It can lead to confusion and ultimately mistrust in governments and public health response.”
We all saw how an overabundance of information (and misinformation) about COVID-19 led to confusion, conspiracies and chaos toward the beginning of the pandemic. Even though many hospitals and health systems have since adjusted their operations to better meet the challenges of the pandemic, there is still much to learn about the virus that causes COVID-19: how it affects different people, how it spreads, how it mutates and more.
Wading Through a Sea of Studies
As of January 28, 2021, more than 180,000 journal articles on COVID-19 had been published, not including more than 12,600 preprints—full drafts of research papers that are shared publicly before they have been peer reviewed—on medRxiv, a website that distributes unpublished studies in the health sciences. Practicing evidence-based medicine and making clinically sound decisions require solid evidence. That is why, early in 2020, COL William Smith, MD, a clinical assistant professor at the University of Washington School of Medicine, and Jasmine Rah, a fourth-year medical student at the university, recognized the need for timely, relevant, peer-reviewed medical literature, gathered experts within the medical field and established the COVID-19 Literature Surveillance Team. The initial goal of the team was to find, review and summarize emerging literature about COVID-19.
Today, the COVID-19 Literature Surveillance Team—or COVID-19 LST—is a volunteer group of more than 120 medical professionals who help their peers stay abreast of the tidal wave of scientific research published each day. The team includes medical students, scientific researchers and physicians who rank, distill and analyze COVID-19-related studies and articles to cut through the noise and bring busy healthcare leaders and decision makers the information they need.
Evaluating COVID-19 Research
How exactly do the volunteers rate the quality of evidence supporting each study or article they review? The COVID-19 LST rates articles using the Oxford Centre for Evidence-Based Medicine (CEBM) Levels of Evidence. The CEBM Levels of Evidence can best be described as a “hierarchy of the likely best evidence.” The framework is designed as a shortcut for busy clinicians, researchers or patients who are looking to find the likely best evidence.
Depending on the research question category (treatment, prognosis, diagnosis or economic/decision analysis), the type and level of evidence will differ. For example, consider the gold standard of a randomized controlled trial, or RCT. A treatment study and a prognosis study ask different research questions. For a treatment question, the highest level of evidence would be Level 1A, a systematic review (with homogeneity) of RCTs, followed by Level 1B, an individual RCT with a narrow confidence interval. Since an RCT would not be appropriate for evaluating a disease prognosis, the highest evidence for a prognosis question would be a Level 2A systematic review (with homogeneity) of cohort studies.
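To make the mapping concrete, the short sketch below shows one way the relationship between question category and top evidence level described above could be represented in code. It is a simplified illustration in Python, not part of the COVID-19 LST's actual workflow; it covers only the levels named in this paragraph rather than the full CEBM table, and the names used are hypothetical.

# Simplified illustration only: maps a research question category to the top
# CEBM evidence levels mentioned in the text above. Not the full CEBM table,
# and not the COVID-19 LST's tooling; names here are hypothetical.
TOP_CEBM_LEVELS = {
    "treatment": [
        ("1A", "Systematic review (with homogeneity) of RCTs"),
        ("1B", "Individual RCT (with narrow confidence interval)"),
    ],
    "prognosis": [
        ("2A", "Systematic review (with homogeneity) of cohort studies"),
    ],
}

def highest_level(category: str) -> str:
    """Return the highest applicable evidence level for a question category."""
    levels = TOP_CEBM_LEVELS.get(category.lower())
    if not levels:
        return "Category not covered in this sketch"
    grade, description = levels[0]
    return f"Level {grade}: {description}"

print(highest_level("Treatment"))  # Level 1A: Systematic review (with homogeneity) of RCTs
print(highest_level("Prognosis"))  # Level 2A: Systematic review (with homogeneity) of cohort studies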
The CEBM grading criteria give a reader the level of evidence of the research; however, they do not guarantee the quality of the research. The levels are not intended to provide a recommendation or a definitive judgment about the quality of evidence. Still, the COVID-19 LST uses the CEBM system to evaluate each study’s strengths and limitations, which can help healthcare leaders decide which research is appropriate for their needs and decisions.
Saving Lives With the Right Information
When the pandemic hit the U.S., the impact was fast and furious. Healthcare leaders across the country were trying to keep up with the latest news on how to best treat and prevent this disease, while also continuing to provide optimal patient care. One person, one organization, one specialty alone cannot come up with all the answers; a problem of this scale requires a team to crowdsource useful information. The COVID-19 LST found a great benefit in a collective approach to evaluating and analyzing the data and science related to this novel virus.
We know several members of the military healthcare system who have served as COVID-19 LST reviewers and/or contributors, and who have used the team’s reports to plan for and anticipate changes in resourcing needs. Additionally, the COVID-19 LST has received feedback from healthcare leaders who have found value in its summaries and case reports about novel treatments and symptoms to watch for and track.
On a personal note, we have used COVID-19 LST reports to inform how we examine practice patterns. The reports have also allowed us to quickly select which articles are worth a deeper dive, which is crucial when the time it takes to learn about a potential treatment option can make the difference in preventing long-term disability or death in patients with COVID-19.
Those who are interested can subscribe to the COVID-19 LST’s daily reports or listen to its podcast on any major podcast platform.
MAJ Chris Armijo, FACHE, is a San Antonio, Texas-based medical operations planner for the U.S. Army. Kimberly Tansey, FACHE, is a health system specialist and business operations data analyst at the Brooke Army Medical Center, also in San Antonio.
The view(s) expressed herein are those of the author(s) and do not reflect the official policy or position of Brooke Army Medical Center, the U.S. Army Medical Department, the U.S. Army Office of the Surgeon General, the Department of the Army, the Department of the Air Force, the Department of Defense or the U.S. Government.