Quintiles leads researchers to checklist for evaluating study quality
The Good ReseArch for Comparative Effectiveness (GRACE) checklist was published as part of a peer-reviewed study in the March issue of the Journal of Managed Care & Specialty Pharmacy.
Although thousands of observational studies are undertaken each year, their quality varies because there are no agreed-upon standards for conducting and evaluating this research in the context of a particular study purpose, according to the study.
In the absence of such standards, the GRACE checklist offers 11 items on data and research methods that can be used as an initial screening tool to separate observational studies that meet baseline quality criteria from those that do not.
Dr. Nancy Dreyer, Global Chief of Scientific Affairs and SVP at Quintiles and lead author of the study, told Outsourcing-Pharma that the idea for the checklist originated in 2008 to address decisions faced by researchers that aren’t necessarily regulatory-related.
“The ultimate form of a comparative effectiveness study is a randomized trial, but we shouldn’t disregard other types of evidence,” she told us. “You don’t need a randomized trial for everything, especially for payers.”
Observational studies in this context are defined as those performed prospectively, with researchers observing patients as they are treated, but they can also be retrospective analyses of existing data, such as medical chart reviews or databases.
But the study also cautions against using the checklist to evaluate meta-analyses. Dreyer clarified that the checklist “doesn’t give a pass/fail score – we don’t go that far with the checklist. But you have to look to see what’s out there.” She added that the six data questions from the checklist, in particular, can be used by people without a lot of advanced medical training.
The GRACE checklist is "based on existing literature and guidance from experts with extensive experience in the conduct and utilization of observational comparative effectiveness research," and is meant to help determine which observational studies should be considered to support decision-making.
According to the study in which the GRACE checklist was published, the “most consistent predictor of quality relates to the validity of the primary outcomes measurement for the study purpose. Other consistent markers of quality relate to using concurrent comparators, minimizing the effects of bias by prudent choice of covariates, and using sensitivity analysis to test robustness of results.”
But Dreyer also noted that although “no scoring is provided, study reports that rate relatively well across checklist items merit in-depth examination to understand applicability, effect size, and likelihood of residual bias.”
“We need to decide what evidence do we have to inform practices and patients on what treatment is really better … it’s not all or nothing,” Dreyer said. “If you’re willing to open up the world and say non-randomized trials can be useful, we need simple guidance to say a study is good enough to pay attention to.”