A few days ago, I posted about what I saw as one of the key lessons from the controversy over the U.S. Preventive Services Task Force's (USPSTF) new mammography guidelines: expert panels can sometimes come to different conclusions based on the same evidence.
A reader pointed me to an even more recent USPSTF example that illustrates how the technical decisions experts make about data analysis can lead to very different conclusions.
Earlier this month, USPSTF released a document explaining why its recommendations for aspirin use in primary and secondary prevention of vascular disease differed from the conclusions reached in a May 2009 article in the medical journal The Lancet. The USPSTF recommends preventive use of aspirin in men over 45 and women over 55. The Lancet study authors came to "different conclusions on the net benefit of aspirin in the primary prevention of cardiovascular events," according to the task force.
USPSTF said the Lancet analysis did not consider different effects in men and women, instead pooling everyone together in a single combined analysis. The task force also criticized the researchers' decision to combine several adverse events – such as heart attack and stroke – into a single outcome measure.
USPSTF concludes that, as evidence becomes increasingly complex, "the increasing use of meta-analyses to synthesize evidence will mean that the re-analysis of the same bodies of evidence may lead to conflicting conclusions depending on how researchers choose to combine, stratify and re-analyze the primary studies. While there is a great appeal in combining data to make a simple 'do' or 'don't do' recommendation, this simplicity occurs at the expense of our ability to tailor prevention care to the specific patient."
This again underscores how easy it is to end up with comparative effectiveness research (CER) that is not patient-centered. If we don't get CER right, that is exactly where we will end up – with results that are over-simplified and misapplied, at the expense of our ability to tailor care to the needs of each patient.
A new CER program needs to be truly independent and protected against conflicts of interest. And it needs to include in the governance structure a diverse array of stakeholders from the patient, provider and other communities, so different perspectives can be considered at the front end where it really matters. Any new CER program also should be focused on communicating results, not making policy recommendations or guidelines, to avoid controversies like the one created by the USPSTF mammography guidelines.
Only the Senate bill includes a CER program that meets these requirements. I hope you'll join me in working to see that it is part of health care reform.