In this week’s New England Journal of Medicine, Kesselheim and colleagues report findings from their randomized study of 269 internists who were asked to interpret scientific abstracts describing three hypothetical clinical trials of three made-up drugs: lampytinib, bondaglutaraz, and provasinab. The participants knew the trials were not real and that the study’s aim was to investigate how disclosure of funding affects physicians’ interpretation of clinical trial research.
The trials were randomly configured as having high, medium, or low methodological rigor, depending on the use of randomization and blinding, sample sizes and dropout rates, length of follow-up, and endpoints. In addition, the trials were randomly identified as being funded by industry, the NIH, or having no external funding.
To the great relief of teachers of evidence-based medicine, who have spent countless hours instructing medical students and house staff in how to evaluate articles in the medical literature, physicians appeared to appropriately assess the trials’ methodological rigor: They were least willing to prescribe drugs tested in low-rigor trials and most willing to prescribe drugs tested in high-rigor trials.
But to the consternation of industry and the Journal’s Editor-in-Chief, who wrote the accompanying editorial, disclosure of industry funding was associated with physicians’ propensity to downgrade the rigor of a trial, their confidence in the results, and their willingness to prescribe the drugs. And physicians had the greatest confidence in NIH-funded trials.
Dr. Drazen writes in his editorial: “We think that decisions about how trials influence practice should be based on the quality of the information conveyed in the full study report.”
I can only partially agree with this statement. Although the full study report is the major consideration in evaluating any study, I believe that physicians were right to be skeptical of industry-funded trials.
Kesselheim and colleagues varied several aspects of the hypothetical trials’ designs to create differences in their rigor. But a great many details and decisions go into designing, conducting, and reporting a trial, and it would be impossible to capture them all in a scientific abstract. In truth, it is impossible to capture them all even in an 8-page article.
Physicians’ experiences with rofecoxib, rosiglitazone, Epogen, gabapentin, olanzapine, and many other pharmaceuticals have taught them the hard lesson that when the stakes are high, with billions of dollars at risk, the stakeholders try to cast the drugs in the most positive light possible.
Maybe negative studies are never published. Maybe adverse events are not reported. Maybe a primary outcome is swapped for a secondary outcome that had a larger magnitude of effect. Maybe the comparator is less potent.
But other decisions do not obviously affect a trial’s apparent rigor: the trial is still randomized and blinded, with a large sample and long follow-up, so at first glance it appears to be of high quality. Yet these decisions distort the trial’s findings, and they have distorted the medical literature.
While we wait for better evidence generated under today’s standards, which require trial registration before enrollment, prespecification of primary and secondary outcome measures and safety outcomes, and reporting of results within a year of trial completion, I think physicians are right to remain skeptical.
Are you with me in siding with these skeptics?