
26 Sep 2013

Scrutinizing the PRAMI Trial of “Preventive Angioplasty”

Victor Montori offers his analysis of the PRAMI trial, recently published in The New England Journal of Medicine.

THE STUDY

In a patient-blind trial, 465 patients with STEMI and multivessel CAD were randomized to undergo infarct-artery–only PCI or additional PCI in non-infarct arteries during the initial procedure. Patients with cardiogenic shock, prior coronary artery bypass grafting, significant left main disease, or chronically occluded arteries were excluded. The study was stopped early (mean follow-up, 23 months), when a significantly lower incidence of the primary composite endpoint — cardiac death, MI, or refractory angina — emerged in the group that received the additional (non–infarct-artery) PCI compared with the group that underwent infarct-artery–only PCI (hazard ratio, 0.35; 95% CI, 0.21–0.58). Rates of procedure-related complications were similar in the two groups.

MONTORI’S ANALYSIS

PRAMI has garnered attention for showing that, in patients with STEMI and multivessel CAD, stenting culprit and nonculprit lesions (“preventive” PCI) reduced the risk for the trial’s primary composite endpoint, compared with stenting only the culprit lesion. Most discussions of this trial have focused on its findings rather than its methods. The crucial question to be addressed: What should our confidence in this estimate of effect be, given that, as the article notes, “By January 2013, the results were considered conclusive by the data and safety monitoring committee, which recommended that the trial be stopped early”?

As my study group has found, stopping trials early because of an unexpectedly large treatment effect is a sure way to overestimate that effect, particularly when the number of events is low. The overestimation due to truncation is small when more than 500 outcomes have accrued, moderate for 200 to 500, and large for fewer than 200. We have also found that stopping trials early increases a trial's chances of being published in a top-tier medical journal and of receiving rapid dissemination and incorporation into guidelines. The interpretational challenges increase when a trial is stopped on the basis of the effect of therapy on a composite endpoint: stopping early guarantees an imprecise assessment of the effect of therapy on the infrequent — and often more important — outcomes that make up the composite endpoint.
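To make the truncation problem concrete, here is a minimal simulation sketch in Python. The parameters are illustrative assumptions, not PRAMI's design or data: two arms with a known, modest true benefit, naive interim looks, and a rule that stops as soon as the interim test crosses P<0.05. The trials that happen to stop early report, on average, a larger benefit than the truth.

import numpy as np

rng = np.random.default_rng(0)

TRUE_RISK_CONTROL = 0.20   # assumed event risk in the control arm
TRUE_RISK_TREATED = 0.15   # assumed event risk in the treated arm (true RR = 0.75)
N_PER_ARM = 2000           # planned enrollment per arm
LOOK_EVERY = 100           # interim look after every 100 patients per arm
N_TRIALS = 2000            # number of simulated trials

def z_two_proportions(e1, n1, e0, n0):
    # Naive two-proportion z statistic, with no penalty for repeated looks.
    p1, p0 = e1 / n1, e0 / n0
    pooled = (e1 + e0) / (n1 + n0)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return 0.0 if se == 0 else (p1 - p0) / se

stopped_rr, completed_rr = [], []
for _ in range(N_TRIALS):
    treated = rng.random(N_PER_ARM) < TRUE_RISK_TREATED
    control = rng.random(N_PER_ARM) < TRUE_RISK_CONTROL
    stopped = False
    for n in range(LOOK_EVERY, N_PER_ARM + 1, LOOK_EVERY):
        e1, e0 = treated[:n].sum(), control[:n].sum()
        # Stop for "efficacy" the first time the treated arm looks significantly better.
        if e0 > 0 and z_two_proportions(e1, n, e0, n) < -1.96:
            stopped_rr.append((e1 / n) / (e0 / n))
            stopped = True
            break
    if not stopped and control.sum() > 0:
        completed_rr.append(treated.sum() / control.sum())

print(f"true risk ratio:                  {TRUE_RISK_TREATED / TRUE_RISK_CONTROL:.2f}")
print(f"mean RR in trials stopped early:  {np.mean(stopped_rr):.2f} (n={len(stopped_rr)})")
print(f"mean RR in trials run to the end: {np.mean(completed_rr):.2f} (n={len(completed_rr)})")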

The PRAMI trial illustrates all of these points. First, it was stopped after only 74 outcomes had accrued. Second, despite its small size, the trial was published in the NEJM. Third, the sparse events were spread across components of the composite endpoint that differed markedly in their frequency — cardiac death (14 events), nonfatal MI (27 events), and refractory angina (42 events) — and in their importance to patients. Moreover, the precision of these estimates and the accompanying P values are extremely sensitive to the addition of just a few events. How many more MIs would need to have occurred in the preventive-PCI arm to render the effect on nonfatal MI (P=0.009) nonsignificant? Three. Just three.
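That fragility is easy to check for any 2-by-2 result. The Python sketch below adds events one at a time to the better-performing arm and reruns Fisher's exact test until the comparison is no longer significant. The per-arm split shown (7 vs. 20 of the 27 nonfatal MIs, across two roughly equal arms of the 465-patient trial) is an illustrative assumption, not the published per-arm PRAMI counts.

from scipy.stats import fisher_exact

def fragility(events_a, n_a, events_b, n_b, alpha=0.05):
    # Add events to the arm with fewer events until the two-sided
    # Fisher exact P value rises to alpha or above.
    added = 0
    while True:
        table = [[events_a + added, n_a - events_a - added],
                 [events_b, n_b - events_b]]
        _, p = fisher_exact(table)
        if p >= alpha:
            return added, p
        added += 1

# Hypothetical split of the 27 nonfatal MIs across two roughly equal arms.
added, p = fragility(events_a=7, n_a=232, events_b=20, n_b=233)
print(f"additional events needed to lose significance: {added} (P = {p:.3f})")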

One might argue that we should not worry too much about these small trials, given that they can later be pooled in meta-analyses. Our group has shown that such an exercise is fraught with problems: trials stopped early tend to carry undue weight in meta-analyses because publication bias works against negative trials. In addition, further trials testing the same question are delayed because it is assumed to be no longer ethical, for example, to randomize patients to forgo preventive PCI. As a result, the trials that were stopped early gain even more weight in meta-analyses.

Recall that PRAMI’s data and safety monitoring board determined that it was no longer appropriate to continue the trial as planned. So how does one now justify further confirmatory trials? This is a false dilemma: the duty to protect people in the trial cannot exceed the duty to protect the much larger population that could be exposed to a potentially harmful intervention supported by an inflated estimate of effect. The imprecise and potentially overestimated results of PRAMI must be tested. Doing so is feasible, ethical, and necessary — and it should have appeared as such to the data and safety monitoring board.

Confidence in the estimates from PRAMI should be tempered accordingly, to account for the factors described above. What should the researchers have done to prevent this loss of confidence in the results of their trial? They should have decided not to introduce efficacy-stopping rules. If that was not feasible, they should have set up stopping rules that would be triggered only after a large number of outcomes had accrued.
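As a rough illustration of what such a rule could look like, the short Python sketch below gates any efficacy stop on a minimum number of accrued primary-endpoint events plus a stringent interim threshold. The 500-event floor echoes the thresholds cited above, and the Haybittle-Peto-style P < 0.001 cutoff is an assumption for illustration; neither was a prespecified PRAMI rule.

def should_stop_for_efficacy(total_events, interim_p,
                             min_events=500, stringent_alpha=0.001):
    # Act on an interim "significant" result only when the evidence is
    # both large (enough events) and extreme (very small P value).
    return total_events >= min_events and interim_p < stringent_alpha

print(should_stop_for_efficacy(total_events=74, interim_p=0.0004))   # False: too few events
print(should_stop_for_efficacy(total_events=520, interim_p=0.0004))  # True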

JOIN THE DISCUSSION

Do you agree with Dr. Montori’s analysis of the PRAMI trial?

Click here for a previously published interview with PRAMI’s lead investigator.