Dangerous Deception — Hiding the Evidence of Adverse Drug Effects
The New England Journal of Medicine
September 30 is becoming a day of infamy for drug safety. On that date in 2004, Merck announced that rofecoxib (Vioxx) doubled the risk of myocardial infarction and stroke, and the company withdrew the drug from the market after 5 years of use in more than 20 million patients. On September 30, 2006, a front-page article in the New York Times reported that the Food and Drug Administration (FDA) had issued a warning that the antifibrinolytic drug aprotinin, widely used to reduce perioperative bleeding in patients undergoing cardiac surgery, could cause renal failure, congestive heart failure, stroke, and death.
Some experts had been concerned about aprotinin (Trasylol) ever since its approval in 1993.1 As Hiatt explains in his Perspective article in this issue of the Journal (pages 2171–2173), one of two epidemiologic studies reported early this year provided support for this concern. In an observational study involving 4374 patients who underwent coronary revascularization,2 Mangano et al. found that patients who were given aprotinin had an incidence of postoperative renal failure requiring dialysis that was more than twice that among patients who received different agents. Among patients undergoing uncomplicated coronary-artery surgery, those given aprotinin had a 55% increase in the incidence of myocardial infarction or heart failure and a 181% increase in the incidence of stroke or encephalopathy. The authors advised against further use of the drug, since safer, cheaper alternatives are available.
After the study was published, the FDA moved to convene an advisory committee to reassess the drug's safety and assembled the relevant data. The committee met on September 21, reviewed the available evidence, and concluded that there was no need for additional warnings on the drug's official labeling.
What put aprotinin on the front page on September 30, however, was the revelation that its manufacturer, Bayer, had hired a private contract research organization to perform its own large observational study of postoperative complications in patients given the drug. The analysis, completed in time for the FDA meeting, reached conclusions similar to those of Mangano et al. It, too, adjusted for a wide variety of clinical characteristics and showed that patients who received aprotinin had higher mortality rates and substantially more renal damage than those given other treatments. But neither Bayer nor its contractor had provided the report to the FDA or even acknowledged its existence before the meeting.
Many aspects of the aprotinin saga are familiar to observers of the drug-evaluation process: a product is approved because it is more effective than placebo, worries emerge about its safety, few or no adequately powered controlled trials are conducted to address these issues, and payers spend huge sums on the drug, despite the dearth of evidence that it is better than older, cheaper agents. The health care system has a hard time performing drug-safety analyses, in large part because it relies on the pharmaceutical industry to conduct most research on the risks and benefits of medications. It is naive to expect companies to voluntarily fund studies that could sink lucrative products, the FDA lacks the regulatory clout to require them, and despite the $220 billion we spend on drugs each year, we apparently can't find the resources to provide public support for these studies, even if the results could be of great clinical importance and save millions of dollars. Although a large randomized trial would have provided a valid means of comparing aprotinin with other treatments, no such study has been undertaken on the necessary scale.
The study by Mangano et al. was observational — its subjects were not randomly assigned to the four study groups. Instead, the investigators reviewed numerous variables for each patient and used a propensity score and multivariable methods to adjust for underlying differences among the groups. Although this approach has important limitations, observational studies often provide the only data available for evaluating critical safety issues.
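For readers unfamiliar with the technique, the sketch below shows the general shape of such an analysis rather than the authors' actual methods: a logistic model first estimates each patient's propensity to receive the drug from measured baseline covariates, and the outcome model then adjusts for that score. The data, variable names, and effect sizes are simulated purely for illustration.

# A minimal, hypothetical sketch of propensity-score adjustment in an
# observational drug-safety comparison; all data here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Three measured baseline covariates (e.g., age, renal function, comorbidity burden).
covariates = rng.normal(size=(n, 3))

# Confounding by indication: sicker patients are more likely to receive the drug.
treatment_logit = 0.8 * covariates[:, 0] + 0.5 * covariates[:, 1]
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-treatment_logit)))

# Simulated outcome depends on the covariates; the drug itself has no true effect here.
outcome_logit = -2.0 + 1.0 * covariates[:, 0] + 0.7 * covariates[:, 2]
event = rng.binomial(1, 1.0 / (1.0 + np.exp(-outcome_logit)))

# Step 1: estimate each patient's propensity to receive the drug from measured covariates.
ps_model = sm.Logit(treated, sm.add_constant(covariates)).fit(disp=0)
propensity = ps_model.predict(sm.add_constant(covariates))

# Step 2: estimate the drug-outcome association, with and without adjustment for the score.
unadjusted = sm.Logit(event, sm.add_constant(treated)).fit(disp=0)
adjusted = sm.Logit(event, sm.add_constant(np.column_stack([treated, propensity]))).fit(disp=0)

print("Unadjusted odds ratio:  ", round(float(np.exp(unadjusted.params[1])), 2))
print("Propensity-adjusted OR: ", round(float(np.exp(adjusted.params[1])), 2))

In a real analysis the score would more often be used for matching, stratification, or weighting than for simple covariate adjustment, but the logic is the same: the comparison is only as good as the measured variables that feed the model.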
A confirmatory observational study would have lent key support to the conclusions of Mangano et al. — if its findings had been aired. Bayer has admitted that its suppression of the study was "a mistake," but this is not the first time the company has behaved in this manner. When Bayer was accused of hiding data unfavorable to its cholesterol-lowering drug cerivastatin (Baycol) before it was taken off the market in 2001, litigation uncovered a memorandum from a company executive arguing against performing a study of its risk. "If the FDA asks for bad news, we have to give," read the memo, "but if we don't have it, we can't give it to them."3
Other companies have behaved similarly. Although Merck steadfastly denied that Vioxx increased the risk of myocardial infarction while the drug was on the market, it commissioned two epidemiologic studies of the relationship. My colleagues and I performed one of the studies, but when it confirmed an increased risk, Merck dismissed the findings and assailed the methods that it had previously accepted.4 The second study (by the same contract research group involved in the aprotinin affair) also confirmed the association, but its results were not made public until after the drug had been withdrawn from the market.
The problem is not limited to observational studies. A few years ago, it was discovered that some companies had funded multiple clinical trials of their selective serotonin-reuptake inhibitor antidepressants but reported the results of only the favorable trials — distorting the evidence base physicians use in choosing drugs. But the issue is thornier for epidemiologic analyses. Companies can conduct them secretly, even in-house, with the use of a purchased proprietary database, making the results even easier to conceal.
Carefully performed observational studies may provide the best information available about side effects, but propensity scores and other multivariable techniques cannot always control for the selection bias that is inevitable when treatment is not assigned at random, making the transparency of methods and raw data even more important than in randomized trials. Rather than yielding "virtual randomized trials," the methods available for controlling confounding in observational research can sometimes look better than they work.5 Thus, these studies can inform our understanding only after their methods have been scrutinized closely, fairly, and objectively — but only if the data are available.
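The limitation is easy to demonstrate with a hypothetical simulation along the following lines: when a variable that drives both drug receipt and the outcome is never recorded in the database, the propensity score cannot balance it, and the "adjusted" estimate remains biased even though the analysis looks rigorous. All names and numbers below are invented for illustration.

# Hypothetical illustration of residual confounding: an unmeasured variable
# influences both treatment choice and outcome, so adjusting for a propensity
# score built only from measured covariates does not remove the bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20000

measured = rng.normal(size=n)    # a covariate recorded in the claims database
unmeasured = rng.normal(size=n)  # a severity marker the analyst never sees

# Both variables influence who receives the drug...
p_treated = 1.0 / (1.0 + np.exp(-(0.6 * measured + 1.2 * unmeasured)))
treated = rng.binomial(1, p_treated)

# ...and both influence the outcome; the drug itself has no true effect (true OR = 1.0).
p_event = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * measured + 1.0 * unmeasured)))
event = rng.binomial(1, p_event)

# The propensity score can use only the measured covariate.
ps_fit = sm.Logit(treated, sm.add_constant(measured)).fit(disp=0)
propensity = ps_fit.predict(sm.add_constant(measured))

adjusted = sm.Logit(event, sm.add_constant(np.column_stack([treated, propensity]))).fit(disp=0)

# Despite the adjustment, the estimated odds ratio stays well above the true value of 1.0.
print("Propensity-adjusted OR with an unmeasured confounder:",
      round(float(np.exp(adjusted.params[1])), 2))

Nothing in the output flags the problem; only scrutiny of which variables were, and were not, available can reveal it, which is why open access to methods and data matters so much.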
On September 30, 1982, six people in the Chicago area died after taking acetaminophen (Tylenol) that had been laced with cyanide. The tragedy riveted the country's attention for months. We should be able to muster at least a fraction of that concern to address more clinically relevant adverse drug effects that could sicken or kill thousands of patients. How can we capture such interest in less sensational problems of medication safety? A good start would be to make a national commitment to publicly supported studies of drug risks so that no company could take possession of critical findings for its own purposes. The results of that research could be discussed openly at an annual conference on the risks and benefits of drugs. To keep everyone's attention focused on medication safety, perhaps the conference could be held every year on September 30.
Source Information
Dr. Avorn is a professor of medicine at Harvard Medical School and chief of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women's Hospital — both in Boston.
An interview with Dr. Avorn can be heard at www.nejm.org.
References
1. Statement on aprotinin. Rockville, MD: Food and Drug Administration, December 30, 1993. (Accessed November 2, 2006, at http://www.fda.gov/bbs/topics/NEWS/NEW00453.html.)
2. Mangano DT, Tudor IC, Dietzel C. The risk associated with aprotinin in cardiac surgery. N Engl J Med 2006;354:353-365.
3. Berenson A. Trial lawyers are now focusing on lawsuits against drug makers. New York Times. May 18, 2003.
4. Burton TM. Merck takes author's name off Vioxx study. Wall Street Journal. May 18, 2004.
5. Stürmer T, Schneeweiss S, Brookhart MA, Rothman KJ, Avorn J, Glynn RJ. Analytic strategies to adjust confounding using exposure propensity scores and disease risk scores. Am J Epidemiol 2005;161:891-898.
Jerry Avorn, M.D.