TCPR: Dr. Oransky, as Deputy Editor of The Scientist, I know that you spend a great deal of time looking at medical statistics, and you do an excellent job of making these concepts understandable in your column in CNS News, Statistically Speaking. It seems that recently we've been seeing a lot of studies in psychiatry talking about “relative risk,” “absolute risk,” “number needed to treat,” and “number needed to harm.” I’m hoping you can help bring some clarity to these terms.
Dr. Oransky: It is confusing to sort through these concepts. And pharmaceutical companies often report risks in ways that make certain study outcomes seem more impressive than they actually are. For example, I saw an Eli Lilly advertisement for Evista (raloxifene) announcing that there was a “68% lower risk” of new vertebral fractures in women taking the drug. That’s certainly an impressive-sounding number!
TCPR: It sounds impressive to me. What was misleading about it?
Dr. Oransky: When I actually looked at the study (Arch Intern Med 2002;162:1140-1143), I saw that there were indeed 19 fractures in the placebo group but only six in the treatment group, meaning that you’re preventing 19 - 6 = 13 fractures by taking Evista. To get their 68% risk reduction, you divide the 13 prevented fractures by the 19 fractures in the placebo group: 13/19 is about 68%.
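To make the arithmetic concrete, here is a minimal Python sketch of that relative risk reduction calculation, using the fracture counts from the study cited above:

```python
# Relative risk reduction (RRR) from the raloxifene fracture counts above.
placebo_fractures = 19  # new vertebral fractures in the placebo group
drug_fractures = 6      # new vertebral fractures in the raloxifene group

fractures_prevented = placebo_fractures - drug_fractures  # 19 - 6 = 13
rrr = fractures_prevented / placebo_fractures             # 13/19 ≈ 0.684

print(f"Fractures prevented: {fractures_prevented}")      # -> 13
print(f"Relative risk reduction: {rrr:.0%}")              # -> 68%
```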
TCPR: What’s wrong with those statistics? They sound kosher to me.
Dr. Oransky: They are, but keep reading. When you look at Table 2 of the study, you see that there were 2,292 subjects in the placebo group and 2,259 in the treatment group. Doing the math, that gives you a rate of new fractures of 0.8% (19/2,292) in the placebo group and 0.3% (6/2,259) in the drug group. So the absolute difference in risk is actually 0.8% – 0.3%, or only 0.5%.
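Extending the same sketch with the group sizes from Table 2 shows how much smaller the absolute numbers are. Note that the 0.5% quoted above comes from subtracting the rounded risks; the unrounded difference is about 0.56%:

```python
# Absolute risk in each arm, and the absolute risk reduction (ARR),
# using the group sizes from Table 2 of the study.
placebo_n, drug_n = 2292, 2259
placebo_fractures, drug_fractures = 19, 6

placebo_risk = placebo_fractures / placebo_n  # 19/2292 ≈ 0.8%
drug_risk = drug_fractures / drug_n           # 6/2259  ≈ 0.3%
arr = placebo_risk - drug_risk                # ≈ 0.0056, roughly the 0.5% above

print(f"Placebo risk: {placebo_risk:.1%}")    # -> 0.8%
print(f"Drug risk: {drug_risk:.1%}")          # -> 0.3%
print(f"Absolute risk reduction: {arr:.2%}")  # -> 0.56%
```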
TCPR: I see. So an absolute risk reduction of 0.5% sounds much less impressive than a relative risk reduction of 68%.
Dr. Oransky: Right, because the absolute risk is computed from the whole population, whereas the relative risk uses only those who have the outcome of interest in a study.
TCPR: And which number is more relevant to patients?
Dr. Oransky: Generally, the absolute risk is more relevant, because when you are deciding whether it makes sense to take a new drug that might be expensive and have side effects, you want to know what the chances are that you’re going to get the disorder in the first place. If the chances are infinitesimal that you’re going to have a vertebral fracture, for example, it may not make sense for you to take a drug like Evista, or to prescribe the drug to such a patient if you are a physician. However, if you knew only the relative risk reduction of 68%, chances are good you’d be tempted to take the drug.
TCPR: Can you give us an example of this in psychiatry?
Dr. Oransky: The classic example of this issue in psychiatry is the recent controversy about whether antidepressant use increases the risk of suicidality in children.
TCPR: In that case, what we heard was that antidepressants double the risk of suicidality in children, and that led to antidepressant makers having to add a black box warning about this risk to their package inserts.
Dr. Oransky: Yes, and in the United Kingdom, the response was even more restrictive. Initially, the UK’s drug regulatory agency announced that all SSRIs except fluoxetine were contraindicated in children and adolescents with depression. They’ve pulled back from that recently, and now allow the drugs to be prescribed, but require that they be used in conjunction with psychotherapy.
TCPR: How should we interpret this issue and the statistics behind it?
Dr. Oransky: Recently, a study was published in Archives of General Psychiatry reporting the risk ratios for suicidal events in all pediatric antidepressant trials submitted to the FDA (Hammad TA et al, 2006;63:332-339).
TCPR: Is “risk ratio” the same thing as “relative risk”?
Dr. Oransky: Yes it is. In this case, the risk ratio is defined as the percent of patients having suicidal events in the medication group divided by the percent of patients having suicidal events in the placebo group. If the percentages were exactly the same, then the risk ratio would be equal to one. If there was double the percent of suicidal events in the medication group, then the risk ratio would be 2, and so on.
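As a quick illustration of that definition, here is a small Python sketch; the event rates below are hypothetical placeholders, not data from any trial:

```python
def risk_ratio(drug_rate: float, placebo_rate: float) -> float:
    """Risk ratio (relative risk): event rate on drug divided by event rate on placebo."""
    return drug_rate / placebo_rate

# Hypothetical rates for illustration only:
print(risk_ratio(0.02, 0.02))  # identical rates -> 1.0
print(risk_ratio(0.04, 0.02))  # double the rate on drug -> 2.0
```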
TCPR: And what were the numbers in the recent analysis?
Dr. Oransky: The analysis reported that the risk ratio was 1.95, meaning that antidepressant treatment conferred 1.95 times the risk of suicidal events compared with placebo.
TCPR: But what about the absolute risk?
Dr. Oransky: Interestingly, the study did not report the absolute risk.
TCPR: But isn’t that a vital piece of information for putting the findings into context?
Dr. Oransky: It certainly is, and in my opinion they should have reported the absolute risk. Not to defend the authors, who are FDA scientists, but when you’re doing a meta-analysis, it isn’t always possible to report an absolute risk given that you’re looking at different groups. And they do acknowledge reasons for focusing on risk ratios in the paper. But from the original FDA advisory (http://www.fda.gov/cder/drug/antidepressants/SSRIPHA200410.htm) we can get a pretty good idea of the absolute numbers involved here, which they could have reported as a potential benchmark. About 4,400 patients were involved in the 24 antidepressant trials, and about 4% of drug-treated patients had a suicidal event, versus about 2% of the placebo-treated patients, yielding a risk ratio of 4/2, or 2, which is close to the 1.95 reported in the most recent synthesis of the data. In other words, the relative increase in the risk of a suicidal event due to antidepressants is 95%.
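A back-of-the-envelope check of those figures, as a Python sketch using the round numbers from the advisory:

```python
# Approximate rates from the FDA advisory cited above.
drug_rate = 0.04     # ~4% of drug-treated patients had a suicidal event
placebo_rate = 0.02  # ~2% of placebo-treated patients did

risk_ratio = drug_rate / placebo_rate         # 2.0, close to the reported 1.95
relative_increase = risk_ratio - 1            # 1.0 with these round numbers;
                                              # with RR = 1.95 it is 0.95, i.e. 95%
absolute_increase = drug_rate - placebo_rate  # 0.02 -> 2 percentage points

print(risk_ratio, relative_increase, absolute_increase)
```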
TCPR: But using my newfound statistical knowledge, the absolute increase in the risk of a suicidal event is only 4% minus 2%, or 2%. That's a lot less than 95%!
Dr. Oransky: That’s right. And when you consider that there were no completed suicides in any of the trials, and that “suicidality” was defined rather broadly, you can see that the actual danger of antidepressants may not be as severe as originally thought.
TCPR: What about the “number needed to treat”? What does that mean in this example?
Dr. Oransky: Actually, in this example the more relevant statistic is the “number needed to harm” (NNH), that is, the number of children who would have to be exposed to an antidepressant in order to cause one episode of suicidality. The calculation is simple, as long as you know the increase in absolute risk: you simply divide that percentage into 100. So in this example, assuming that the increase in absolute risk is 2%, the NNH is 100/2 = 50. You would need to treat 50 children to encounter one case in which the antidepressant led to suicidality.
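The same calculation as a short Python sketch, using the 2% absolute risk increase assumed above:

```python
# Number needed to harm: NNH = 1 / absolute risk increase.
absolute_risk_increase = 0.02     # 4% minus 2%, expressed as a fraction

nnh = 1 / absolute_risk_increase  # equivalent to 100/2 when working in percentages
print(f"NNH: {nnh:.0f}")          # -> 50
```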
TCPR: Is that significant? Is there a commonly agreed-upon benchmark for an NNH that is too low to risk using a drug?
Dr. Oransky: Well, obviously, when you are looking at number needed to harm, you want the largest number possible. At the end of the day it is a judgment call about how severe that adverse event is. In the case of Vioxx, which can cause heart attacks, that is a very severe thing. It won't necessarily kill you, but it is going to do its darndest. In the case of suicidal ideation, you have to judge how often that actually translates into suicidal behavior. So there isn’t really a benchmark, since it depends on so many factors, but anything lower than 100 is generally considered worrisome.