(Medscape) – A highly touted recent meta-analysis has been greeted with screams of elation: “Antidepressants work!” Even Medscape has shared in these shouts of joy. Why are the medical community and the psychiatric profession so primed to convince themselves, once and for all, that their favorite drugs work? Maybe because, deep down, we know they don’t.
The latest attempt to trick ourselves into believing that the past few decades of antidepressant prescribing have been an effective strategy comes from one of the most prestigious medical journals, The Lancet. The published meta-analysis’s basic finding—since repeated all over the press—is that antidepressants work because they are all better than placebo. What they don’t tell you is that they are hardly any better than placebo, and that the only drugs with clinically meaningful benefits are the ones that are rarely used today, the older tricyclic agents.
The context is important.
In the past decade, several other meta-analyses have looked at randomized clinical trials of antidepressants for major depressive disorder (MDD), usually conducted by pharmaceutical companies for government registration. They have found, repeatedly, that antidepressants either are not more effective than placebo, or are only slightly more effective, with an effect size that does not translate into clinically meaningful benefit. The effect sizes seen are about a 2-point improvement versus placebo on the Hamilton Depression Rating Scale, which falls below the minimum threshold of a 3-point improvement for clinically meaningful benefit set in a 2004 guidance from the UK’s then-named National Institute for Health and Clinical Excellence.
Another way of looking at it is via “Cohen’s d,” the difference between mean scores divided by the standard deviation. This allows us to directly and simply compare studies that use different scales, and the absolute benefit they show. A general rule of thumb is that a Cohen’s d of 0 to 0.25 indicates little to no effect, 0.25 to 0.50 a mild benefit, 0.50 to 1.0 a moderate to large benefit, and above 1.0 a huge benefit. By convention, a Cohen’s d of 0.50 or larger is the threshold for clinically meaningful benefit. The meta-analyses conducted over the past decade find an overall effect size of about 0.31 to 0.32 for modern antidepressants,[2,4] which is small and below the threshold for clinically meaningful benefit.
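The arithmetic behind Cohen’s d is simple enough to sketch. Below is a minimal Python illustration; the group means and pooled standard deviation are hypothetical numbers (not taken from any of the meta-analyses), chosen only to show how a roughly 2-point drug–placebo difference on a Hamilton-type scale translates into an effect size near the 0.31 cited above.

```python
# Illustrative sketch only: the means and SD below are hypothetical,
# chosen to show how a ~2-point scale difference maps onto Cohen's d.

def cohens_d(mean_drug: float, mean_placebo: float, sd_pooled: float) -> float:
    """Standardized mean difference: difference in group means / pooled SD."""
    return (mean_drug - mean_placebo) / sd_pooled

# A 2-point improvement over placebo, with an assumed pooled SD of 6.5 points
d = cohens_d(mean_drug=10.0, mean_placebo=8.0, sd_pooled=6.5)
print(round(d, 2))  # prints 0.31 — in the "small, not clinically meaningful" range
```

Because the scale’s units divide out, the same d can be compared across trials that used entirely different rating instruments.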
This latest meta-analysis claims to have found something different—that antidepressants are effective. In fact, its results are basically the same as in prior analyses, confirming that almost all antidepressants are ineffective, or at least not effective in a clinically meaningful way, when examined as a whole against placebo. In other words, the only thing this study confirms is that prior studies were right when they reported that antidepressants “don’t work.”
The authors looked at 522 randomized clinical trials of 21 antidepressants versus placebo in MDD in over 100,000 patients. Overall, all antidepressants were more effective than placebo. In the “network” analysis, which allows for the direct and indirect comparison of multiple treatments, the authors report the lowest direct efficacy for reboxetine (odds ratio [OR], 1.36) and the highest efficacy (OR, 2.13) for the tricyclic antidepressant amitriptyline.
If these results were accepted at face value, we would conclude that clinicians should feel confident that all antidepressants are effective in MDD in general, and they would lean toward the agents reported as “more” effective and away from those reported as “less” effective. Unfortunately, that isn’t the case.
On the positive side, the authors included much unpublished data (52% of all of the studies). Because of this, their results are not limited to or mostly influenced by the published literature, which is known to be markedly biased in favor of antidepressant drug efficacy. (This is because pharmaceutical companies usually have not published negative studies of antidepressants.)
On the negative side, nowhere in this dense and detailed paper do the authors report the absolute effect size of benefit with antidepressants on the depression rating scales used. Instead, they provide odds ratios, which are relative effect sizes versus placebo. A drug might be 50% better (an OR of 1.50), but this could be the difference between a 3-point improvement with drug and a 2-point improvement with placebo on a depression rating scale (a tiny and clinically meaningless effect), or between a 30-point improvement with drug and a 20-point improvement with placebo (a huge and clinically meaningful effect). In other words, how much better did patients actually get?
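The relative-versus-absolute point can be made concrete with a small sketch. Here a simple ratio of improvements stands in for the odds-ratio intuition (an OR is computed from response odds, not mean scale scores, so this is an analogy, not the paper’s calculation), and the point values are hypothetical:

```python
# Hypothetical sketch: the same relative benefit can hide very different
# absolute benefits. A simple ratio of improvements stands in here for the
# odds-ratio intuition; it is not how the meta-analysis computed its ORs.

def relative_improvement(drug_points: float, placebo_points: float) -> float:
    """Improvement with drug relative to placebo (1.5 means '50% better')."""
    return drug_points / placebo_points

small = (3, 2)    # 3-point drop with drug vs 2 with placebo: 1-point absolute gain
large = (30, 20)  # 30-point drop vs 20 with placebo: 10-point absolute gain

for drug, placebo in (small, large):
    print(relative_improvement(drug, placebo), drug - placebo)
# Both scenarios print a ratio of 1.5, but the absolute benefits are 1 point
# versus 10 points: the relative measure alone cannot distinguish a trivial
# effect from a clinically meaningful one.
```

This is exactly why the absolute differences buried in the appendix matter more than the odds ratios in the main paper.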
The real truth isn’t found within the published paper but rather within a busy table on page 142 of the online appendix. It is there that the authors report what we want: the actual difference between drug and placebo, before and after treatment, on the depression rating scales. Here we see that the Cohen’s d standardized mean difference effect sizes range from a low of 0.19 to a high of 0.62, for amitriptyline. Thus amitriptyline exceeds the clinically meaningful threshold of 0.50 by traditional meta-analytic standards. No other drug does so; the closest second is fluvoxamine, with an effect size of 0.44.
Looking at all of the agents, 10 drugs have effect sizes below 0.30, which is very small and clinically meaningless, whereas four have effect sizes from 0.30 to 0.34. Thus, 74% (14/19) of the antidepressants clearly have little or no clinically important benefit in this analysis (for some reason, no data are provided in this table for two of the drugs). Four drugs have effect sizes of 0.37-0.44, and, as noted, one agent (amitriptyline) exceeds the 0.50 threshold.
Perhaps the clearest conclusion of all is the long-established fact that the tricyclic antidepressants are more effective than the newer agents (no monoamine oxidase inhibitors were included in this meta-analysis).
The main conclusion to draw from the above is that almost all of the antidepressants had small, clinically meaningless benefits, and that only one agent exceeded the Cohen’s d threshold of 0.50 that can be considered clinically meaningful benefit.
In short, one has to go to page 142 of the appendix to find the real result of all this effort: This meta-analysis confirms the results of prior meta-analyses which found that antidepressants have small overall effects in “MDD” and do not provide major clinical benefit in general.
This conclusion sets aside the more important issue of the scientific validity of the MDD concept itself, which the authors ignore completely. Our profession seems devoted to believing that antidepressants “work.” They don’t, at least not for “MDD.” Maybe the problem is with “MDD”—a heterogeneous clinical syndrome that is not scientifically valid as a single diagnosis—rather than with the antidepressants. In other words, these drugs do something biologically, but maybe we aren’t giving them to the right clinical group of patients to see benefits.
The only clear take-away from this analysis, besides confirming the prior analyses that antidepressants are not very effective, is that amitriptyline is the most effective antidepressant tested, and apparently it’s the only one with clinically meaningful benefit. That’s it.
On the larger question of antidepressants as a class, you have two options: Either antidepressants don’t work or MDD doesn’t work. Take your pick.