Evan M. Tager is a Partner with Mayer Brown LLP in the firm’s Washington, DC office and is the WLF Legal Pulse’s Featured Expert Contributor on Judicial Gatekeeping of Expert Evidence. Carmen Longoria-Green is an Associate in the firm’s Washington, DC office focusing her practice on appellate litigation and dispositive motions in trial court.

***

A federal judge in the Southern District of California has granted summary judgment to defendant pharmaceutical companies after excluding the testimony of seven of the plaintiffs’ experts.  The case, In re Incretin-Based Therapies Products Liability Litigation, — F. Supp. 3d —, 2021 WL 880316 (S.D. Cal. Mar. 9, 2021), is a multidistrict litigation involving claims that the prescription drugs Byetta, Januvia, Janumet, and Victoza—all of which are used to treat type 2 diabetes—caused or increased the risk of pancreatic cancer.

After eight years of litigation, the district court put those contentions to rest because the plaintiffs were unable to produce a single expert who could reliably establish the requisite causal link between the drugs in question and pancreatic cancer.  The opinion provides a wealth of examples of how courts should evaluate whether an expert’s opinions are reliable.

In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the Supreme Court charged district courts with a “gatekeeping” role to ensure that all expert testimony is both reliable and relevant.  As the district court here observed, a reliable opinion is “good science,” and expert witnesses must show the same level of “intellectual rigor” in the courtroom as they would when working in the scientific community.  As a result, opinions developed solely for litigation are necessarily suspect.

As in all products-liability cases, the plaintiffs here had the burden to show general causation—“whether the substance at issue had the capacity to cause” the alleged harm—which necessarily required expert testimony, as lay jurors would not have been able to evaluate the relevant scientific data themselves.  Plaintiffs therefore offered seven expert witnesses, each of whom purported to show, in whole or in part, that the drugs in question can cause pancreatic cancer.  But the court found serious reliability problems in the methodology used by each of these experts.

Several of the plaintiffs’ experts “cherry-picked” the data they used in their analyses.  The plaintiffs’ two biostatisticians, for example, purported to show a statistical correlation between consuming the drugs at issue and later developing pancreatic cancer, but they applied inconsistent criteria when determining what counted as pancreatic cancer in the treatment and placebo groups.  Their decision to do so “inevitably skew[ed] the data and critically undermine[d] the reliability of” their analyses.

The biostatisticians also failed to incorporate all relevant studies into their analyses, thereby “disregard[ing] independent research at odds with” their testimony.  One biostatistician, when asked why he did not perform a comprehensive literature review to find all of the relevant studies, simply replied that “[i]t was not what I was asked to do” by plaintiffs’ counsel.  The district court refused to find that such a methodology is “good science.”


Indeed, the outsized role played by plaintiffs’ counsel in crafting the experts’ opinions repeatedly drew disapproval from the court.  One expert limited his analysis to certain clinical trials and when asked why, admitted that “that decision” “was not his own, but [that] of Plaintiffs’ counsel.” Another expert asserted that it was “Plaintiffs’ counsel who first taught him about the alleged relationship between” the drugs at issue and pancreatic cancer.  The court excluded his testimony after that admission—and after he admitted that he had relied upon only the abstracts of several scientific articles because he did not have the time to read them in their entirety and he did not want to pay to obtain them.

The court also excluded the testimony of the plaintiffs’ “biological plausibility” expert, who hypothesized how the drugs in question could cause pancreatic cancer.  Hypotheses, the court noted, “are verified by testing, not by submitting them to lay juries for a vote.”  To be admissible, therefore, a hypothesis must rise above “speculation” and “unreliable extrapolation.”  To satisfy that standard, experts must support “every necessary link” in their theory with “supporting evidence.” 

But plaintiffs’ expert did not do so:  His hypothesis had never been tested on humans; he failed to review a “large body of relevant animal” studies; he made an unsupported assumption that he could extrapolate from the animal studies he did review to how the drugs at issue would affect humans; he did not evaluate whether the dose administered in the animal studies he relied upon was similar to the dose typically given to humans (because he “just didn’t have time”); and the scientific literature did not support his hypothesis.  In light of these methodological failures, the court excluded his testimony. 

Finally, the court found several problems with the methodology of another of the plaintiffs’ experts, who claimed to have “weighed” all of the relevant evidence and determined that “it is more likely than not” that the drugs in question increased the risk of pancreatic cancer. 

First, taking the expert at his word that his methodology required analysis of all relevant evidence, the court found that the expert had failed to follow his own precepts by ignoring relevant information.  In fact, the expert had blindly relied upon the data compiled by plaintiffs’ biostatisticians, who themselves had imported methodological flaws into their analyses.  The court determined that the expert should have conducted an independent review of the biostatisticians’ data, especially because the biostatisticians themselves lacked the medical expertise to correctly choose the data they should have relied upon (for instance, they labeled a benign tumor a pancreatic cancer). 

Second, the court found the expert’s stated methodology—“weighing” the evidence—to be too vague.  The expert did not explain how he weighed the data he did look at, which made his methodology unreproducible and therefore not good science.

In the end, the court excluded the testimony from all seven experts, leaving the plaintiffs with no evidence to support their allegations that Byetta, Januvia, Janumet, and Victoza increased the risk of pancreatic cancer.  The court granted summary judgment to the defendants on that basis.  The opinion therefore well exemplifies the critical role played by the causation inquiry in products-liability cases:  The inability to prove causation to the level of scientific rigor required in federal court is often fatal to plaintiffs’ claims. 

The plaintiffs have appealed the district court’s decision to the Ninth Circuit.