Featured Expert Column: Judicial Gatekeeping of Expert Evidence

By Evan M. Tager, Mayer Brown LLP, with Carl J. Summers, Mayer Brown LLP

Expert testimony is typically thought of as providing an insight into the evidence in the case, or drawing a conclusion from the evidence, that requires knowledge beyond the ken of a typical judge or juror.  But expert testimony also can be used as a substitute for evidence that a party cannot, or does not want to, present through traditional evidentiary methods.  Although courts have allowed such expert testimony in certain contexts, there is cause for concern when a party offers an expert whose function is to fill a gap in the evidence.

Notable among this category of expert testimony are opinions offered during class-certification proceedings in an effort to show that a case can be efficiently managed on a class-wide basis.  Such testimony often takes the form of surveys or other statistical sampling techniques designed to establish liability or damages on a class-wide basis without requiring adjudication of each individual claim.

In the past few Terms, the Supreme Court has addressed the permissible role of such surveys under Federal Rule of Civil Procedure 23 in certifying and maintaining class actions. See Tyson Foods, Inc. v. Bouaphakeo, 136 S. Ct. 1036 (2016); Wal-Mart Stores, Inc. v. Dukes, 131 S. Ct. 2541 (2011). Even when such gap-filling expert testimony is allowed, however, it still must pass muster under the rules governing admissibility of expert testimony.

A recent decision authored by Judge Charles Breyer of the U.S. District Court for the Northern District of California addresses both the permissible uses of surveys under Rule 23 and the admissibility of those surveys under Daubert. Of interest to us here, the decision provides an evidentiary blueprint for excluding the kinds of surveys often commissioned by plaintiffs’ attorneys for litigation.

In In re AutoZone, Inc., Wage and Hour Employment Practices Litigation, 2016 WL 4208200 (N.D. Cal. Aug. 10, 2016), the plaintiffs alleged that their employer, AutoZone, failed to provide rest breaks in accordance with California law. The district court initially certified a class on the premise that AutoZone had a facially invalid policy throughout the class period and that the defendant’s records could establish whether individual employees were permitted to take rest breaks. As litigation progressed, however, it became apparent that AutoZone’s policy changed during the class period, that the policy was applied inconsistently, and—most important—that relevant plaintiff-specific records did not exist. These developments prompted AutoZone to move to decertify the class.

The district court agreed that, in light of the evidentiary gaps in the record, the class did not satisfy Rule 23’s requirements of predominance and manageability. In so concluding, the district court rejected the plaintiffs’ argument that a survey commissioned for litigation purposes could fill those evidentiary gaps, holding both that it was an impermissible use of a survey under Rule 23 and that the particular survey offered by the plaintiffs was inadmissible under Federal Rule of Evidence 702 and Daubert. Without that survey evidence, the district court concluded, the plaintiffs could not maintain the class.

Before delving into the district court’s reasoning, a few words about the survey are in order. The plaintiffs commissioned their survey of class members to help calculate damages. The plaintiffs attempted to use the survey to maintain the class and establish liability only after the evidentiary gaps in the record became glaring. The survey asked class members various questions concerning whether they were allowed to take rest breaks during their shifts. For instance, of the survey respondents who had worked shifts lasting between six and eight hours, 29% stated that they were not authorized and permitted to take two rest breaks, 53% stated that they were authorized and permitted to do so, and 17% stated that they did not know or could not remember.

Relying on Ninth Circuit precedent, the district court explained that, as long as there is a proper foundation for the survey and it is conducted using accepted principles, questions about the expert’s methodology and the survey design generally go to the weight of the survey, not its admissibility. The district court noted, however, that it could exclude a survey if there were substantial deficiencies in its design or execution.

The district court identified five crucial flaws that undermined the admissibility of the plaintiffs’ survey. First, the “survey had a woefully low response rate.” The survey used a random sample of 10,000 individuals in the class, but only 343 usable responses were obtained—a 3.43% response rate. And even after excluding the 4,320 individuals whom the expert categorized as “nonreachable,” the response rate was still only 6%. The district court noted that the Reference Manual on Scientific Evidence states that surveys with response rates below 75% should receive “greater scrutiny,” those with response rates lower than 50% should be regarded with “significant caution,” and those with response rates between 5% and 20% are “very unlikely” to “provide any credible statistics of the population as a whole.” The district court concluded that the response rates in the plaintiffs’ survey were too low, especially in light of case law holding that response rates of 5% and 8% were inadequate.
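The response-rate figures the court cited can be verified with simple arithmetic. The following is an illustrative check, not part of the opinion:

```python
# Response-rate arithmetic based on the figures recited in the opinion.
sampled = 10_000      # individuals randomly sampled from the class
usable = 343          # usable responses obtained
nonreachable = 4_320  # individuals the expert categorized as "nonreachable"

raw_rate = usable / sampled                        # 343 / 10,000
adjusted_rate = usable / (sampled - nonreachable)  # 343 / 5,680

print(f"{raw_rate:.2%}")       # 3.43%
print(f"{adjusted_rate:.2%}")  # 6.04%
```

Both figures fall squarely within the range the Reference Manual describes as "very unlikely" to yield credible population statistics.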

Second, and relatedly, the district court reasoned that the low response rate suggested a form of “nonresponse bias.” In other words, the individuals who responded to the survey were materially different from the general population, and therefore the survey was an unreliable tool for developing information about the class as a whole. The district court found it particularly troubling that 572 individuals refused to participate in the survey, meaning that “refusals outnumbered surveys responded to by almost two-thirds.” The district court further found that the expert failed to adequately explain or correct for these drastically different response rates.
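The court's "almost two-thirds" characterization likewise checks out arithmetically. A quick illustrative calculation:

```python
# Comparing refusals to usable responses, per the opinion's figures.
refusals = 572   # individuals who refused to participate
responses = 343  # usable responses obtained

# Proportion by which refusals exceeded responses.
excess = (refusals - responses) / responses

print(f"{excess:.1%}")  # 66.8% -- i.e., "almost two-thirds"
```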

Third, the district court concluded that the survey was plagued by the problem of self-interest bias: The survey’s prompt informed individuals that the survey was being performed as part of a class-action lawsuit. The court noted that this renders the survey inherently suspect because it leads to at least two biasing phenomena.  On the one hand, those recipients who do not believe that they have an interest in the outcome of the class action because they were afforded their rest breaks would be less likely to respond, leading to a response sample that is biased in favor of those individuals who experienced a violation.  On the other hand, those individuals who do respond may, consciously or unconsciously, skew their answers to advance their self-interest as potential beneficiaries of the class action.

Fourth, the district court faulted the survey for asking individuals to recall specific events that occurred between three-and-a-half and eleven years prior to the survey. As evidence that this type of recall-driven survey leads to unreliable results, the district court noted that some individuals’ responses as to the number and types of shifts that they worked made no sense in light of the limited evidence in the record.  For example, a number of respondents said that they were given their rest breaks a specific percentage of the time when they worked a particular type of shift when it turned out that they had worked that type of shift only once.

Fifth and finally, the district court determined that the survey was imprecise as to both its questions and its sample. The survey, for example, failed to exclude the possibility that individuals voluntarily chose not to take rest breaks in certain contexts even though the break would have been allowed.  And the respondents to the survey included at least one managerial employee who should not have been part of the survey because the survey failed to adequately inform participants that the questions applied only to breaks taken while working as an hourly employee.

Given all of these flaws, the district court held that the “problems with the survey are fundamental and demonstrate that it is an unreliable means of measuring AutoZone’s potential liability to individual employees” and “therefore exclude[d] it under Rule 702 and Daubert.”

The district court’s decision is a straightforward application of Daubert to exclude a seriously flawed study in its entirety. The district court properly focused on the survey’s low response rate and the concomitant problems of bias that arise in such circumstances. The district court’s reasoning is especially helpful as applied to studies commissioned solely for purposes of litigation: The fact that survey respondents were told that the survey was being conducted for a class-action lawsuit gave respondents a biased incentive to participate in the survey (or not) and to answer questions in a way that might benefit them monetarily. Given the uncertain impact that Tyson Foods may have on Dukes with respect to the permissible uses of such surveys in class-certification proceedings under Rule 23, the district court’s decision provides an important evidentiary means of excluding plaintiffs’ surveys and thus defeating class certification, avoiding liability, and reducing damages.

Finally, AutoZone is to be applauded for its tenacious opposition to class certification, which continued after the district court initially certified the class in 2012. Such persistence resulted in a very positive precedent that future class-action defendants can employ to their advantage.