Challenging Weight of the Evidence Methodology

Troutman Pepper


This article was published in the Winter 2017 issue of In-House Defense Quarterly, a publication of DRI. It is reprinted here with permission.

"Because I say so" is not a reliable scientific methodology. But plaintiffs’ counsel have, with some success, invoked "weight of the evidence" (WOE) methodology—a process used by some regulatory bodies to classify theoretical hazards—in an effort to mask their experts’ "say so" approach. And some courts have gone along. But WOE methodology has no legitimate place in the courtroom.

Regulators do not shoulder the burden borne by plaintiffs. They use WOE to alert the public to possible hazards. Legal factfinders, on the other hand, impose liability in cases where the evidence establishes that a product more likely than not caused an injury. Given these different goals, the fact that regulators may use WOE methodology to evaluate scientific data does not earn it admission in civil litigation.

We believe that decisions admitting expert testimony based on WOE are wrong, but do not repeat the reasons why here, as they have been discussed extensively by others. See, e.g., David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 Notre Dame L. Rev. 27 (2013) (criticizing the First Circuit’s endorsement of the WOE approach as an example of "judicial noncompliance" with Federal Rule of Evidence 702 and Daubert); see also Jennifer L. Mnookin, Atomism, Holism, and the Judicial Assessment of Evidence, 60 UCLA L. Rev. 1524, 1580 (2013) ("To be sure, methods for aggregation in science, even relatively informal, weight-of-the-evidence approaches certainly ought not to be based on the expert’s mere say so."). Instead, after reviewing the conflicting case law, we offer practical approaches for challenging experts who use WOE to offer ipse dixit testimony.

WOE Methodology: Conflicting Definitions

"Weight of the evidence" is a common phrase without a clear meaning. The phrase has been defined as "a process or method in which all scientific evidence that is relevant to the status of a causal hypothesis is taken into account." Sheldon Krimsky, The Weight of Scientific Evidence in Policy and Law, 95 Am. J. Public Health (Supp. 1) S129 (2005). In practice, WOE is most often used as a metaphorical term for a subjective assessment of "relevant" data examined for some risk or hypothesis, without reference to any interpretive methodology. Douglas L. Weed, Weight of Evidence: A Review of Concept and Methods, 25 Risk Analysis 1545, 1546–47 (2005); see also Krimsky, supra, at S129. But it may also be used to describe a methodological approach that could include systematic reviews, quality criteria for toxicology studies, causal criteria in epidemiology, meta-analysis, mixed epidemiologic-toxicology models and quantitative weighting schemes. Weed, supra, at 1547–52; see also Krimsky, supra, at S129.

The variability in WOE characterizations leads to wide variability in how scientists and regulatory agencies exercise judgment under WOE. "Metaphorically, judgment is a kind of intellectual glue, cementing together the evidence and the methods." Weed, supra, at 1553. "Without an explication of how evidence is ‘weighed’ or ‘weighted,’ the claim WOE seems to be coming out of a ‘black box’ of scientific judgment." Krimsky, supra, at S131 (quoting M.A. Ibrahim, et al., Weight of the Evidence on the Human Carcinogenicity of 2,4-D, 96 Envtl. Health Perspectives 213 (1991)).

WOE Regulatory Methods Are Problematic in a Courtroom

Use of a WOE methodology may be appropriate for government regulation, but it should not establish legal liability. As the Reference Manual explains:

The agency assessing risk may decide to bar a substance or product if the potential benefits are outweighed by the possibility of risks that are largely unquantifiable because of presently unknown contingencies. Consequently, risk assessors may pay heed to any evidence that points to a need for caution, rather than assess the likelihood that a causal relationship in a specific case is more likely than not.

Reference Manual on Scientific Evidence, Fed. Jud. Ctr., at 33 (2d ed. 2000).

Following this logic, courts have appropriately recognized that the U.S. Food and Drug Administration (FDA) utilizes a much lower standard of proof for taking regulatory action than that applied by a court to determine causation. See, e.g., Allen v. Pennsylvania Engr’g Corp., 102 F.3d 194, 198 (5th Cir. 1996) (rejecting experts’ reliance on the methodology employed by regulatory agencies because "[t]he agencies’ threshold of proof is reasonably lower than that appropriate in tort law, which ‘traditionally make[s] more particularized inquiries into cause and effect’ and requires a plaintiff to prove ‘that it is more likely than not that another individual has caused him or her harm’"). Similarly, WOE regulatory risk assessments used by federal, state, and international agencies to evaluate potential health risks have also been rejected by courts as proof of causation. See, e.g., Abarca v. Franklin Cnty. Water Dist., 761 F. Supp. 2d 1007, 1040–41 (E.D. Cal. 2011) (granting defendants’ motion for partial summary judgment where plaintiffs relied on an EPA Risk Assessment to establish causation); Rhodes v. E.I. DuPont de Nemours & Co., 253 F.R.D. 365, 377–78 (S.D. W. Va. 2008) ("Because a risk assessment overstates the risk to a population to achieve its protective and generalized goals, it is impossible to conclude with reasonable certainty that any one person exposed to a substance above the criterion established by the risk assessment has suffered a significantly increased risk").

Nonetheless, given the many definitions of WOE and different uses of WOE in the regulatory public health context, courts have reached conflicting results in deciding the admissibility of opinions based on WOE.

WOE Methodology in the Courtroom

WOE Cases: Conflicting Decisions

Federal Courts

Some federal courts have found that, when "properly applied, the weight-of-the-evidence methodology is not an unreliable methodology." Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 602 (D.N.J. 2002), aff’d, 2003 U.S. App. LEXIS 12972 (3d Cir. June 25, 2003) (expert nonetheless excluded). See also Milward v. Acuity Specialty Prods. Group, 639 F.3d 11 (1st Cir. 2011) (reversing exclusion of expert testimony based on WOE methodology); Waite v. AII Acquisition Corp., No. 15-cv-62359, 2016 U.S. Dist. LEXIS 107820, *35 (S.D. Fla. July 11, 2016) (finding the WOE methodology employed was "sound"); In re Chantix, 889 F. Supp. 2d 1272, 1293 (N.D. Ala. 2012) (permitting expert testimony relying on WOE methodology). But see Allen, 102 F.3d at 197 (excluding experts who used a WOE methodology).

In Magistrini, the court excluded the plaintiff’s general causation expert testimony because the expert, in applying WOE methodology, failed to explain his rejection of studies that did not support his conclusions. Id. at 603. There, the plaintiff claimed that her occupational exposure to a dry cleaning chemical known as perchloroethylene caused her to develop acute myelomonocytic leukemia. Id. at 589. The plaintiff’s general causation expert, Dr. David Ozonoff, cited the studies that he considered, but he did not explain "the methodology he used in weighing them against contra scientific evidence." Id. at 600. He had not presented "good grounds" for his assumptions as to what studies should be included in the analysis; as a result, the "body of evidence… was not shown to be reliably composed." Id. at 603. Furthermore, Dr. Ozonoff did not "explain which studies were more or less reliable based upon statistical methods," and he did not identify or factor in the confidence interval from each study. Id. at 605.

Magistrini concluded that Dr. Ozonoff had "not set forth the methodology he used to weigh the evidence." Id. at 606. "In order to ensure that the ‘weight-of-the-evidence’ methodology is truly a methodology, rather than a mere conclusion-oriented selection process[,]… there must be a scientific method of weighting that is used and explained." Id. at 607. Because the expert did not explain his methodology, the court found "‘simply too great an analytical gap’" between the data that the expert relied on and the opinion that he proffered. Id. at 608 (citing Gen. Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997)).

Perhaps the best-known and most controversial federal case to discuss a WOE methodology is Milward v. Acuity Specialty Products Group, 639 F.3d 11 (1st Cir. 2011). There, the First Circuit reversed a district court’s exclusion of Dr. Martyn Smith’s expert opinions based on a WOE methodology. Dr. Smith opined that there was a causal link between exposure to benzene and acute promyelocytic leukemia (APL), a rare subtype of acute myeloid leukemia (AML). The First Circuit cited two primary factors to justify its acceptance of Dr. Smith’s WOE approach. First, it described Dr. Smith’s WOE methodology as an interpretive approach that he applied after following the widely accepted Bradford Hill guidelines for assessing causation. Id. at 16–17. Second, the court found that Dr. Smith’s WOE application was also consistent with the "near consensus among government agencies, experts and active researchers in the field that benzene can cause AML as a class." Id. at 19. This purported independent corroboration of the expert’s underlying opinions muddled the court’s evaluation of whether WOE methodology itself is valid.

In Milward, the defendants’ experts also conceded that some of Dr. Smith’s opinions were reasonable. One defense expert agreed that "there are a group of reasonable scientists who reasonably believe that all forms of AML arise from the same progenitor cell," and the defense experts further conceded that Dr. Smith’s opinion was "consistent with most of the evidence." Id. at 20 n.10.

Frye Courts

WOE methodology has also received a mixed reception in Frye courts. Under Frye, novel scientific evidence "is admissible if the methodology that underlies the evidence has general acceptance in the relevant scientific community as a method for arriving at the conclusion the expert will testify to at trial." Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923).

In Jacoby v. Rite Aid Corp., 93 A.3d 503 (Pa. Super. Ct. 2013), the Pennsylvania Superior Court rejected WOE methodology, concluding: "[W]eight of the evidence and totality of the evidence are not scientific methodologies. They are not verifiable or replicable, but rather are based on subjective judgment." Jacoby was a personal injury case in which the plaintiffs alleged that Fixodent, a denture adhesive cream, causes a neurological condition called myeloneuropathy. The plaintiffs’ experts, including Milward’s Dr. Smith, opined that the zinc in Fixodent led to a copper deficiency that caused myeloneuropathy.

Dr. Smith claimed that he followed a Bradford Hill framework and WOE approach. But Dr. Smith could not define "association" under Bradford Hill, and he admitted that no studies demonstrated a statistically greater risk of myeloneuropathy for Fixodent users compared to nonusers. Id. Another of the plaintiffs’ experts, Dr. Frederick K. Askari, claimed that he used a "totality of the evidence" methodology. The court rejected Dr. Askari’s opinions as well because he, too, failed to define his methodology or undertake a "systematic weighing of factors." Id.

The court supported its exclusion of the plaintiffs’ experts in Jacoby by citing the absence of evidence showing how much zinc was absorbed in the body from Fixodent. The experts had no basis to opine that the amount of zinc absorbed in the body from Fixodent could result in a copper deficiency. Furthermore, the experts lacked evidence regarding "‘how low a person’s copper must be or for how long a duration before it potentially results in myeloneuropathy.’" Id. Two years later, for similar reasons, another court rejected testimony from Drs. Smith and Askari in another Fixodent case. In re Denture Adhesive, 134 A.3d 488 (Pa. Super. Ct. Nov. 12, 2015).

Jacoby contrasts with the decision in Murray v. Motorola, Inc., No. 2001 CA 008479 B, 2014 D.C. Super. LEXIS 16, *48 (D.C. Super. Aug. 8, 2014), where the court found that "in certain scientific circles [WOE] is generally accepted." The Murray court recognized that WOE is "amorphous" compared to such other methodologies as Bradford Hill, and, therefore, "an expert asserting that she used the WOE method needs to supply more detail as to what her methodology entails." Id. at *50 (citing Krimsky and Weed, supra). The court excluded one expert who claimed to use WOE, but then failed to apply it. Id. at *61. The court, however, permitted another expert to testify to opinions formed through a WOE methodology because his methodology required a review of "all relevant information and studies on a potential carcinogen, including epidemiological studies, whole animal experimental studies, mechanistic (in vivo and in vitro) studies, incidence data, and any other evidence." Id. at *90 n.53, *96.

Confronting a WOE Methodology in Litigation

Because several courts have accepted WOE methodology, and because plaintiffs’ attorneys want to give their experts the appearance of scientific legitimacy, some experts continue to rely on WOE methodology. In addition to the critical task of explaining to the court why regulatory standards are not reliable measures of legal causation, there are multiple ways to challenge an expert’s WOE methodology.

What Does Weight of the Evidence Mean to the Expert?

Given the many definitions of WOE, how the expert defines WOE methodology is important. What steps are required in the WOE methodology? How does the WOE methodology differ from other scientific methodologies (e.g., Bradford Hill)? What steps were taken to ensure that the WOE methodology produces replicable results? Has the expert’s WOE methodology been previously accepted in litigation? These questions will reveal whether the expert is only subjectively weighing evidence or whether the methodology used is a repeatable analytical process for reviewing various types of evidence. This inquiry was critical to the court’s decision in Magistrini, where the expert’s inability to describe an analytical process behind his conclusions revealed that he had not employed a scientifically valid methodology. 180 F. Supp. 2d at 607.

Did the Expert Consistently Apply an Analytical Methodology?

It is not enough for an expert to claim that the methodology employed is a true analytical framework—the expert must have actually employed an analytical framework. Murray v. Motorola, Inc., 2014 D.C. Super. LEXIS 16, at *48 ("Before the court may determine whether a methodology is generally accepted [under Frye], an expert must identify her methodology and establish that she actually did what she said she did.").

An expert should demonstrate the use of defined criteria in forming an opinion. For instance, Bradford Hill requires an expert to weigh the following: (1) temporal relationship; (2) strength of the association; (3) dose-response relationship; (4) replication of the findings; (5) biological plausibility (coherence with existing knowledge); (6) consideration of alternative explanations; (7) cessation of exposure; (8) specificity of the association; and (9) consistency with other knowledge. If an expert claims to have used a methodology based on Bradford Hill, the expert must use the nine Bradford Hill guidelines. Magistrini, 180 F. Supp. 2d at 606. How does the expert evaluate the strength of an association? What criteria are used to weigh the studies that support an association against those that do not? What data allows the expert to evaluate whether a dose-response relationship exists?

The expert’s methodology also must be consistently followed. Experts have been excluded where they did not apply the same methodology to different datasets and failed to offer a reasonable explanation for the deviation. In In re Zoloft (Sertraline Hydrochloride) Products Liability Litigation, MDL No. 2342, No. 12-md-2342, 2015 U.S. Dist. LEXIS 161355, *57 (E.D. Pa. Dec. 2, 2015), the court excluded an expert who "failed to consistently apply the scientific methods he articulates, has deviated from or downplayed certain well-established principles of his field, and has inconsistently applied methods and standards to the data so as to support his a priori opinion." The same expert had previously been excluded for similar reasons in In re Lipitor (Atorvastatin Calcium) Marketing, Sales Practices and Products Liability Litigation, MDL No. 2:14-mn-02502-RMG, 2015 U.S. Dist. LEXIS 157593, *13 (D.S.C. Nov. 20, 2015) (excluding expert testimony where a statistician used one method to analyze one set of data, but then chose another method, without explanation, to analyze other data).

How Did the Expert Select Evidence?

An expert’s methodology is no better than the quality of the information being analyzed. Has the expert relied on the same type of data used in his or her peer-reviewed publications? If not, why not? Outside of litigation, have others relied on the same type of data to support similar conclusions? To what extent has the expert relied on scientific information from peer-reviewed, publicly available sources?

If peer-reviewed data are used, they should not be manipulated. Courts prohibit an expert’s unjustified re-analysis of published data: "[A]n expert cannot simply, without any explanation for rejecting a published, peer-reviewed analysis, conduct his own ‘re-analysis’ solely for the purposes of litigation and testify that the data support a conclusion opposite that of the studies’ authors in a peer-reviewed publication." In re Lipitor (Atorvastatin Calcium) Mktg., Sales Practices & Prods. Liab. Litig., 2015 U.S. Dist. LEXIS 157593, *57 (emphasis in original); see also In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 2015 U.S. Dist. LEXIS 161355, *51 ("results-oriented, post-hoc re-analyses of existing epidemiological studies are disfavored by scientists and often deemed unreliable by courts, unless the expert can validate the need for re-analysis in some way").

Understanding how the expert searched for information will help a court weigh whether the expert’s evidence is reliable. A purported WOE methodology is inadequate if an expert cannot explain how the data evaluated were selected in the first instance, or if the bases on which the data were selected are inappropriate. Magistrini, 180 F. Supp. 2d at 603. Experts should explain their search criteria, such as the search terms and date parameters used, and then explain why publications that met those criteria were included or excluded.

Is the Expert’s Analysis Capable of Being Tested?

An expert’s methodology cannot be generally accepted or scientifically reliable if it cannot be scrutinized and replicated.

For example, the court in In re Denture Adhesive rejected a WOE methodology because the experts relied on purported causation assessments made by other physicians treating patients with myeloneuropathy. The experts could evaluate neither the other physicians’ qualifications to make those causation assessments nor the accuracy of their myeloneuropathy diagnoses. The court therefore was unable to rule that the proffered expert testimony was based on "generally accepted methodologies."

In re Denture Adhesive is a logical extension of other decisions that have excluded expert opinions that are not based on sufficient facts or data. For instance, a treating physician’s causation assessment may not meet Rule 702 admissibility requirements. See, e.g., Tamraz v. Lincoln Elec. Co., 620 F.3d 665, 670 (6th Cir. 2010) (holding that treating physician’s causation testimony should be excluded under Rule 702 because it was based on speculation); Kruszka v. Novartis Pharms. Corp., No. 07-2793, 2014 U.S. Dist. LEXIS 68439, *27–30 (D. Minn. May 19, 2014) (excluding specific causation testimony of plaintiff’s treating physicians under Rule 702 because it had no evidentiary basis). Relying on the opinions of treating physicians is also problematic because they "must make care decisions even in the face of uncertainty." Reference Manual on Scientific Evidence, Fed. Jud. Ctr., at 714 (3d ed. 2011).

Has the "Regulatory" Standard Been Previously Accepted as a Reliable Measure of More Likely than Not Causation?

No expert should be permitted to survive a Daubert or Frye challenge on the mere assertion that regulators have used the same methodology. If the expert is following a regulatory methodology, the expert must explain why that regulatory framework is adequate to determine legal causation under a more likely than not standard. Which regulatory agencies use the expert’s version of WOE? What are the goals of the regulatory analysis where the method has been used? Has the method been used outside of the regulatory context in litigation? Would the expert reach the same result if Bradford Hill were applied?

An expert without a justifiable basis for using WOE over a universally accepted scientific methodology is using regulatory agency methods as a smokescreen for his or her "say so."

Conclusion

Regulators use WOE methodology to safeguard health, not to determine causation and liability. Courts must demand more rigorous and exacting methods from experts to achieve justice.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Troutman Pepper | Attorney Advertising
