Cross‑Study “Establishment” Claims: Third Circuit Clarifies That Side‑by‑Side Superiority Messaging Is Literally False Absent Reliable Comparability—and That Damages Still Require Proof of Actual Deception and Reliance
Case: CareDx, Inc. v. Natera, Inc., Nos. 23‑2427, 23‑2428 (3d Cir. Aug. 28, 2025) (non‑precedential)
Court: United States Court of Appeals for the Third Circuit
Panel: Shwartz, Matey, Fisher, JJ. (Opinion by Judge Shwartz)
Introduction
This appeal arises from a hard‑fought false advertising dispute in the highly regulated market for post‑transplant rejection diagnostics. CareDx (maker of the “AlloSure” blood test) sued Natera (maker of the competing “Prospera” test) for allegedly false superiority claims published in press releases, websites, brochures, and physician presentations. The claims compared the products’ diagnostic performance metrics by placing the results of two different studies—the Bloom study (CareDx’s multi‑site, prospective study) and the Sigdel study (Natera’s single‑site, retrospective study)—side‑by‑side.
A jury found nine of ten challenged statements literally false and awarded substantial actual and punitive damages. Post‑trial, the district court sustained liability and entered an injunction, but vacated all damages for lack of proof of actual deception and reliance. On appeal, the Third Circuit affirmed across the board.
Although non‑precedential, the decision offers a detailed and practical roadmap for assessing scientific “establishment” advertising: when advertisers juxtapose outcomes from different studies as if they were head‑to‑head, they necessarily imply comparability; if the studies are not reliably comparable (for design, population, or statistical reasons), superiority claims can be literally false by necessary implication. At the same time, recovering money under the Lanham Act still requires evidence that buyers actually relied on the falsehood—willfulness and sales success, without proof of reliance, do not suffice.
Summary of the Judgment
- Literal falsity upheld (injunctive relief affirmed): The court affirmed that nine challenged statements (Claims A, B, C, D, E, F, G, H, and J) were literally false under the Lanham Act and Delaware Deceptive Trade Practices Act (DTPA). The claims necessarily implied that the Bloom and Sigdel studies were comparable and established Prospera’s superiority on sensitivity, specificity, AUC, NPV, and pediatric performance—when they did not.
- No damages absent actual deception and reliance: The court affirmed judgment as a matter of law vacating the jury’s damages. CareDx did not prove that Natera’s falsehoods actually deceived purchasers or that purchasers relied on them in choosing Prospera over AlloSure.
- Unfair competition and punitive damages vacated: CareDx’s Delaware unfair‑competition claim failed for lack of causation and harm, and punitive damages tied to that claim were properly vacated.
- Injunction stands: Given literal falsity, the stipulated injunction against the nine advertisements was appropriate without proof of actual deception.
Factual and Procedural Background
- The products and studies:
- AlloSure (CareDx) was validated in the Bloom study (multi‑site, prospective), reporting sensitivity 59.3%, specificity 84.7%, AUC 0.74, NPV 84%.
- Prospera (Natera) was validated in the Sigdel study (single‑site, retrospective), reporting sensitivity 88.7%, specificity 72.6%, AUC 0.87, NPV 95.1%.
- The advertising: Natera circulated materials (press releases, physician brochures, website pages, conference slides) proclaiming Prospera’s superiority—often with side‑by‑side charts and comparisons drawn directly from Bloom and Sigdel.
- The litigation posture:
- A jury found nine of the ten challenged statements literally false and willfully made, and awarded $21.2 million in actual damages and $23.7 million in punitive damages.
- The district court sustained literal falsity and entered an injunction, but vacated damages (Lanham Act and state law), found unfair competition not proven, and set aside punitive damages.
- On remand from an earlier Third Circuit order (seeking the trial court’s “first instance” assessment of the remaining claims under Rule 50(b)), the district court held the rest of the claims literally false for the same comparability reasons.
- Appeal: Both parties appealed; the Third Circuit affirmed.
Legal Issues Framed by the Court
- Literal falsity of “establishment” claims: When an ad says studies prove superiority, a plaintiff can establish literal falsity by showing that the cited tests are not sufficiently reliable or, even if reliable, do not support the proposition asserted.
- Necessary implication and ambiguity: Side‑by‑side scientific results and “superior” headlines can convey an unambiguous message that the studies are comparable and establish superiority; if not, the message is literally false by necessary implication.
- Relief dichotomy under the Lanham Act: For injunctions, literal falsity obviates the need to prove actual deception; for damages, the plaintiff must prove actual deception and reliance by purchasers.
- State‑law unfair competition: Requires proof of interference, causation, and harm; without evidence tying lost sales to the false statements, the claim fails, as do punitive damages tethered to it.
Detailed Analysis
1) Precedents Cited and How They Shaped the Decision
- Parkway Baking Co. v. Freihofer Baking Co., 255 F.2d 641 (3d Cir. 1958): Supplies the Lanham Act framework and, importantly for damages, requires proof that the falsehood actually deceived part of the buying public and that customers relied on the misrepresentation. The Third Circuit leaned on Parkway to affirm the vacatur of damages.
- Groupe SEB USA, Inc. v. Euro‑Pro Operating LLC, 774 F.3d 192 (3d Cir. 2014): Reinforces that courts assess the ad as a whole and can find literal falsity by necessary implication; also confirms that proof of deception is not required for injunctive relief when the ad is literally false.
- Novartis Consumer Health, Inc. v. Johnson & Johnson‑Merck, 290 F.3d 578 (3d Cir. 2002): Clarifies literal falsity analysis (unambiguous message first; falsity second) and the “necessary implication” doctrine.
- Castrol, Inc. v. Quaker State Corp., 977 F.2d 57 (2d Cir. 1992): The court expressly adopts the “establishment claim” burden: if an advertiser says studies prove superiority, a challenger can win by showing the studies are not sufficiently reliable or do not establish the stated proposition. The panel invoked this test to evaluate Natera’s cross‑study comparisons.
- Castrol Inc. v. Pennzoil Co., 987 F.2d 939 (3d Cir. 1993): Used for unambiguous, necessary‑implication messaging and the treatment of comparative superiority claims.
- Apotex Inc. v. Acorda Therapeutics, Inc., 823 F.3d 51 (2d Cir. 2016): Supports the principle that placing incomparable metrics or datasets side‑by‑side to claim superiority can be literally false.
- Pernod Ricard USA, LLC v. Bacardi U.S.A., Inc., 653 F.3d 241 (3d Cir. 2011): Confirms that, for injunctive relief, actual deception is presumed where an ad is unambiguous and literally false.
- American Home Products v. Johnson & Johnson, 577 F.2d 160 (2d Cir. 1978): Literal falsity can be found without resort to consumer reaction evidence.
- Southland Sod Farms v. Stover Seed Co., 108 F.3d 1134 (9th Cir. 1997): Ads must be read as a whole; context determines literal falsity.
- United Indus. Corp. v. Clorox Co., 140 F.3d 1175 (8th Cir. 1998): The more an ad forces consumers to infer, the harder literal falsity becomes—but the court here still read the ads holistically and found necessary implication.
- Resource Developers, Inc. v. Statue of Liberty‑Ellis Island Found., 926 F.2d 134 (2d Cir. 1991): The Second Circuit’s presumption of deception for egregious intent was discussed but not adopted; the Third Circuit reaffirmed its own damages standard requiring proof of actual deception and reliance.
- Schering‑Plough Healthcare Prods. v. Neutrogena Corp., 702 F. Supp. 2d 266 (D. Del. 2010): DTPA standards track Lanham Act false advertising; thus, the same analysis governs.
- Total Care Physicians, P.A. v. O’Hara, 798 A.2d 1043 (Del. Super. Ct. 2001) and Agilent Techs., Inc. v. Kirkland, 2009 WL 119865 (Del. Ch. Jan. 20, 2009): Define the elements and causation/harm requirements for Delaware unfair competition. Without proof tying lost business to the wrongful act, recovery fails.
- Unitherm Food Systems, Inc. v. Swift‑Eckrich, Inc., 546 U.S. 394 (2006): Grounds the earlier limited remand—trial judges must assess Rule 50(b) issues “in the first instance.”
- Curley v. Klem, 499 F.3d 199 (3d Cir. 2007); Marra v. Phila. Hous. Auth., 497 F.3d 286 (3d Cir. 2007); Leonard v. Stemtech Int’l Inc., 834 F.3d 376 (3d Cir. 2016): Standards for Rule 50(b)/new trial review, emphasizing deference to the jury unless no reasonable basis supports the verdict.
2) The Court’s Legal Reasoning
a) Establishment claims and necessary implication. The court categorized Natera’s statements as “establishment claims” because they asserted superiority “as proven by” studies. Once Natera placed Bloom and Sigdel results side‑by‑side to proclaim “higher,” “superior,” “more sensitive and specific,” or “unparalleled precision,” the ads necessarily implied that the studies were comparable and that the comparisons established Prospera’s superiority.
b) Why the claims were literally false. Substantial evidence showed the two studies were not reliable head‑to‑head comparators: Bloom was multi‑site and prospective; Sigdel was single‑site and retrospective, drawing on banked samples; patient populations and event distributions differed; overlapping confidence intervals and lack of statistical testing undercut claims of superiority; metrics like NPV were computed under different prevalence assumptions. Internal Natera documents and testimony acknowledged that true “apples‑to‑apples” comparison was “not possible,” that specificity was lower for Prospera, that AUC differences lacked statistical significance, and that pediatric sensitivity could not be claimed because there were zero rejection events in that subgroup. Reading the ads in context, the panel agreed a reasonable jury could find the necessary implication of comparability to be false, satisfying literal falsity.
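The overlapping-confidence-interval point can be made concrete. The sketch below, a purely illustrative calculation, uses the sensitivity figures reported in the two studies (59.3% and 88.7%) but hypothetical event counts, since the actual denominators are not reproduced in this summary; with small samples, the 95% intervals around even widely separated point estimates can overlap, which is one reason a bare side-by-side comparison does not establish statistical superiority.

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wald confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Reported sensitivities, with HYPOTHETICAL event counts for illustration only.
sens_bloom, n_bloom = 0.593, 27    # AlloSure (Bloom study)
sens_sigdel, n_sigdel = 0.887, 25  # Prospera (Sigdel study)

ci_bloom = wald_ci(sens_bloom, n_bloom)
ci_sigdel = wald_ci(sens_sigdel, n_sigdel)
# Two intervals overlap if each lower bound sits below the other's upper bound.
overlap = ci_bloom[1] >= ci_sigdel[0] and ci_sigdel[1] >= ci_bloom[0]

print(f"Bloom 95% CI:  ({ci_bloom[0]:.3f}, {ci_bloom[1]:.3f})")
print(f"Sigdel 95% CI: ({ci_sigdel[0]:.3f}, {ci_sigdel[1]:.3f})")
print("Intervals overlap:", overlap)
```

With these assumed sample sizes the intervals overlap, meaning the point-estimate gap alone would not support a claim of statistically significant superiority even if the cohorts were otherwise comparable.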
c) No consumer‑perception proof required for literal falsity. Because the ads conveyed unambiguous messages by necessary implication, the court reaffirmed that plaintiffs need not present survey or perception evidence to establish literal falsity or to obtain injunctive relief.
d) Damages require actual deception and reliance. The court separated the remedies tracks. For an injunction, literal falsity suffices. For money damages, however, the plaintiff must prove that the falsehood actually deceived a portion of the purchasing public and that buyers relied on it in choosing the accused product. CareDx’s proof—general statements about “confusion,” evidence of campaign prominence and sales success, and willfulness—did not show that any physician or purchaser chose Prospera because of the false claims or would have chosen differently had the truth been known. Without customer reliance, damages cannot stand.
e) State‑law unfair competition and punitive damages. Mirroring the damages failure, the unfair‑competition claim fell for lack of causation and harm; the record did not link any lost center or purchase to the false statements. Punitive damages tethered to that claim were properly vacated.
3) Claim‑by‑Claim Highlights
- Claim A (“More sensitive and specific than current assessment tools”): Unambiguous reference to AlloSure. Literally false because specificity was lower for Prospera and the studies were not comparable.
- Claim B (“Better performance” on sensitivity 89% vs. 59%): Side‑by‑side messaging necessarily implied comparability and superiority; record evidence showed the studies did not establish Prospera’s sensitivity superiority.
- Claims C/D/G (“Superior data/precision”; “Higher AUC 0.87 vs. 0.74”): Literal falsity because AUC comparisons lacked statistical significance and cross‑study comparability; internal admissions underscored the point.
- Claims E/F (“3x fewer rejections missed”; NPV bar charts): Literally false because NPV depends on prevalence and cohort composition; recalculations and cross‑cohort comparisons did not cure the non‑comparability.
- Claim H (“Unparalleled Precision” wheel incorporating sensitivity and NPV claims): Unambiguous in context; literally false for the same reasons as B/E/F.
- Claim J (“Highly sensitive across … patients,” including under‑18s): Literally false as to pediatrics; zero pediatric rejection events meant sensitivity could not be calculated or claimed.
4) Impact and Forward‑Looking Implications
- Scientific comparative advertising: The opinion crystallizes a practical rule: if an advertiser places results from different studies side‑by‑side to claim superiority, courts will treat that as an implied head‑to‑head comparison. Without reliable comparability (design, cohort, endpoints, statistical significance), the claim risks being literally false by necessary implication.
- Life‑sciences marketing: Metrics like sensitivity, specificity, AUC, and NPV are not plug‑and‑play across studies. Differences in prevalence, cutoffs, sites, prospective vs. retrospective designs, and confidence intervals matter. “Recalculations” to harmonize prevalence may not salvage comparability.
- Litigation strategy:
- To win liability/injunction: Focus on study design disparities, internal admissions, and lack of statistical significance. Survey evidence is optional for literal falsity.
- To recover money: Build a record of actual deception and reliance—e.g., testimony from physicians/procurement committees that they switched because of the specific statements; contemporaneous call notes; well‑designed HCP surveys linking purchasing decisions to the challenged claims; econometric analyses connecting ad exposure to conversions.
- Compliance and risk management: Avoid sweeping subgroup claims without event support (e.g., pediatrics with zero events). Avoid “more sensitive and specific” if one metric is worse. If asserting AUC superiority, ensure statistical significance and clinical comparability.
- Remedies bifurcation remains sharp in the Third Circuit: Plaintiffs can enjoin literally false ads without consumer deception evidence, but they cannot obtain damages without proving actual deception and reliance. Willfulness does not create a presumption of deception for damages in this Circuit.
Complex Concepts Simplified
- Sensitivity: The chance a test correctly flags true rejection (avoids false negatives). Missing a rejection can endanger graft survival.
- Specificity: The chance a test correctly gives a negative when there is no rejection (avoids false positives), reducing unnecessary biopsies.
- AUC (Area Under the Curve): A combined measure of sensitivity and specificity across thresholds; a higher AUC can indicate better discrimination, but the comparison is meaningful only if cohorts, thresholds, and analyses are comparable and the difference is statistically significant.
- NPV (Negative Predictive Value): The probability that a negative test result truly means no rejection. It depends critically on disease prevalence: NPV figures cannot be compared across studies with different prevalence without careful adjustment, and even adjusted figures may remain incomparable because of cohort differences.
- Prospective vs. retrospective studies: Prospective designs enroll patients and then collect outcomes going forward (reducing selection bias). Retrospective designs analyze existing samples/data, which may be “cleaner” but more prone to selection bias.
- Head‑to‑head comparability: A true head‑to‑head comparison measures both products under the same design, population, endpoints, and statistical plan. Cross‑study comparisons rarely satisfy this unless rigorous methods establish equivalence of conditions.
- Confidence intervals/statistical significance: Overlapping confidence intervals often mean no statistically significant difference; claiming superiority in that circumstance can be misleading or false.
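The prevalence sensitivity of NPV can be shown directly with Bayes' rule. The sketch below holds test performance fixed at the Sigdel-reported sensitivity and specificity and varies only the assumed rejection prevalence (both prevalence values are hypothetical); the NPV moves materially, which is why NPV bar charts drawn from cohorts with different prevalence are not comparable.

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value from test performance and prevalence (Bayes)."""
    true_negatives = specificity * (1 - prevalence)    # P(neg test, no rejection)
    false_negatives = (1 - sensitivity) * prevalence   # P(neg test, rejection)
    return true_negatives / (true_negatives + false_negatives)

# Same test performance (Sigdel-reported sens/spec), two hypothetical prevalences:
npv_low = npv(0.887, 0.726, 0.10)
npv_high = npv(0.887, 0.726, 0.30)
print(f"prevalence 10%: NPV = {npv_low:.3f}")
print(f"prevalence 30%: NPV = {npv_high:.3f}")
```

Identical test performance yields an NPV near 0.98 at 10% prevalence but only about 0.94 at 30%, so a cross-study NPV gap may reflect nothing more than different patient populations.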
Practical Checklists
For Advertisers (especially life sciences and diagnostics)
- Before claiming “superior,” “higher,” or “more sensitive/specific,” confirm:
- Same design type (prospective vs. retrospective), similar sites, and comparable patient populations.
- Harmonized endpoints, thresholds, and timing.
- Robust statistical testing showing differences are significant.
- NPV/PPV comparisons properly adjusted for prevalence and still clinically comparable.
- Avoid subgroup performance claims unless there are sufficient events to calculate metrics in that subgroup.
- Do not rely on “recalculated” competitor metrics without confirming the underlying cohort comparability and disclosing key assumptions.
- Train marketing teams to recognize establishment claims; ensure medical/scientific review signs off on comparability.
- Maintain documentation showing why you believed comparability existed at the time of publication.
For Plaintiffs Seeking Damages
- Collect direct reliance evidence: declarations/depositions from decision‑makers that the specific ad claims influenced their purchases.
- Deploy validated HCP surveys tying purchasing decisions to the challenged statements (not just “confusion”).
- Use econometric methods (e.g., difference‑in‑differences with ad exposure instruments) to connect false claims to conversions and quantify impact.
- Preserve contemporaneous CRM/call notes documenting objections and reasons for switching.
- Budget for corrective advertising studies and tie the spend causally to the falsehood’s impact on decisions.
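The econometric approach in the checklist can be sketched in its simplest form. The example below is a toy difference-in-differences calculation on wholly hypothetical purchase-share data for centers exposed versus not exposed to the challenged ads; a real analysis would use regression on panel data with controls and standard errors, but the core estimator is the same double difference.

```python
# Toy purchase shares (HYPOTHETICAL) by exposure group and period.
shares = {
    ("exposed", "pre"): 0.40,   ("exposed", "post"): 0.55,
    ("unexposed", "pre"): 0.41, ("unexposed", "post"): 0.46,
}

def diff_in_diff(d: dict) -> float:
    """DiD estimate: (exposed post - pre) minus (unexposed post - pre).

    The unexposed group's change nets out market-wide trends, isolating the
    shift plausibly attributable to ad exposure."""
    exposed_change = d[("exposed", "post")] - d[("exposed", "pre")]
    unexposed_change = d[("unexposed", "post")] - d[("unexposed", "pre")]
    return exposed_change - unexposed_change

effect = diff_in_diff(shares)
print(f"Estimated ad-attributable shift in purchase share: {effect:.2f}")
```

Here the exposed group's 15-point gain, net of the 5-point market-wide trend, yields a 10-point estimated effect. Evidence of this kind, tied to the specific challenged statements, is what the Third Circuit found missing on the damages record.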
Conclusion
CareDx v. Natera underscores two enduring Lanham Act truths in the Third Circuit. First, in scientific advertising, side‑by‑side cross‑study claims that trumpet “superiority” will be read to imply head‑to‑head comparability; where designs, cohorts, and statistics do not support that premise, such “establishment” claims are literally false by necessary implication and enjoinable without consumer surveys. Second, damages are different: they require proof that the falsehood actually deceived purchasers and that buyers relied on it when choosing the competitor’s product. Willfulness, marketing emphasis, or generalized marketplace “confusion” do not bridge that evidentiary gap.
While non‑precedential, the opinion offers concrete guidance to life‑sciences marketers and litigants alike: build claims on truly comparable data (or avoid superiority rhetoric), substantiate statistical significance, omit subgroup boasts lacking event support, and, if you seek money, assemble a record of real‑world reliance. The court’s careful parsing of scientific comparability and remedial standards will likely prove persuasive in future disputes over data‑driven comparative advertising.