Beth Tucker Long

A common but troublesome factual cause problem arises in the following medical malpractice scenario. A doctor negligently treats or fails to diagnose a patient’s medical condition, and the patient dies or suffers serious harm from the condition. The patient (or the patient’s family) can prove that due care might have prevented that harm but cannot prove this causal link by a preponderance of the evidence. In recent years, most courts have responded to this “loss of a chance” of a better medical outcome (LOC) problem not by denying all liability, and not by awarding full damages, but instead by awarding partial damages. Most scholars, and the most recent drafts of two Restatement Third, Torts projects,1 endorse this response.

In her illuminating and provocative article, Damned Causation, Professor Elissa Philip Gentry takes a different tack. She is deeply skeptical of overreliance on general statistics in LOC cases and urges a more nuanced approach, an approach that grants much greater discretion to the jury. In the course of her careful analysis, Gentry clarifies the complex statistical issues that these cases raise and offers a promising alternative to current judicial practice.

Gentry begins by identifying the “attributable risk rate” as the most appropriate initial metric for measuring the chance that a doctor’s negligence made the patient worse off (P. 434). The meaning of that rate is best understood through examples. Suppose that at the time of the doctor’s negligence, the patient had a background (or “inevitable”) 60% risk of dying of cancer, which the doctor’s negligence increased to a 90% risk of death. The patient dies of cancer. Under the traditional preponderance test of factual cause, the doctor would not pay any damages, because it is more likely than not that the patient would have died even if the doctor had used due care. In this example, the “avoidable” ex ante risk—i.e., the risk of death that due care could have prevented—is 30%.

Gentry would compute the attributable risk that the doctor caused the death of this patient as 33%, because there was a 30/90 chance that the negligence caused the death. Thus, if a court rejects the traditional preponderance test and permits the award of proportional damages for the patient’s death, those damages should equal 33% of the full damages that would be awarded if the negligence of the doctor unquestionably was a factual cause of the death.

Gentry also gives the example of a doctor whose negligence decreases the patient’s chance of survival from 85% to 80%, and whose patient dies of the relevant disease (P. 434). If we convert these percentages into the mathematically equivalent risk of death, the doctor has increased that risk from 15% to 20%. Gentry then computes the attributable risk rate as 5/20, or 25%. In both this example and the prior example, Gentry’s computation of the size of the chance of survival that was “lost” deliberately ignores the ex ante risk that the patient would not die, because in both examples, it is known as of trial that the patient did die. Most scholars who have addressed this issue agree that, insofar as these probabilities are intended to provide the best ex post approximation of the chance that the doctor caused the death, this ratio method is the best method of computing the probabilities.2
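The ratio method described in both examples can be captured in a short calculation. The sketch below is my own illustration, not Gentry’s notation: it simply computes (avoidable risk) / (total risk), the ex post probability that the negligence caused the harm, given that the harm occurred.

```python
def attributable_risk_rate(inevitable_risk, total_risk):
    """Ex post chance that negligence caused the observed harm:
    (total risk - inevitable risk) / total risk, given the harm occurred."""
    avoidable_risk = total_risk - inevitable_risk
    return avoidable_risk / total_risk

# First example: background 60% risk of death, raised to 90% by negligence.
print(round(attributable_risk_rate(0.60, 0.90), 2))  # 0.33

# Second example: survival falls from 85% to 80%, i.e. death risk 15% -> 20%.
print(round(attributable_risk_rate(0.15, 0.20), 2))  # 0.25
```

The subtraction method mentioned in note 2 would instead award 30% and 5% of full damages in these two examples, which is why the choice of computation method matters so much to plaintiffs.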

But, Gentry argues, courts should not be satisfied with initial probability estimates. They should be very careful when employing this type of probabilistic statistical information, recognizing its limits as well as its value. Specifically, they should not automatically permit damages (even partial damages) simply because the attributable risk exceeds some specified threshold, such as 50% or 30% or 10%; nor should they automatically exclude damages (even full damages) simply because the attributable risk falls below some threshold.

Why should courts hesitate? Because, as Gentry points out, the statistical information typically offered by experts in LOC cases is group-based information from empirical studies, such as overall survival rates if cancer is diagnosed at Stage I, II, III or IV; but that information is sometimes a poor approximation of (a) the individual patient’s preexisting risk of suffering harm apart from the doctor’s negligence or (b) that patient’s amenability to cure if the doctor uses due care.

In a series of highly instructive tables and graphics, Gentry presents scenarios in which the group defined by an initial attributable risk rate (such as 30%) actually contains several distinct subgroups, some with a much higher risk rate, and others with a much lower one. These subgroups reflect individualized factors such as the patient’s demographic characteristics, medical history, lifestyle choices, and genetic endowments. And, she claims, if further evidence is available to distinguish which subgroup the patient is a member of, that patient might properly obtain either a full damage recovery, partial recovery (but not necessarily in proportion to the overall initial group risk rate), or no recovery.

Some of Gentry’s examples illustrate the danger that reliance on overly general statistical information will result in overcompensation of plaintiffs, by awarding full or partial damages even though more detailed patient-specific information might reveal a very strong likelihood that the patient was not made worse off by the doctor’s negligence. But using overly general statistical information can also undercompensate plaintiffs, because more specific ex post information sometimes indicates that the general statistical information understates the probability that defendant’s negligence caused the patient’s harm.3

What, then, is Gentry’s solution? She proposes that, instead of giving decisive weight to initial probability estimates based on readily available information, courts should undertake a two-step process. First, they should “personalize” the attributable risk information, adjusting it to make it as accurate as possible, in light of both the patient’s observable and unobservable4 characteristics.

Second, they should “operationalize” the information by determining whether the patient’s harm is “distinguishable.” It is distinguishable if ex post evidence, acquired after the patient suffers harm, does demonstrate, or potentially can demonstrate, whether the patient’s harm was inevitable or instead avoidable; otherwise, it is indistinguishable. For example, available ex post evidence might show that the patient’s tumor grew unusually quickly, or unusually slowly, relative to the population in the initial statistical study. The jury should, according to Gentry, adjust the attributable risk rate to reflect such evidence.

This innovative analysis holds the promise of achieving greater accuracy in determining whether the defendant’s negligence was the factual cause of the patient’s harm. The analysis is plausible in the abstract, but it does not resolve some questions. First, an important rationale for awarding partial damages in LOC cases is to prevent a recurring pocket of legal immunity from developing. If the inevitable risk of death is greater than 60%, for example, the traditional preponderance test cannot be satisfied, yet most courts have held that optimal deterrence and fairness support a damage award. Although Gentry’s proposed refinement of the probability analysis might further this rationale in some cases, it is not guaranteed to do so, because it gives no explicit weight to whether awarding partial damages will avoid a pocket of immunity.

Second, “distinguishability” of the harm is a key component of Gentry’s proposal, but distinguishability is a matter of degree.5 Thus, the inquiry into distinguishability will itself be costly to the parties and prone to error. A court that adopts the proposal might therefore need to rely on presumptions and bright-line rules in order to keep the two-stage inquiry manageable. However, if we complicate the current practice of using cruder statistics in LOC cases by adopting numerous refinements, it might be extremely difficult for experts to offer plausible probabilistic estimates of both the preexisting or “inevitable” risk of harm faced by the individual plaintiff and the additional “avoidable” risk that the defendant’s negligence created.

If that is correct, then courts might well be uncomfortable permitting any award of partial damages, because expert evidence for computing the proportion of damages that plaintiff should receive is lacking. The upshot? The jury would be left with the choice of awarding either full or no damages. Yet the desire to avoid that all-or-nothing choice has been a major impetus behind judicial recognition of LOC as a distinct legal doctrine.

A related question is when, under Gentry’s proposal, partial damages should be awarded for LOC. She endorses proportional damages in indistinguishable harm cases (P. 459), but she is doubtful that the jury can make reliable proportional damage calculations in distinguishable harm cases (P. 461). But if a large proportion of current LOC partial damage cases are characterized as distinguishable harm cases, then the partial damage remedy will become much less common, a result that might be to the disadvantage of injured patients.

Notwithstanding these lingering questions, Gentry’s article is a major contribution to the literature on LOC, properly emphasizing (as most courts have not) the importance of the question whether it is feasible to distinguish, based on ex post evidence, whether the harm to a patient was inevitable or was due to negligence. And more generally, she offers an illuminating framework for evaluating the ways in which statistical information can and should be used when the factfinder is considering whether the plaintiff has established legal causation. Medical malpractice is the only area of tort law in which most courts have been willing to make wide use of probabilistic statistical information in determining the causation and valuation of harm. As Gentry emphasizes, such information is likely to become easier to collect and aggregate in the future. It will become increasingly important for courts and scholars to develop justifiable and refined methods for using probabilistic information in fields outside of medical malpractice. (P. 422, 462.) The insights of Damned Causation will be invaluable as we explore these new horizons.

  1. Restatement Third, Torts: Medical Malpractice, § 8 (Council Draft No. 1, 2023); Restatement Third, Torts: Remedies, § 11 (Tentative Draft No. 2, 2023).
  2. However, almost all courts that have awarded partial damages in LOC cases have instead adopted a subtraction computation method. In the first example in the text, they would award 30% of the usual damages for death (=90% minus 60%), rather than 33%; and in the second example, they would award at most 5% of the usual damages (=85% minus 80%), rather than 25%. For an extensive discussion of the choice between these computation methods, concluding that the subtraction method frequently undercompensates plaintiffs and that the ratio method is almost always superior, see Kenneth W. Simons, Lost Chance of a Better Medical Outcome: New Tort, New Type of Compensable Injury, or New Causation Rule? __ DePaul L. Rev. __ (forthcoming), available at SSRN (Aug. 25, 2023).
  3. In a telling example, Gentry explains how statistical data can result in an implausible analysis of factual cause in the routine scenario of a speeding driver: “[A] recent study suggests that a 1% increase in speed results in an increased chance of crash of 2%. [Thus] a 50% increase in speed will lead to a 100% increase in harm rate… [If] a driver … was going 90-mph in a 70-mph zone (roughly a 28.6% increase in speed), a reasonable jury may well find that the driver breached the standard of care; however, a jury would not be allowed to find that speeding caused an accident unless the driver was going 105-mph … in a 70-mph zone.” As Gentry explains, if the speed of the driver was less than 105 mph, the statistics by themselves suggest that the crash was probably not due to speeding. (At 105 mph, as compared to 70 mph, the driver has increased the risk of a crash by 100%, so it is equally probable that the crash was (a) avoidable, i.e., due to speeding, or (b) inevitable, i.e., it would have occurred even if the driver had not been speeding.) She concludes: “Intuitively, this seems over-restrictive, missing many cases in which a reasonable jury could find that the speeding caused the crash.” (P. 423.) Gentry is surely correct that a jury would be permitted to find factual cause and to award full damages in most scenarios in which a driver exceeded the speed limit by 20 mph, even if the defendant introduced the probabilistic evidence that she mentions. One potential explanation, consistent with Gentry’s analysis, is that when that speeding driver’s car harms a plaintiff, it is very likely that the driver was also negligent in some other way, such as failing to pay sufficient attention to his surroundings or failing to keep a safe distance from other drivers.
Thus, even if the probabilistic evidence about the general effect of different degrees of speeding on crashes is statistically valid, that evidence is not decisive, because it does not reflect the likelihood that, when a crash occurs, the driver was negligent in some additional respect.
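The break-even arithmetic in this footnote can be made explicit. The sketch below is my own illustration, assuming (as the cited study is described) a linear relationship in which each 1% of speed over the limit raises the crash rate by 2%; it computes the fraction of crash risk attributable to speeding, given that a crash occurred.

```python
def crash_attributable_fraction(speed, limit, crash_pct_per_speed_pct=2.0):
    """Fraction of crash risk attributable to speeding, assuming each 1%
    over the limit raises the crash rate by 2% (a linearity assumption)."""
    speed_increase_pct = 100 * (speed - limit) / limit
    crash_increase_pct = crash_pct_per_speed_pct * speed_increase_pct
    # Added risk / total risk, conditional on a crash having occurred.
    return crash_increase_pct / (100 + crash_increase_pct)

print(round(crash_attributable_fraction(90, 70), 2))   # 0.36 -- below 50%
print(round(crash_attributable_fraction(105, 70), 2))  # 0.5  -- the break-even
```

At 90 mph the attributable fraction is about 36%, short of a preponderance; only at 105 mph (a 50% speed increase, doubling the crash rate) does it reach the 50% threshold, which is the over-restrictive result Gentry criticizes.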
  4. Gentry does not fully clarify how she expects courts and juries to consider unobservable characteristics, such as genetic endowment. Perhaps she believes that the existence of such characteristics should simply cause these legal actors to give less weight to initial probability estimates; or perhaps, after the harm has been caused, it is sometimes feasible to identify such characteristics and thus to refine the relevant probabilities.
  5. Gentry states that the jury should rely on the following type of expert testimony: “First, what sort of individuating evidence, if any, is likely to be available to a member of the avoidable class? Second, does the evidence on the record constitute such evidence?” (P. 455; emphasis added). The italicized language makes this criterion difficult to apply.