
Refining the Use of Probabilistic Evidence in Loss of a Chance Cases

Beth Tucker Long


A common but troublesome factual cause problem arises in the following medical malpractice scenario. A doctor negligently treats or fails to diagnose a patient’s medical condition, and the patient dies or suffers serious harm from the condition. The patient (or the patient’s family) can prove that due care might have prevented that harm but cannot prove this causal link by a preponderance of the evidence. In recent years, most courts have responded to this “loss of a chance” of a better medical outcome (LOC) problem not by denying all liability, and not by awarding full damages, but instead by awarding partial damages. Most scholars, and the most recent drafts of two Restatement Third, Torts projects,1 endorse this response.

In her illuminating and provocative article, Damned Causation, Professor Elissa Philip Gentry takes a different tack. She is deeply skeptical of overreliance on general statistics in LOC cases and urges a more nuanced approach, an approach that grants much greater discretion to the jury. In the course of her careful analysis, Gentry clarifies the complex statistical issues that these cases raise and offers a promising alternative to current judicial practice.

Gentry begins by identifying the “attributable risk rate” as the most appropriate initial metric for measuring the chance that a doctor’s negligence made the patient worse off (P. 434). The meaning of that rate is best understood through examples. Suppose that at the time of the doctor’s negligence, the patient had a background (or “inevitable”) 60% risk of dying of cancer, which the doctor’s negligence increased to a 90% risk of death. The patient dies of cancer. Under the traditional preponderance test of factual cause, the doctor would not pay any damages, because it is more likely than not that the patient would have died even if the doctor had used due care. In this example, the “avoidable” ex ante risk—i.e. the risk of death that due care could have prevented—is 30%.

Gentry would compute the attributable risk that the doctor caused the death of this patient as 33%, because there was a 30/90 chance that the negligence caused the death. Thus, if a court rejects the traditional preponderance test and permits the award of proportional damages for the patient’s death, those damages should equal 33% of the full damages that would be awarded if the negligence of the doctor unquestionably was a factual cause of the death.

Gentry also gives the example of a doctor whose negligence decreases the patient’s chance of survival from 85% to 80%, and whose patient dies of the relevant disease (P. 434). If we convert these percentages into the mathematically equivalent risk of death, the doctor has increased that risk from 15% to 20%. Gentry then computes the attributable risk rate as 5/20, or 25%. In both this example and the prior example, Gentry’s computation of the size of the chance of survival that was “lost” deliberately ignores the ex ante risk that the patient would not die, because in both examples, it is known as of trial that the patient did die. Most scholars who have addressed this issue agree that, insofar as these probabilities are intended to provide the best ex post approximation of the chance that the doctor caused the death, this ratio method is the best method of computing the probabilities.2
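To make the arithmetic concrete, here is a minimal sketch, added for this review rather than drawn from the article; the Python function names are hypothetical. It compares the ratio method Gentry uses with the subtraction method that most courts have used (discussed in note 2 below).

```python
def attributable_risk_rate(baseline_risk: float, risk_with_negligence: float) -> float:
    """Ratio method: the share of the realized harm attributable to the negligence,
    i.e. (avoidable risk) / (total ex post risk), given that the harm occurred."""
    return (risk_with_negligence - baseline_risk) / risk_with_negligence

def subtraction_rate(baseline_risk: float, risk_with_negligence: float) -> float:
    """Subtraction method used by most courts (see note 2): the raw difference in risk."""
    return risk_with_negligence - baseline_risk

# First example: background risk of death 60%, raised to 90% by the negligence.
print(attributable_risk_rate(0.60, 0.90))  # about 0.33 -> roughly 33% of full damages
print(subtraction_rate(0.60, 0.90))        # about 0.30 -> 30% under the subtraction method

# Second example: survival falls from 85% to 80%, so the risk of death rises from 15% to 20%.
print(attributable_risk_rate(0.15, 0.20))  # about 0.25 -> 25% of full damages
print(subtraction_rate(0.15, 0.20))        # about 0.05 -> 5% under the subtraction method
```

On these inputs the two methods diverge substantially, which is one reason the choice between them matters.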

But, Gentry argues, courts should not be satisfied with initial probability estimates. They should be very careful when employing this type of probabilistic statistical information, recognizing its limits as well as its value. Specifically, they should not automatically permit damages (even partial damages) simply because the attributable risk exceeds some specified threshold, such as 50% or 30% or 10%; nor should they automatically exclude damages (even full damages) simply because the attributable risk falls below some threshold.

Why should courts hesitate? Because, as Gentry points out, the statistical information typically offered by experts in LOC cases is group-based information from empirical studies, such as overall survival rates if cancer is diagnosed at Stage I, II, III or IV; but that information is sometimes a poor approximation of (a) the individual patient’s preexisting risk of suffering harm apart from the doctor’s negligence or (b) that patient’s amenability to cure if the doctor uses due care.

In a series of highly instructive tables and graphics, Gentry presents scenarios in which the group defined by an initial attributable risk rate (such as 30%) actually contains several distinct subgroups, some with a much higher risk rate, and others with a much lower one. These subgroups reflect individualized factors such as the patient’s demographic characteristics, medical history, lifestyle choices, and genetic endowments. And, she claims, if further evidence is available to identify which subgroup the patient belongs to, that patient might properly obtain a full damage recovery, a partial recovery (but not necessarily in proportion to the overall initial group risk rate), or no recovery.
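A stylized numerical sketch makes the point; the subgroup labels and figures below are invented for illustration and are not Gentry’s.

```python
# Hypothetical illustration: an overall 30% attributable risk can mask very
# different subgroup-specific rates.
subgroups = {
    "fast-growing variant": {"share_of_deaths": 0.5, "attributable_risk": 0.55},
    "slow-growing variant": {"share_of_deaths": 0.5, "attributable_risk": 0.05},
}

overall = sum(g["share_of_deaths"] * g["attributable_risk"] for g in subgroups.values())
print(overall)  # about 0.30 -- the crude group-level rate an expert might initially report

# If individualized evidence places the patient in the first subgroup, the chance that the
# negligence caused the death is 55% (perhaps supporting full recovery under the
# preponderance test); if in the second, it is only 5% (perhaps supporting no recovery).
```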

Some of Gentry’s examples illustrate the danger that reliance on overly general statistical information will result in overcompensation of plaintiffs, by awarding full or partial damages even though more detailed patient-specific information might reveal a very strong likelihood that the patient was not made worse off by the doctor’s negligence. But using overly general statistical information can also undercompensate plaintiffs, because more specific ex post information sometimes indicates that the general statistical information understates the probability that defendant’s negligence caused the patient’s harm.3

What, then, is Gentry’s solution? She proposes that, instead of giving decisive weight to initial probability estimates based on readily available information, courts should undertake a two-step process. First, they should “personalize” the attributable risk information, adjusting it to make it as accurate as possible, in light of both the patient’s observable and unobservable4 characteristics.

Second, they should “operationalize” the information by determining whether the patient’s harm is “distinguishable.” It is distinguishable if ex post evidence, acquired after the patient suffers harm, does demonstrate, or potentially can demonstrate, whether the patient’s harm was inevitable or instead avoidable; otherwise, it is indistinguishable. For example, available ex post evidence might show that the patient’s tumor grew unusually quickly, or unusually slowly, relative to the population in the initial statistical study. The jury should, according to Gentry, adjust the attributable risk rate to reflect such evidence.

This innovative analysis holds the promise of achieving greater accuracy in determining whether the defendant’s negligence was the factual cause of the patient’s harm. The analysis is plausible in the abstract, but it leaves some questions unresolved. First, an important rationale for awarding partial damages in LOC cases is to prevent a recurring pocket of legal immunity from developing. If the inevitable risk of death is greater than 60%, for example, the traditional preponderance test cannot be satisfied, yet most courts have held that optimal deterrence and fairness support a damage award. Although Gentry’s proposed refinement of the probability analysis might further this rationale in some cases, it is not guaranteed to do so, because it gives no explicit weight to whether awarding partial damages will avoid a pocket of immunity.

Second, “distinguishability” of the harm is a key component of Gentry’s proposal, but distinguishability is a matter of degree.5 Thus, the inquiry into distinguishability will itself be costly to the parties and prone to error. A court that adopts the proposal might therefore need to rely on presumptions and bright-line rules in order to keep the two-stage inquiry manageable. However, if we complicate the current practice of using cruder statistics in LOC cases by adopting numerous refinements, it might be extremely difficult for experts to offer plausible probabilistic estimates of both the preexisting or “inevitable” risk of harm faced by the individual plaintiff and the additional “avoidable” risk that the defendant’s negligence created.

If that is correct, then courts might well be uncomfortable permitting any award of partial damages, because expert evidence for computing the proportion of damages that plaintiff should receive is lacking. The upshot? The jury would be left with the choice of awarding either full or no damages. Yet the desire to avoid that all-or-nothing choice has been a major impetus behind judicial recognition of LOC as a distinct legal doctrine.

A related question is when, under Gentry’s proposal, partial damages should be awarded for LOC. She endorses proportional damages in indistinguishable harm cases (P. 459), but she is doubtful that the jury can make reliable proportional damage calculations in distinguishable harm cases (P. 461). Yet if a large proportion of current LOC partial damage cases are characterized as distinguishable harm cases, then the partial damage remedy will become much less common, a result that might be to the disadvantage of injured patients.

Notwithstanding these lingering questions, Gentry’s article is a major contribution to the literature on LOC, properly emphasizing (as most courts have not) the importance of the question whether it is feasible to distinguish, based on ex post evidence, whether the harm to a patient was inevitable or was due to negligence. And more generally, she offers an illuminating framework for evaluating the ways in which statistical information can and should be used when the factfinder is considering whether the plaintiff has established legal causation. Medical malpractice is the only area of tort law in which most courts have been willing to make wide use of probabilistic statistical information in determining the causation and valuation of harm. As Gentry emphasizes, such information is likely to become easier to collect and aggregate in the future. It will become increasingly important for courts and scholars to develop justifiable and refined methods for using probabilistic information in fields outside of medical malpractice. (P. 422, 462.) The insights of Damned Causation will be invaluable as we explore these new horizons.

  1. Restatement Third, Torts: Medical Malpractice, § 8 (Council Draft No. 1, 2023); Restatement Third, Torts: Remedies, § 11 (Tentative Draft No. 2, 2023).
  2. However, almost all courts that have awarded partial damages in LOC cases have instead adopted a subtraction computation method. In the first example in the text, they would award 30% of the usual damages for death (=90% minus 60%), rather than 33%; and in the second example, they would award at most 5% of the usual damages (=85% minus 80%), rather than 25%. For an extensive discussion of the choice between these computation methods, concluding that the subtraction method frequently undercompensates plaintiffs and that the ratio method is almost always superior, see Kenneth W. Simons, Lost Chance of a Better Medical Outcome: New Tort, New Type of Compensable Injury, or New Causation Rule? __ DePaul L. Rev. __ (forthcoming), available at SSRN (Aug. 25, 2023).
  3. In a telling example, Gentry explains how statistical data can result in an implausible analysis of factual cause in the routine scenario of a speeding driver: “[A] recent study suggests that a 1% increase in speed results in an increased chance of crash of 2%. [Thus] a 50% increase in speed will lead to a 100% increase in harm rate … [If] a driver … was going 90-mph in a 70-mph zone (roughly a 28.6% increase in speed), a reasonable jury may well find that the driver breached the standard of care; however, a jury would not be allowed to find that speeding caused an accident unless the driver was going 105-mph … in a 70-mph zone.” As Gentry explains, if the speed of the driver was less than 105 mph, the statistics by themselves suggest that the crash was probably not due to speeding. (At 105 mph, as compared to 70 mph, the driver has increased the risk of a crash by 100%, so it is equally probable that the crash was (a) avoidable, i.e. due to speeding, or (b) inevitable, i.e. it would have occurred even if the driver had not been speeding. The arithmetic behind this 105-mph threshold is sketched after these notes.) She concludes: “Intuitively, this seems over-restrictive, missing many cases in which a reasonable jury could find that the speeding caused the crash.” (P. 423.) Gentry is surely correct that a jury would be permitted to find factual cause and to award full damages in most scenarios in which a driver exceeded the speed limit by 20 mph, even if the defendant introduced the probabilistic evidence that she mentions. One potential explanation, consistent with Gentry’s analysis, is that when that speeding driver’s car harms a plaintiff, it is very likely that the driver was also negligent in some other way, such as failing to pay sufficient attention to his surroundings or failing to keep a safe distance from other drivers. Thus, even if the probabilistic evidence about the general effect of different degrees of speeding on crashes is statistically valid, that evidence is not decisive, because it does not reflect the likelihood that, when a crash occurs, the driver was negligent in some additional respect.
  4. Gentry does not fully clarify how she expects courts and juries to consider unobservable characteristics, such as genetic endowment. Perhaps she believes that the existence of such characteristics should simply cause these legal actors to give less weight to initial probability estimates; or perhaps, after the harm has been caused, it is sometimes feasible to identify such characteristics and thus to refine the relevant probabilities.
  5. Gentry states that the jury should rely on the following type of expert testimony: “First, what sort of individuating evidence, if any, is likely to be available to a member of the avoidable class? Second, does the evidence on the record constitute such evidence?” (P. 455; emphasis added). The italicized language makes this criterion difficult to apply.
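As a postscript to note 3, the 105-mph threshold can be reproduced with the same style of calculation. This is again an illustration added here, with hypothetical names, assuming the study’s rule of thumb that each 1% increase in speed raises crash risk by roughly 2%.

```python
def crash_attributable_risk(speed_limit: float, actual_speed: float,
                            elasticity: float = 2.0) -> float:
    """Ex post probability that a crash was due to speeding, assuming each 1% increase
    in speed raises crash risk by `elasticity` percent (2%, per the study Gentry cites)."""
    pct_over = (actual_speed - speed_limit) / speed_limit  # 90 vs. 70 -> about 28.6% over
    extra_risk = elasticity * pct_over                     # about 57% more crash risk
    return extra_risk / (1.0 + extra_risk)                 # avoidable share of total risk

print(crash_attributable_risk(70, 90))   # about 0.36 -> below the 50% preponderance threshold
print(crash_attributable_risk(70, 105))  # 0.5 -> exactly the threshold Gentry identifies
```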

test post

Beth Tucker Long


Gráinne de Búrca, Rosalind Dixon, & Marcela Prieto Rudolphy, Engendering the Legal Academy, 22 Int’l J. Const. L. __ (forthcoming, 2023).

Here is some more text.

A PHP 8.1 Test Post

David B. Froomkin & A. Michael Froomkin, Saving Democracy from the Senate, 2024 Utah L. Rev. __ (forthcoming, 2024), available at SSRN.

Large majorities in the United States are frustrated with our national political system1—and for good reason: Congress responds to neither their desires nor their needs.2 Some part of Congress’s non-responsiveness can be characterized as inherent to a system originally designed by a white male elite permeated with a suspicion, if not outright fear, of democracy and the mob. Even today, one can argue as to what extent some structural veto points, brakes on the popular will, might serve long-term democratic interests. But whatever the ideal, we are now far past that point.

When the Constitution was ratified, the United States limited the franchise almost exclusively to white men, and some states retained property qualifications for voting for decades afterward. Since then, the United States has made major strides towards becoming an increasingly representative democracy. Just over a century ago, the U.S. adopted the Seventeenth Amendment, making the Senate directly elected. A few years later, the Nineteenth Amendment gave women the right to vote. The Fourteenth and Fifteenth Amendments, combined with the Voting Rights Act of 1965, first established and then effectuated the rights of Blacks and other minorities to equal access to the polls. The trend towards equal suffrage of all citizens was furthered by the Supreme Court’s reapportionment decisions beginning with Baker v. Carr, which led to the requirement of equal representation by population in elections to the House of Representatives.

Unfortunately, recent years have seen notable backsliding: Congress today suffers from a severe and growing (small-d!) democratic deficit caused primarily by malapportionment of the Senate. (The House’s current difficulties reflect genuine divisions exacerbated by gerrymandering.) Even at its best, the Senate gave disproportionate power to States with smaller populations as a result of the so-called “Connecticut Compromise” under which all states have representational parity regardless of population. Changes in relative population of the states now create an imbalance in voting power well beyond anything intended or imagined by the Framers, beyond what any rational architect of a representative democracy would desire. The result—exacerbated by the filibuster3—is to give blocking power to small, and increasingly radical, national minorities.

The Senate’s deeply undemocratic structure is linked to other worrying tendencies in American government. Public trust in government is at near historic lows. Since 2000, there has been growing recognition of the shortcomings of the Electoral College, highlighted by the 2016 election. Legal scholars also have become increasingly interested in the undemocratic nature of the Supreme Court. Senate malapportionment is a leading contributor to both problems. Indeed, the Senate’s problems are so serious that its countermajoritarian features present an imminent threat to democracy. The federal government’s inability to respond to voter suppression efforts at the state level, due to Senate sclerosis, also endangers the integrity of the democratic process generally.

Unfortunately, the Senate is also the most difficult federal institution to reform, because of a unique constitutional entrenchment clause erecting a high bar to constitutional amendments seeking to alter the Senate’s composition. As if the ordinary Article V amendment process were not daunting enough, an amendment altering the Senate’s composition must receive the consent of any state “deprived of its equal Suffrage in the Senate.”

Legal scholarship has paid too little attention to the distorting effects of the Senate’s fundamental design, even as its consequences have become increasingly serious. Some scholars importantly have recognized the centrality of the Senate in driving constitutional sclerosis, but legal scholarship has devoted little systematic effort to thinking about reform or alternatives. Some scholars have proposed circumventing Article V entirely. But these proposals contradict express constitutional text, making them controversial at best, flatly unconstitutional at worst. Others have proposed Machiavellian workarounds to facilitate an Article V amendment reforming the Senate. But constitutional hardball invites anger and retaliation. Before resorting to it, it is worth investigating systematically the more conventional avenues available for reform.

  1. Even before the January 6 storming of the Capitol, only a minority of U.S. persons surveyed said they were “satisfied with the way democracy is working.” Pew Research Center, Many in U.S., Western Europe Say Their Political System Needs Major Reform (Mar. 31, 2021). From September of 2020 to September of 2022, Gallup found disapproval of Congress to range from 61% to 82%. Congress and the Public, Gallup (2022), https://news.gallup.com/poll/1600/congress-public.aspx. Since 2010, Congress’ yearly approval rating has ranged from 11% to 30%. Claire Miller, Congressional Approval Rating Breakdown, Quorum (Apr. 6, 2022).
  2. Sixty-two percent of voters support a $15 an hour minimum wage. Guy Molyneux, Hart Research Associates, NATIONAL EMPLOYMENT LAW PROJECT, 1 (Feb. 2, 2021).
  3. See, e.g., Ezra Klein, The Senate Has Become a Dadaist Nightmare, N.Y. TIMES (Feb. 4, 2021). Because of the filibuster, Congress has passed little substantial legislation outside of the reconciliation process in over a decade; see also Josh Chafetz, The Unconstitutionality of the Filibuster, 43 CONN. L. REV. 1003 (2011); Gerard N. Magliocca, Reforming the Filibuster, 105 NW. U. L. REV. 303 (2011); Jonathan S. Gould, Kenneth A. Shepsle & Matthew C. Stephenson, Democratizing the Senate from Within (unpublished manuscript), available at https://ssrn.com/abstract=3812526.
Cite as: Michael Froomkin, A PHP 8.1 Test Post, JOTWELL (May 31, 2023) (reviewing David B. Froomkin & A. Michael Froomkin, Saving Democracy from the Senate, 2024 Utah L. Rev. __ (forthcoming, 2024), available at SSRN), https://zetasec.jotwell.com/a-php-8-1-test-post/.

Test after upgrade to PHP 8.1

This is a test citation value.
Beth Tucker Long


This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working. This is just a test to see if the posting is working.


Cite as: Beth Tucker Long, Test after upgrade to PHP 8.1, JOTWELL (March 9, 2023) (reviewing This is a test citation value), https://zetasec.jotwell.com/test-after-upgrade-to-php-8-1/.

Testing Latest Feedreader Update

A. Michael Froomkin, Phillip Arencibia & P. Zak Colangelo-Trenner, Safety as Privacy, not available at SSRN.

New technologies, such as the Internet of Things (IoT) and connected cars, create privacy gaps that can cause danger to people. In prior work, two of us sought to emphasize the deep connection between privacy and safety, in order to lay a foundation for arguing that U.S. administrative agencies with a safety mission can and should make privacy protection one of their goals. This article builds on that foundation with a detailed look at the safety missions of several agencies. In each case, we argue that the agency has the discretion, if not necessarily the duty, to demand enhanced privacy practices from those within its jurisdiction, and that the agency should make use of that discretion.

Even without new privacy legislation, many U.S. agencies tasked with protecting safety could be doing more to protect personal privacy. Examples of agencies with untapped potential include the Federal Trade Commission (FTC), the Consumer Product Safety Commission (CPSC), the Food and Drug Administration (FDA), the National Highway Traffic Safety Administration (NHTSA), the Federal Aviation Administration (FAA), and the Occupational Safety and Health Administration (OSHA). Each of these agencies has a duty to protect the public against threats to safety and thus – as we have argued previously – should protect the public’s privacy when the absence of privacy can create a danger. The FTC’s general authority to fight unfair practices in commerce enables it to regulate commercial practices threatening consumer privacy. The FAA’s duty to ensure air safety could extend beyond airworthiness to regulating spying via drones. The CPSC’s authority to protect against unsafe products authorizes it to regulate products putting consumers’ physical and financial privacy at risk, thus sweeping in many products associated with the IoT. NHTSA’s authority to regulate dangerous practices on the road encompasses authority to require that smart car manufacturers include precautions protecting drivers from hacker interference with control of smart cars and from misuses of connected car data. Lastly, OSHA’s authority to require safe work environments encompasses protecting workers from privacy risks that threaten their physical and financial safety on the job.

Arguably, an omnibus federal statute regulating data privacy would be preferable to doubling down on the U.S.’s notoriously sectoral approach to privacy regulation. Here, however, we say only that until the political stars align for some future omnibus proposal, there is value in exploring methods that are within our current means. It may be only second best, but it is also much easier to implement. Thus, we offer reasonable legal constructions of certain extant federal statutes that would justify more extensive privacy regulation in the name of providing enhanced safety, a regime that, we argue, would be a substantial improvement over the status quo yet would not require any new legislation, just a better understanding of certain agencies’ current powers and authorities. Agencies with suitably capacious safety missions should take the opportunity to regulate to protect relevant personal privacy without delay.

Cite as: Michael Froomkin, Testing Latest Feedreader Update, JOTWELL (July 13, 2021) (reviewing A. Michael Froomkin, Phillip Arencibia & P. Zak Colangelo-Trenner, Safety as Privacy, not available at SSRN), https://zetasec.jotwell.com/testing-latest-feedreader-update/.

Testing of Version 5.6

Test Author, The Wages of Testing, 82 Unyale L.J. 920 (1973).

Just over twenty years ago I gave a talk to the AALS called The Virtual Law School? Or, How the Internet Will De-skill the Professoriate, and Turn Your Law School Into a Conference Center. I came to the subject because I had been working on Internet law, learning about virtual worlds and e-commerce, and about the power of one-to-many communications. It seemed to me that a lot of what I had learned applied to education in general and to legal education in particular.

It didn’t happen. Or at least, it has not happened yet. In this essay I want to revisit my predictions from twenty years ago in order to investigate which were wrong and which were just premature. The massive convulsion now being forced on law teaching due to the social distancing required to prevent COVID-19 transmission presents an occasion in which we are all forced to rethink how we deliver law teaching. After discussing why my predictions failed to manifest before 2020, I will argue that unless this pandemic is brought under control quickly, the market for legal education may force some radical changes on us—whether we like it or not, and that in the main my earlier predictions were not wrong, just premature.

Back in 2000, I started my talk with hard truths which are no longer controversial but perhaps were not entirely fit for polite company when the law teaching industry was still very much in a go-go growth mentality: First, that law teaching is a business, therefore legal education has to worry about the bottom line. Second, that at least a private law school—which is where I found and still find myself—is just one of many ‘product lines’ for a private university. Third, that at least from the University’s point of view the law school is a “profit center.” And, fourth, that, as businesses go, law schools were a business with great structural problems.

Even in 2000 it was obvious that costs were out of control. Law school tuition was going up every year. Students were beginning to experience sticker shock and were graduating with ever higher levels of debt. Indeed, for the first time law schools were beginning to experience some customer resistance on price, and this was leading to discounting competition between law schools. Meanwhile, student attitudes towards educational institutions were changing, leading to the rise of what came to be called the ‘consumer mentality’ in place of the ‘student mentality.’ And students from Harvard onward began to complain about the perceived hostility and unfriendliness of many law schools.

Law schools were also feeling pressure on the other end, from legal employers. The publication of the MacCrate report in 1992 inaugurated a new era of employer and clinician focus on skills training as opposed to traditional doctrinal courses. Whatever its merits, one thing was clear about most skills training: it was more expensive, because it required a considerably higher instructor-to-student ratio than ordinary teaching. So law schools were being squeezed from both sides: by complaints about the cost of attendance and by rising expenses.

Distance learning seemed to be an answer. By the year 2000 it was increasingly widely used in higher education, notably by the Open University in the UK and London University, which offered law degrees by distance learning. In the United States, many undergraduate education programs offered some distance learning courses, and the Western Governors University was marketing itself as “Virtual U.” New York University’s School of Continuing Education unveiled what it called a “virtual college.” And Concord University School of Law, created by Kaplan, Inc., was in fact offering degrees only online, although with far from a full selection of courses, and without ABA accreditation.

Few law schools had gone anywhere near so far, but many were availing themselves of online resources to enhance their programs, whether posting online syllabi, using email and chat rooms as an extension of the classroom, or linking extension campuses to the main campuses for remote participation. And of course I and others were teaching classes about the Internet.

Some law school pioneers were taking things further, such as by offering the same course to several schools at once. Peter Martin taught the same course at four law schools simultaneously. But early attempts to share curricular offerings ran into a number of problems. For example, law schools use different calendars, different grading systems, and different curving systems. Thus, attempts to grade students together created equity issues.

Despite these teething problems it seemed to me that the movement towards online teaching would only accelerate. One needed, I thought, to make only two assumptions: first that the technology would continue to improve quickly (which indeed the hardware did, if not so much the software), and second that the ABA would not be able to stand in the way—at least not for long.

There were a number of reasons to believe that the ABA would not be able to hold back the tide of distance learning. Already, ABA accreditation rules, long designed to hold the line against the dreaded correspondence-course-based law school, were beginning to change. More grandly, in a period where it seemed the Internet was going to change everything, and in which ABA accreditation rules had an uncanny resemblance to an anti-trust violation, it seemed possible to envision a radically different learning model. Perhaps law teaching could be rethought internet-style? Instead of having the ABA approve only degree-granting institutions which took responsibility for the bulk of a student’s teachings, and for setting degree requirements, we could imagine the growth of law schools that did little or no teaching of their own, but instead had students mix and match courses delivered by more traditionally accredited schools. The degree-granting body would set how many credits students would need to get a J.D., set distribution requirements, and perhaps set some rules about which courses, or which school’s courses, would count. Students would then assemble their own course lists, picking from the best on offer nationwide.

Cite as: Michael Froomkin, Testing of Version 5.6, JOTWELL (January 6, 2021) (reviewing Test Author, The Wages of Testing, 82 Unyale L.J. 920 (1973)), https://zetasec.jotwell.com/testing-of-version-5-6/.

Testing the Intro ParaLimit

A. Michael Froomkin, The Virtual Law School 2.0, __ J. Legal Ed. __ (forthcoming 2021).

Just over twenty years ago I gave a talk to the AALS called The Virtual Law School? Or, How the Internet Will De-skill the Professoriate, and Turn Your Law School Into a Conference Center. I came to the subject because I had been working on Internet law, learning about virtual worlds and e-commerce, and about the power of one-to-many communications. It seemed to me that a lot of what I had learned applied to education in general and to legal education in particular.

It didn’t happen. Or at least, it has not happened yet. In this essay I want to revisit my predictions from twenty years ago in order to investigate which were wrong and which were just premature. The massive convulsion now being forced on law teaching due to the social distancing required to prevent COVID-19 transmission presents an occasion in which we are all forced to rethink how we deliver law teaching. After discussing why my predictions failed to manifest before 2020, I will argue that unless this pandemic is brought under control quickly, the market for legal education may force some radical changes on us—whether we like it or not, and that in the main my earlier predictions were not wrong, just premature.

Back in 2000, I started my talk with hard truths which are no longer controversial but perhaps were not entirely fit for polite company when the law teaching industry was still very much in a go-go growth mentality: First, that law teaching is a business, therefore legal education has to worry about the bottom line. Second, that at least a private law school—which is where I found and still find myself—is just one of many ‘product lines’ for a private university. Third, that at least from the University’s point of view the law school is a “profit center.” And, fourth, that, as businesses go, law schools were a business with great structural problems.

Even in 2000 it was obvious that costs were out of control. Law school tuition was going up every year. Students were beginning to experience sticker shock and were graduating with ever higher levels of debt. Indeed, for the first time law schools were beginning to experience some customer resistance on price, and this was leading to discounting competition between law schools. Meanwhile, student attitudes towards educational institutions were changing, leading to the rise of what came to be called the ‘consumer mentality’ in place of the ‘student mentality.’ And students from Harvard onward began to complain about the perceived hostility and unfriendliness of many law schools.

Law schools were also feeling pressure on the other end, from legal employers. The publication of the MacCrate report in 1992 inaugurated a new era of employer and clinician focus on skills training as opposed to traditional doctrinal courses. Whatever its merits, one thing was clear about most skills training: it was more expensive, because it required a considerably higher instructor-to-student ratio than ordinary teaching. So law schools were being squeezed from both sides: by complaints about the cost of attendance and by rising expenses.

Distance learning seemed to be an answer. By the year 2000 it was increasingly widely used in higher education, notably by the Open University in the UK and London University, which offered law degrees by distance learning. In the United States, many undergraduate education programs offered some distance learning courses, and the Western Governors University was marketing itself as “Virtual U.” New York University’s School of Continuing Education unveiled what it called a “virtual college.” And Concord University School of Law, created by Kaplan, Inc., was in fact offering degrees only online, although with far from a full selection of courses, and without ABA accreditation.

Few law schools had gone anywhere near so far, but many were availing themselves of online resources to enhance their programs, whether posting online syllabi, using email and chat rooms as an extension of the classroom, or linking extension campuses to the main campuses for remote participation. And of course I and others were teaching classes about the Internet.

Some law school pioneers were taking things further, such as by offering the same course to several schools at once. Peter Martin taught the same course at four law schools simultaneously. But early attempts to share curricular offerings ran into a number of problems. For example, law schools use different calendars, different grading systems, and different curving systems. Thus, attempts to grade students together created equity issues.

Despite these teething problems it seemed to me that the movement towards online teaching would only accelerate. One needed, I thought, to make only two assumptions: first that the technology would continue to improve quickly (which indeed the hardware did, if not so much the software), and second that the ABA would not be able to stand in the way—at least not for long.

There were a number of reasons to believe that the ABA would not be able to hold back the tide of distance learning. Already, ABA accreditation rules, long designed to hold the line against the dreaded correspondence-course-based law school, were beginning to change. More grandly, in a period where it seemed the Internet was going to change everything, and in which ABA accreditation rules had an uncanny resemblance to an anti-trust violation, it seemed possible to envision a radically different learning model. Perhaps law teaching could be rethought internet-style? Instead of having the ABA approve only degree-granting institutions which took responsibility for the bulk of a student’s teachings, and for setting degree requirements, we could imagine the growth of law schools that did little or no teaching of their own, but instead had students mix and match courses delivered by more traditionally accredited schools. The degree-granting body would set how many credits students would need to get a J.D., set distribution requirements, and perhaps set some rules about which courses, or which school’s courses, would count. Students would then assemble their own course lists, picking from the best on offer nationwide.

Cite as: Michael Froomkin, Testing the Intro ParaLimit, JOTWELL (August 27, 2020) (reviewing A. Michael Froomkin, The Virtual Law School 2.0, __ J. Legal Ed. __ (forthcoming 2021)), https://zetasec.jotwell.com/testing-the-intro-paralimit/.