
Disclaimers and Family Settlement Agreements as Possible Solutions to Election Out and Document Construction Problems

S. Alan Medlin, F. Ladson Boyle, and Howard M. Zaritsky, 2010: It Was A Very Good Year . . . To Die—Or Was It?, 45 Real Prop. Tr. & Est. L.J. 589 (2011).

In this comprehensive article, the authors address the effects of Congress’ reinstatement, on December 17, 2010, of the estate tax and the generation-skipping transfer tax. The authors first analyze how the reinstatement presents certain election out and document construction problems, and then they propose disclaimers and family settlement agreements as possible solutions.

The authors identify two election out problems. First, tax-sensitive language in documents may be difficult to interpret because estate or GST taxes may not have been applicable on the date of the decedent’s death in 2010, possibly even without regard to any retroactivity. (P. 592.) Second, the personal representative of the estate of a decedent who died in 2010 must decide whether to elect out of the estate tax regime (and therefore into the carryover basis regime for income tax purposes) or to allow the default estate tax regime to apply. (P. 592.) The tax results under both scenarios must be compared, including reviewing the “calculation of the net appreciation in each asset, the character of the gain on the sale of each asset, the tax rate applicable to the gain on the sale of each asset, when each asset is likely to be sold and whether tax benefits exist that might reduce the tax on such sales, and how the modified carryover basis rules will apply to these assets” as well as related factors such as passive losses and partnership interests. (P. 595.)

In deciding whether to elect out, a personal representative may face a conflict with beneficiaries because “the personal representative of an estate owes both a duty of fairness and impartiality towards all of the beneficiaries of the estate and a duty to conserve the estate, including a duty to minimize taxes.” (P. 595.) For example, if a specific gift (which generally bears estate taxes only if the residuary is insufficient) consists of an appreciated asset, its beneficiary may prefer that the estate tax regime apply rather than being subject to income tax on the gain on sale of the appreciated asset. (P. 596.) A second example occurs when equitable apportionment applies such that, for example, a surviving spouse receiving substantially appreciated assets (under a distribution that does not bear any estate taxes) may object to an election out of the estate tax regime even though the estate may benefit from the election out. (PP. 596-597.) Yet another conflict may arise when the decedent’s documents address the possibility of estate and GST tax repeal. (P. 597.) If a document provides different dispositions depending upon whether the personal representative elects out of the estate tax regime, then a personal representative who is a family member of the decedent also has a duty of loyalty to the estate and must prioritize the estate’s interests over his or her own. (P. 597.) Also, the personal representative’s conflict of interests “may be more serious if a 2010 ‘patch’ statute has been enacted.” (P. 598.) The authors suggest that personal representatives facing these conflicts provide detailed analyses to the beneficiaries and then seek the beneficiaries’ consent for all actions (or non-actions); if consent is not obtained, the authors propose seeking a court order approving the personal representative’s actions (or non-actions). (PP. 597-599.)
Finally, as to the election out decision, the authors note that courts may require, or at least permit, a personal representative to adjust the shares of the various beneficiaries of an estate “when an election clearly works a disproportionate disadvantage.” (P. 599.) The failure to make an equitable adjustment “may create income and transfer tax consequences for an estate or beneficiary.” (P. 602.)

The authors’ document construction problems include interpreting such terms as “unified credit, the applicable credit amount, the applicable exclusion amount, the exemption equivalent, the optimal marital deduction, the maximum marital deduction, or the GST exclusion” because those terms “had no meaning in 2010 until December 17.” (PP. 592-593.) These construction problems remain “despite the 2010 Tax Act” as follows: First, “the usual rule for construing a testator’s intention applies the law in effect at the date of death,” so the authors posit that, “If the will or revocable trust of a 2010 decedent contained tax-sensitive terms but no express provisions about allocating the estate if no estate or GST taxes existed at the time of death, then the retroactive application of the 2010 Tax Act arguably should not cure property allocation construction issues because no estate or GST taxes existed at the time of death.” (P. 593.) Second, the “construction problem clearly exists for estates of 2010 decedents whose personal representatives elect out of the retroactive estate tax,” and, third, the construction problem may arise when certain documents contain “alternative dispositive provisions based on whether the estate is subject to estate or GST tax.” (P. 594.)

As to the foregoing problems, the authors propose disclaimers and family settlement agreements as possible solutions. For a decedent’s will leaving a nonmarital or bypass trust an amount equal to the decedent’s applicable exclusion amount and the residue to the surviving spouse, the authors posit that, when there is no “applicable exclusion amount” such that all property goes to the surviving spouse, a disclaimer may be appropriate to fund the nonmarital or bypass trust. (PP. 611-612.) For a will leaving to the decedent’s adult children an amount equal to the most that can pass free from federal estate taxes and the balance outright to the decedent’s surviving spouse, the authors posit that, if all property goes to the adult children, a disclaimer may be appropriate to fund the marital share. (P. 613.) The authors provide other possible disclaimers and address the attendant tax, timing, family harmony, and state law issues. (PP. 613-618.) Family settlement agreements may also address election out and construction problems because the beneficiaries agree to the settlement of an estate. (P. 618.) Any agreement, though, may have adverse tax consequences and may raise some risks under Commissioner v. Estate of Bosch. (PP. 627-634.)

This summary obscures the depth and breadth of the article. The authors’ discussion of the fiduciary duties of a personal representative in the election out decision is comprehensive, as is the analysis of possible income and transfer tax consequences for both actions and non-actions by a personal representative. The discussion of document construction problems is particularly thought-provoking, explicitly and implicitly addressing both the advantages and disadvantages of document language mirroring terms in the Internal Revenue Code. Finally, disclaimers and family settlement agreements, if the authors’ concerns about them are overcome, provide great solutions to the election out and document construction problems impressively discussed by the authors.


How To Regulate the Legal Services Market? Starting From First Principles.

Christopher Decker & George Yarrow, Understanding the Economic Rationale for Legal Services Regulation, A Report for the Legal Services Board (Regulatory Policy Institute, 2010).

Dr. Christopher Decker and Professor George Yarrow are economists at the Regulatory Policy Institute, Oxford, who were commissioned to consider the “case for regulation” and the role of professions in the legal services market in the UK. Their report appears at a time when the professions in England and Wales are in the midst of a quiet revolution, precipitated by the Legal Services Act 2007 (LSA). The Act places a range of professional groups, from the mainstream solicitors and barristers to the more esoteric trade mark and patent agents, under the purview of the Legal Services Board (LSB), an “oversight regulator.” This means that the professions retain a large measure of regulatory control, over ethics and education for example, but that they, and the LSB, must pursue statutory objectives.

While much of the theory that Decker and Yarrow refer to is familiar to scholars of the legal professions, in Rick Abel’s work for example, it is valuable for scholars of professions and legal services to see the argument through the prism of another discipline. The report is accessible to those without an economics background and might therefore provide a better foundation for dialogue between lawyers, economists and others than presently exists. This potential to stimulate debate is not purely parochial. Although the report uses examples of the practices of the English professions, the general approach is an “in principle” analysis of the rationale for regulation. Such a study might undermine the basis of legal professionalism, but it might also doubt the rationale for regulation per se, even public regulation by an oversight regulator. Decker and Yarrow do not disappoint in this regard, but also point to the limits of economic analysis in answering the questions they were posed.

The LSB has statutory duties that appear to compete, for example promoting consumer interests and a strong and independent legal profession. In the two years it has been operating, the LSB has worked hard to navigate this difficult terrain. While economics informs other analyses of legal services provisions in England and Wales, it is unusual for regulators to fund theoretical projects. Commissioning a report that examines the economic rationale for regulation, that questions fundamental assumptions, is symptomatic of its open approach.

A particularly impressive feature of the report is a willingness to concede ambiguities and limits of economic analysis. Early in the report Decker and Yarrow concede that most of the economic analysis of legal professions and legal markets is in the tradition of modern neo-classical economics (MNC). This approach theorizes an efficient market equilibrium achieved by perfect competition. One of the features of the model is that any situation that does not conform to the unrealistic assumptions of the model is labelled a “market failure.” This, the authors suggest, is a common analytic misunderstanding. Rather, because MNC is an abstract theory, scarcely applicable to concrete situations, it has limited explanatory power.

While MNC has severe limitations, it is the foundation of public interest theories of regulation, which assume a capacity to correct market failures. In practice, no such capacity exists. Decker and Yarrow argue that “market failures” such as information asymmetries between suppliers and consumers are too difficult or expensive to eradicate completely. In fact, they suggest reputation, one of the main priorities of professions and professional firms, is one of the most effective means of mitigating the impact of information asymmetry. Therefore, distinctions such as Queen’s Counsel (QC), awarded to elite advocates, are ambiguous. Challenged by the UK competition authorities as a restrictive practice, the QC fudge could also be seen as a reputational indicator helpful to consumers. For example, see Joel Podolny’s sociological study of market competition.

Decker and Yarrow’s report suggests that the regulatory experiment in the UK is largely built on political conviction rather than economic facts. In fact, although they find no evidence of cartelization or other anticompetitive practices among UK lawyers, the assumption of regulatory policy in the UK is that professions, left to their own devices, will favor their own members over members of the public. Indeed, Decker and Yarrow point out that changing regulation is often a way of shifting market advantage from one group to another.

The rights or wrongs of government intervention in the legal services market may not be the main point of Decker and Yarrow’s report. Rather, they are considering whether any regulation of the market is necessary. Their starting point for an affirmative answer is that the provision of legal aid points to legal services being a necessary “social good,” and therefore, like public utilities, worthy of regulatory effort. This analogy, however, immediately breaks down. Legal services, unlike gas or electricity, are not capable, for example, of being organized so as to facilitate cross subsidy. Large commercial firms at one end of the market make huge profits while legal aid firms at the other are, increasingly, going out of business.
The decline of the legal aid sector in the UK points to the central problem in legal services regulation. Government no longer trusts that legal services suppliers (lawyers, as we used to call them) will take only enough from transactions with consumers to ensure efficient supply. Although there is scant economic evidence that lawyers take more than they need to run an efficient service, it is assumed that they do so, contrary to consumer interests. The regulatory oversight attempted by the Legal Services Act will either ensure that they are “more efficient,” or admit others (non-lawyers) to the market. This is the entry point for Alternative Business Structures, also to be introduced under the LSA, but not a phenomenon dealt with at length by Decker and Yarrow.

Those who are interested in reading the report should note the availability of a supplementary collection of essays, many of which contribute valuable points and important perspectives on the report. The great value of Decker and Yarrow’s report is that it authoritatively supports, and questions, many of the economic assumptions on which the emerging regulatory philosophy of ideal competition rests. As it is, a strong possibility is that legal services in the UK will become progressively de-regulated, albeit with a transitional period of public regulation. This, as Decker and Yarrow show, would be to replace one flawed system of regulation with another. Before that happens, other, more nuanced and empirical studies will, it is to be hoped, test the assumptions of the economic model, illuminating the theoretical shadows identified in this report.

Regulating Cyberspace: Can Online Ever Equal Offline?

Chris Reed, Online and Offline Equivalence: Aspiration and Achievement, 18 Int’l J.L. & Info. Tech. 248 (2010).

Works of pure theory in Anglophone European internet law scholarship are fairly rare, and those that exist often come from scholars whose background is in a field other than traditional law, e.g. sociology, politics or criminology. While some of this work is excellent, it may lack a full understanding both of the nuances of legal analysis and the realities of commercial legal culture. For all these reasons, it is to be warmly welcomed that in what one might call the second stage of his distinguished career, Chris Reed, one of Europe’s leading researchers into the more commercial and practical aspects of internet law, has decided to turn his years of experience in helping both draft and critique European internet and e-commerce laws towards theorising how to regulate for the on-line world, in the form of a series of pieces which so far include Taking Sides on Net Neutrality, The Law of Unintended Consequences–embedded models in IT regulation and more recently, How to Make Bad Law: Lessons from Cyberspace. The latest of these pieces (which are destined eventually to form a book on regulation, I believe)1 appeared in late 2010 and takes on the near cliché of internet law that “what is legal offline should also be legal online,” or more formally, the principle of equivalence. While it is something of a kneejerk assumption in many domains, notably freedom of speech, that this approach is axiomatically mandatory, Reed dissects the desirability, applicability and most interestingly perhaps, the failures of the principle in the context of the history of (mainly European) internet regulation.

Reed defines equivalence as a starting point as “an approach in which all laws and regulations should, so far as possible, be equivalent online and offline. In other words, the same legal principles should regulate an online technology activity as those which applied to the equivalent offline technology activity.” Reed’s first point is that this should not be confused with the similarly popular notion of technology neutrality. “Technology neutrality addresses the choice between the available substantive rules which could be used to implement … legal principles,” while equivalence, in his view, is about choosing those legal principles for regulating the online world in the first place. Equivalence therefore takes precedence in the regulatory toolkit and is arguably the more important issue to get right. Reed also muses as to whether a distinction is needed between “technology indifference”–which is an “attempt … to define a rule in such a way that it applies equally well to the activity whatever technology is used to undertake it”–and a concept he does not name but which I will call technology non-discrimination, which is “a legislative aim that the rules should not discriminate between technologies and should continue to apply effectively even if new technologies are developed.” A good example of problematic regulation which might have been elucidated by applying these concepts lies in the recent controversial redrafting of the part of the EU Privacy and Electronic Communications Directive dealing with cookies (art. 5(3)), where despite frequent claims to technology-neutrality the results have been nothing of the kind either initially or after reform.

Returning to equivalence though, Reed makes a cogent distinction between “pure” equivalence and “result” equivalence (a concept which, one might hazard, seems drawn partly from feminist legal theory and partly from the comparative law doctrines of, e.g., Zweigert and Kötz). Applying the exact same rules on and offline will often simply produce a mess, given the huge differences in the environment–one of the best examples being the attempt in early jurisprudence to map ISPs and hosts of unlawful defamatory material to newspaper or TV publishers with consequent full liability. Instead, Reed points us towards “functional equivalence,” where the idea is to get the new online rule right by making sure that, even if formally or even substantively quite different, it achieves the same result online as offline. This raises the further problem that, in Reed’s view, “equivalence” is often most neatly met by having one rule for both online and offline activities, with the practical result of a need to revise (and generalise?) the offline rule to cover both domains. If the rule brings in entirely new regulation, this may be politically plausible–Reed’s example is the UK Terrorism Act 2006, which introduced the new offence of disseminating terrorist publications, and applied it to both hard copy and electronic versions simultaneously–but in other cases it may require political will or judicial happenstance and may never or only very slowly happen. One success story Reed cites is the adaptation of the English common law of fraud by the UK Fraud Act 2006 to deal with the problem that in online fraud the fraudster rarely knows the mental state of his victim (the offence was redrafted to pivot solely on the intention of the fraudster). This however depended on funded law reform by the English Law Commission–whose time and resources are finite (as are, one imagines, those of similar national bodies). Such holistic reform may simply often not be possible.

But the biggest problem with “functional equivalence” is how to define what is functionally the same scenario to be regulated. This is hard enough offline: online, it is a corker. Reed highlights one of the most obvious problems, that of categorisation. Is a search engine, for example, a piece of essential infrastructure, like water or gas supplies; a distributor of electronic content like an ISP or host; an intentional recopier of copyright material, possibly without permission of the rightsholder; a publisher like a newspaper with the freedom of speech privileges that implies; or the virtual equivalent of a physical trespasser? Get this wrong (as the Belgian courts notoriously did in Copiepresse) and you have a scenario where the internet disappears in a deluge of unparseable material and the digital society vanishes. One sleight of hand Reed doesn’t mention is to avoid the categorisation problem by explicitly regulating only functions, not who undertakes them. This is largely what the DMCA and the EU E-Commerce Directive do to deal with the problems of online intermediary liability: a strategy that has led to much testing of limits, yes, and of course essentially passes the buck back to the courts (cf. Napster, Grokster, Google Adwords, L’Oreal v eBay at the ECJ, et al) but at least has had some durability about it.

But sometimes there simply is no functional equivalent between the online and offline worlds. What then? How can we tell when equivalence simply won’t work? Here Reed’s analysis does falter. In his view, for example, there are “no major theoretical obstacles” to regulating copyright online and offline by “equivalent” rules (P. 269): the problem is a procedural not a substantive one, namely the restraining influence of international treaties preventing states from going it alone with their own most appropriate solutions (a bit like Greece being stuck in the Eurozone). This writer would beg to differ: one of Reed’s own criteria for applying equivalence is that there is a balance of interests among stakeholders which can be identified offline and mirrored online–it is hard to see how this is possible in the current online content wars, where balances are entirely skewed from the offline by easy copying, easy distribution, anonymity and encryption (to name but a few factors). But these cavils aside, this is a rare and enormously useful primer on how to regulate for the internet–one wishes some elected representatives could be forced to read a copy.


  1. Usefully, Reed has also posted on his blog his use of these pieces in teaching a coherent course on Internet law and regulation, along with extra conclusions and slides.

A New Solution to an Old Problem: Section 1447(d) and Appellate Review of Remand Orders

It may not be the most headline-grabbing issue on the Supreme Court’s docket. But it has occupied more of the Court’s attention during the past half-decade than abortion, affirmative action, the Commerce Clause, or the Second Amendment. It is 28 U.S.C. § 1447(d)’s command that “[a]n order remanding a case to the State court from which it was removed is not reviewable on appeal or otherwise.” This apparent ban on appellate review has generated an awkward line of cases, beginning with Thermtron Products v. Hermansdorfer in the 1970s, which struggle to determine when § 1447(d) “means what it says.” In the Court’s most recent decisions on the issue, several Justices have penned separate opinions voicing their frustration with current doctrine. Enter Jim Pfander and his recent article Collateral Review of Remand Orders: Reasserting the Supervisory Role of the Supreme Court. Pfander expertly diagnoses what is wrong with the jurisprudence surrounding § 1447(d) and, more importantly, offers a new solution to this long-standing puzzle.

Here is the crux of the dilemma: the text of § 1447(d) forbids appellate review of a district court order remanding a case to state court. Period. Full stop. No exceptions. In Thermtron, however, the Court circumvented this ban on review by reading § 1447(d) as applying only to remands based on grounds specified in § 1447(c). The Thermtron exception is hard to justify as an interpretive matter given the text of § 1447(d). Perhaps more troublingly, it is functionally misguided. It means that § 1447(d) does forbid an appeal if the remand is based on a lack of federal subject-matter jurisdiction—a ground that is specified in § 1447(c)—even though the scope of federal subject-matter jurisdiction can be a very significant issue, both for the parties to a particular case and for our judicial system as a whole. Yet Thermtron permits review for issues of far less significance and impact—such as a district court’s discretionary decision whether to remand state law claims after all federal claims have been resolved—because such remands are not governed by § 1447(c). The problem has been compounded, as Pfander points out, by the Supreme Court’s holding in Quackenbush v. Allstate that a remand order was a “final decision” for purposes of 28 U.S.C. § 1291. While Thermtron contemplated that remand orders qualifying for its judicially-created exception to § 1447(d) would still have to meet the heightened showing required for a writ of mandamus, Quackenbush has been read to make such orders appealable as of right.

Here is Pfander’s solution. Section 1447(d) would be enforced, without exception, to prevent direct appellate review of all district court remand orders. But § 1447(d) would not prevent the Supreme Court from “exercising powers of supervisory oversight conferred in the All Writs Act.” (P. 499.) Therefore, notwithstanding § 1447(d), a party may petition the Supreme Court for leave to file an “original” writ of mandamus or prohibition challenging a district court’s order remanding a case to state court.

There is much to be said for Pfander’s proposal. It avoids the current doctrine’s paradoxical result that appellate review is required in cases where the district court’s decision (in Justice Breyer’s words) “is unlikely to be wrong and where a wrong decision is unlikely to work serious harm,” yet review is forbidden “where that decision may well be wrong and where a wrong decision could work considerable harm.” Under Pfander’s approach, the Supreme Court could focus on those remand orders for which review is most pressing, based on “the significance of the error and the importance of its correction.” (P. 515.)

The biggest textual obstacle to Pfander’s solution is § 1447(d)’s command that a remand order “is not reviewable on appeal or otherwise” (emphasis added). Arguably, the Supreme Court’s use of an original writ would be an example of “otherwise” reviewing a remand order, and so would be equally foreclosed by § 1447(d). But Pfander has compelling responses to this objection. Decisions as old as Ex parte Yerger (1869) and as recent as Felker v. Turpin (1996) reflect a presumption against implied statutory repeals of the Supreme Court’s supervisory authority. Pfander also provides a thorough historical discussion confirming that “the restriction in § 1447(d) was aimed at review conducted by the intermediate courts of appeals and did not affect the Supreme Court’s all-writs authority.” (P. 522.)

Commentators have long suggested that the ideal solution to all this would be to amend § 1447(d)—either by legislation or by federal rulemaking—to codify a more sensible approach to review of remand orders. But wishing it has not made it so. Thus, this remains an issue for the Supreme Court to confront in the next round of § 1447(d) cases. To that end, Pfander’s article is a must-read not only for civil procedure and federal courts scholars, but for practitioners as well. For litigants wishing to block an appeal of a remand order that would currently qualify for a Thermtron-inspired exception to § 1447(d), Pfander’s critique of current doctrine is likely to find a receptive ear with Justices who are skeptical of such exceptions and may finally be ready to change course. For litigants seeking to challenge a remand order for which § 1447(d) still means what it says, Pfander provides the roadmap for invoking the Supreme Court’s supervisory authority via an original writ.

What will happen next is anybody’s guess. Until we get there, members of the academy, the bar, and the judiciary should give Pfander’s proposal a very serious look.

Stock Issuances and Managerial Agency Costs

Mira Ganor, The Power to Issue Stock (2011), available at SSRN.

Every state corporation statute authorizes the board of directors to issue stock. While one could imagine arguments for allocating this authority to the shareholders, the board of directors is better positioned to respond quickly to financing needs or to provide stock as a motivation for employees. Nevertheless, whenever the board of directors is given an important power, we must be attentive to the potential for abuse. In her new article, The Power to Issue Stock, Mira Ganor reveals various ways in which directors may pursue their own interests at the expense of a majority of the shareholders or thwart the veto power of minority shareholders through the issuance of stock.

Stock issuances are important in Ganor’s account of corporate governance because of the possibility of voting dilution, which occurs when an existing shareholder owns a smaller ownership interest after a new stock issuance. For example, assume that an investor owned one million shares of common stock in Company A, equal to a 25% ownership interest (i.e., the investor owned one million of four million shares outstanding). If Company A subsequently proposed to sell another one million shares to a new investor, the existing investor would see her ownership interest decline from 25% to 20% (she would own one million of five million shares outstanding).
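The dilution arithmetic in the example above can be sketched in a few lines (a hypothetical illustration of the numbers in the text; the function name is mine, not Ganor’s):

```python
def ownership_after_issuance(shares_held, shares_outstanding, new_shares):
    """Fraction of the company an existing holder owns after a new
    issuance in which she does not participate."""
    return shares_held / (shares_outstanding + new_shares)

# Company A example: 1 million of 4 million shares (25%) falls to 20%
# after 1 million new shares are sold to a new investor.
stake_before = ownership_after_issuance(1_000_000, 4_000_000, 0)
stake_after = ownership_after_issuance(1_000_000, 4_000_000, 1_000_000)
```

Note that the holder’s share count is unchanged; only the denominator grows, which is why the harm here is to voting power rather than to the number of shares held.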

Recognizing this risk of dilution, corporations (especially privately held corporations) sometimes place constraints on the power to issue stock to reassure prospective investors. For example, the number of authorized shares in the corporate charter may be limited, or the existing investors may have veto rights or preemptive rights, which would allow them to maintain their ownership interest. In addition, public corporations may be subject to stock exchange listing requirements, which require shareholder approval for new stock issuances exceeding 20% of the outstanding shares.

Despite these potential constraints on the power to issue stock, most publicly held corporations grant the board of directors a great deal of discretion in this area, and boards frequently use that discretion for control purposes. The most familiar example of an issuance motivated by control is the poison pill, which is employed by managers to resist hostile takeovers. Another example is the top-up option, which has become an important mechanism used by managers to facilitate two-step mergers by a favored bidder. A top-up option gives bidders who acquire a specified percentage of the target company–usually 50%–the option to purchase enough newly issued shares of the target company to reach 90% of the outstanding shares. At that level of ownership, the bidder is allowed to consummate a short-form merger, which does not require a shareholder meeting or a vote of the minority shareholders. Ganor describes the details of the purchase as follows:

Once the bidder exercises the top-up option, she needs to buy the new shares from the company and pay for these shares the same price that she paid in the tender offer. A lower price will not represent a fair market price and may be easily challenged since the tender offer price establishes a fair market price for the shares. [A] large number of shares is issued when the top-up option is exercised, hence the consideration that the bidder should pay the company for these shares is substantial. However, the consideration for the shares can be, and often is, paid with an unsecured note except for a small part, which represents the par value of the shares. Following the short form merger, the unsecured note issued in exchange for the shares is nulled, because after this merger the holder of the note is combined with the issuer of the note and they become one.
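The scale of the issuance Ganor describes follows from simple algebra: to lift a bidder holding s of n outstanding shares to 90%, the company must issue x new shares satisfying (s + x)/(n + x) ≥ 0.9, which rearranges to x ≥ 9n − 10s. A back-of-the-envelope sketch of my own (not from Ganor’s article), with hypothetical share counts:

```python
def top_up_shares_needed(outstanding, bidder_shares):
    """Minimum newly issued shares x satisfying
    (bidder_shares + x) / (outstanding + x) >= 0.90,
    which rearranges to x >= 9*outstanding - 10*bidder_shares."""
    return max(0, 9 * outstanding - 10 * bidder_shares)

# A bidder at exactly 50% of 100M outstanding shares needs 400M new
# shares -- four times the original float -- to reach the 90% threshold,
# which is why the consideration owed to the company is "substantial."
needed = top_up_shares_needed(100_000_000, 50_000_000)
```

The closer the tender offer leaves the bidder to 50%, the more explosive the required issuance, since each percentage point short of 90% must be made up out of a pool that itself dilutes the bidder’s holding.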

Dissenting shareholders may pursue an appraisal remedy after a short-form merger, but their ability to stop the merger seems rather limited. In a case involving a top-up option in the acquisition of Cogent, Inc. in 2010, In re Cogent, Inc. Shareholder Litigation, the Delaware Court of Chancery denied a request for an injunction, reasoning that the harm from the top-up option was too speculative. The plaintiffs in Cogent argued that the top-up option was a sham transaction because the note offered in consideration of the option shares was “illusory consideration,” but Vice-Chancellor Parsons was deferential to the board of directors, concluding that the Delaware code “leaves the judgment as to the sufficiency of consideration received for stock to the conclusive judgment of the directors, absent fraud.”

Top-up options provide an excellent illustration of the agency problems that may arise from the power to issue stock. The most original and important contribution of this article is Ganor’s attempt to capture the potential for abuse with the “excess-ratio,” which is the ratio of authorized non-outstanding shares to the issued and outstanding shares. Ganor observes:

[A]n excess-ratio of one signifies that there are enough authorized but not outstanding shares to double the number of shares already issued and outstanding. The stock exchanges’ requirement of shareholder approval for an increase of more than 20% of the issued share is equivalent to a 0.2 excess-ratio; and the German limit of 50% can be expressed as a 0.5 excess-ratio.

Ganor concludes her paper with some limited empirical evidence on the excess-ratios of non-financial companies incorporated in Delaware that have completed an initial public offering in the United States. While the ratios seem high–with reported means in excess of 5 and reported medians typically between 3 and 4–Ganor found no meaningful correlations between the ratio and firm size or between the ratio and the likelihood of acquisition.

This paper focuses our attention on an aspect of director power that is rarely acknowledged in the vast literature on managerial agency costs. Ganor offers useful descriptions of the manner in which the power to issue stock can be problematic, and she takes the first step toward systematically analyzing that power.

Playing by the Rules

Mitchell N. Berman, “Let ’em Play”: A Study in the Jurisprudence of Sport, 99 Geo. L.J. (forthcoming 2011).

What does sport have to do with jurisprudence? Not a great deal, one might think. To be sure, particular sports, like legal systems, are rule-governed practices. This commonality and the relative simplicity of sports make them useful as a source of examples that might be deployed to explain more complex legal-theoretical ideas.

Philosophers of law and legal theorists commonly use sports examples in just this way. Most famously, H.L.A. Hart used examples from games and sport both in criticizing other views about the nature of law and in clarifying his own distinctive view. In his critique of Austin’s command theory of law, for example, Hart invoked the scoring rules of a game as he explained why nullification under the power-conferring rules common to modern legal systems cannot be assimilated to sanctions under duty-imposing rules. (H. L. A. Hart, The Concept of Law). And he adverted to chess and cricket to explain one of his most distinctive theses—that rules, and so law, have an “internal aspect.” Chess players, he observed, do not merely have “habits of moving the Queen in the same way,” which an external observer might record. In addition, “they have a reflective critical attitude to this pattern of behavior: they regard it as a standard for all who play the game.”

Given the frequent appeals to sport in the work of legal philosophers, we should be surprised at the scant attention that has been paid to its complexities. So Mitchell Berman observes, in his wonderfully engaging essay on the “jurisprudence of sport.” As his discussion makes clear, sport isn’t merely a source of useful examples. On the contrary, “sports leagues constitute distinct legal systems,” he explains, sharing many features in common with legal systems proper, such as primary and secondary rules and institutional actors who function like legislators and adjudicators. For this reason alone, sport merits serious and sustained investigation by legal theorists.

Berman launches his discussion with a detailed recounting of the semi-final match of the 2009 U.S. Open. Serena Williams, the “odds-on favorite to win her third grand slam tournament of the year,” faced off against Kim Clijsters, in her surprise return to the tennis world. Williams lost the first set to Clijsters and was down 15-30, when a line judge called her for a foot fault on her second serve. Williams lost it, in more ways than one. Her outburst and threatening behavior toward the line judge resulted in a one-point penalty and a win for Clijsters, as well as additional penalties and universal criticism. As for the substance of her complaint, however, some sided with Williams, arguing that you just don’t call a foot fault at such a critical juncture. The position of Williams’ defenders rests on what Berman calls “temporal variance”—the view that “at least some rules of some sorts should be enforced less strictly toward the end of close matches.”

Having thus set the stage, Berman’s article goes on to investigate the “contours and bases of optimal temporal variance.” But this, he quickly makes clear, is just the “surface agenda.” His grander ambition, he explains, is to illustrate the worth of theoretical investigation of sport in order to encourage the development of a jurisprudence of sport. And yet he hints at a still grander ambition when he offers that one might see his article as “a manifesto of sorts for an enlarged program of jurisprudential inquiry.”

Berman offers persuasive considerations in favor of this broadest aim. Sport and law confront many of the same issues, he observes, including when and where to guide conduct by formal written norms rather than by informal social norms, when and where to make use of rules rather than standards, when and where to leave adjudicators with discretion, and how appropriately to limit that discretion. Sport and law must each manage epistemic uncertainty, as well as the normative uncertainty that arises when gaps are exposed between “the law in the books” and “the law in action.” An enlarged jurisprudential inquiry would improve our understanding of the “phenomena and dynamics” common to law and sport. The development of a jurisprudence of sport might prove particularly useful on the law side, not only because the rich examples sport provides can be plumbed as a source of jurisprudential hypotheses, but also because our intuitions about particular practices common to both are, in the sports context, “less likely … to be colored or tainted by possibly distracting substantive value commitments and preferences.”

Berman tells us a bit about where he anticipates that his own arguments might yield dividends for legal theory. Without elaborating, he offers that in light of his arguments, we can better understand the lost chance doctrine in torts, the difference between claim-processing rules and jurisdictional rules, and the granting of equitable remedies in certain contexts such as appellate litigation. Whatever the import of a jurisprudence of sport, and of Berman’s arguments in this article for the particular legal-theoretical issues he identifies, I suspect the broader inquiry Berman envisions may yield equally large dividends for more general legal theory.

Viewed in its entirety, Berman’s exploration of temporal variance and the rule-governed practices of various sports provides a compelling corrective to the temptation to think of rules as functioning to settle issues, and to settle them in particular by precluding appeal to the reasons for the rule—the underlying aims or standards the rule serves. Rules no doubt sometimes settle issues and always have some tendency to constrain, or better, to guide decision making. And yet as Berman’s essay illustrates, even the formal invariance of a rule does not always settle all questions we might have about the rule and its proper application “in action,” and not simply because of the complex interaction of the rules of a practice.

What makes his essay compelling in this way is precisely that if there are any practices in which we might be inclined to think of rules as operating pretty straightforwardly, surely sporting practices are among them. But if playing by the rules can be so complex and controversial in sport, where matters are not of such great moment, then so much the worse for law. Of course, one might naturally wonder whether puzzles of the sort Berman considers surrounding the rules and “rulified standards” in sport have important relevant counterparts in the law. More deeply, one might wonder whether the distance the study of sport allows from the evaluative commitments and preferences that operate when we consider law is such a good thing for jurisprudence. For a critical difference between law and sport is that legal issues do engage the values, interests, and concerns most fundamental to human life. The influence of our evaluative commitments and preferences can be distorting, but it can also help us to see things rightly.

As I suspect Berman would correctly point out, the expansive jurisprudential inquiry he advocates need neither assume near parity nor overlook how values can correct our perceptions and judgments. To be sure, among the things we would want to learn, and would learn, from a jurisprudence expanded to include sport are the limitations of sport for understanding as normatively rich a system as law. But what we learn about that also stands to improve our legal jurisprudence.

Mitchell Berman’s essay is beautifully written, rich in detail, and deep in its exploration of the complex interplay of the rules of sport and the distinctive aims and excellences that form the “internal morality” of particular sports. I found it thoroughly fascinating from start to finish and was left, as someone with no particular interest in sport, with an appreciation of why people as smart and insightful as Berman would find it so gripping. One can only hope that others will take him up on his invitation and that Berman himself will, in future work, begin to bring the seeds he has planted in this essay to fruition.

 

New Governance, Decentring & Unionization as the Default Option

David Doorey, Decentring Labor Law (June 14, 2010), available on SSRN.

There is a cadre of terrific Canadian labor and employment scholars, many of whom have received insufficient recognition in the U.S. As a group, these scholars bring interesting and sharp insights into the general problems of employment law not only in Canada but also around the world. They are much better versed in U.S. law than we generally are about Canadian law. Their insights are particularly useful for us since Canada and the U.S. share the basic “Wagner” model of union-management law. Among a long list of Canadian scholars, I want to focus on David Doorey, Professor of Labour and Employment Law, York University. His current piece on decentring workplace law is clever, bold, and interesting. He synthesizes a considerable range of theory, from the U.S. and elsewhere, to support a very provocative proposal.

The background for his article is the continuing decline of union membership which, with only a couple of exceptions–the Scandinavian countries and, curiously, China–is a worldwide phenomenon. With economic globalization reducing the significance of separate national economies, and with the laws of nation-states tied to the regulation of those economies, the decline should be no surprise, because unionism and labor law are paradigmatically national. Other factors, especially the ideological rejection of unionization by management, also play an important role in the decline. There is, of course, a tremendous amount of interesting and valuable scholarship addressing the situation and frequently calling for reforms aimed at reversing that trend. The now-failed Employee Free Choice Act (“EFCA”) was justified largely on the ground that it would help shift the momentum away from decline. The EFCA has been the subject of considerable scholarship, much of it aimed at evaluating its potential for turning momentum towards greater union density. (For what it is worth, my view is that the EFCA would have made only a marginal difference: because the decline in unionism is worldwide, it must rest on much more than the weaknesses of the NLRA in protecting the right of workers to organize.)

David Doorey takes a very different tack. His piece is provocative because he takes as given the decline of unionism and the lack of political will to do anything to directly counter that decline. Instead, he focuses on the role unionism would be able to play in an employment system that is generally non-union and where compliance with labor standards is low. He starts by discussing what we call “new governance” theory and what he calls “decentring” or “legal pluralism” theory. The starting point for decentring is that traditional, top-down, command-and-control legal regulation has failed to be efficacious. Decentring theory looks instead to how those subject to regulation operate and how legal regulations can be framed to create incentives for the regulatees to voluntarily comply. In other words, regulations should, to the extent possible, align the interest of the state in having its employment law implemented with other incentives to the firm. He rejects the critique that the new governance or decentring theories are a pollyannaish call for purely voluntary compliance that simply turns “hard” law into “soft” law, which equates to no enforcement. For Doorey, decentring theory focuses on constructing regulations that do lead private actors to conform their conduct to the goals of the law, doing so by finding ways in which the regulations can be structured to enhance voluntary compliance.

Looking at the role unionization can play in a world of much reduced union density, Doorey devises a regulatory scheme that has unionism operate as a default position available to workers unhappy with their treatment by their employers. What differs from the present system in the U.S. is that Doorey’s proposal would create a dual regulatory system. One track would be for “good” employers and would remain unchanged from the present law. The change would occur in the second track for employers found to be “bad,” based on their failure to comply with a defined set of employment standards–“targeted employment laws.” His proposal for the new track applicable to bad employers essentially channels the Employee Free Choice Act–card check recognition and first contract interest arbitration–but adds more elements, including requiring employers to provide more information to unions, mandating union access to workers at the workplace, and assigning a labor official to each organizing campaign by a union at one of these bad employers. Since he acknowledges the fierce resistance management has toward unionization, he utilizes the risk of unionization as the primary incentive for employers to comply with these targeted employment laws. Management’s antiunionism is the driving force pushing the employer to comply with the law to avoid being categorized as a bad employer that faces an increased risk of unionization. In other words, his proposal relies on “risk as labour law.” Management’s risk analysis should lead it to increase compliance with labor standards, despite the burdens of compliance, because of its desire to avoid an increased risk of unionization. Incorporated into that risk analysis would be the fact that workers generally favor unionization or at least some sort of independent representation vis-à-vis their employers. What I like so much about this article is that it is a concrete application of decentring theory. It avoids the rather abstract scholarship that has been all too common among its theorists.

Doorey’s intriguing proposal raises several questions. First, would such a dual regulatory system have any greater chance of enactment than the Employee Free Choice Act has had? Employers would still prefer managerial slack, but perhaps it would be harder for them to mount opposition because the question of unionization or not would depend on their compliance with the law. Raising the risk of unionization for lawbreakers but not law abiders is quite a different situation from increasing the general risk of unionization by, for example, adopting card check recognition that gives employers the argument that workers should hear the employer’s side before deciding whether or not to support a union. In other words, Doorey’s proposal would undercut the high-road argument that card check recognition interferes with employee free choice. Second, if adopted, Doorey’s system would essentially turn the right to organize into an instrumental rather than a basic right. While increasing the chance that employees of bad employers get union representation, it would consign the workers of good employers to the diminished chance to exercise their rights that the present woefully inadequate laws provide. Third, traditional enforcement techniques would seem to be required to establish which employers were bad and which were good. While it might be possible to rely on data about employer compliance to establish which ones were bad, employers would presumably fight long and hard to avoid being characterized as bad employers, and so the system might be hard to get into operation.

In sum, David Doorey is inventive and knowledgeable in his scholarship. His approach in his decentring article should stimulate new and interesting avenues for scholarly and political debate and development. His proposal essentially bridges the gap between new governance theorists and more traditional labor law scholars. He is just one example of the Canadian labor law scholars who can enrich our own, all too often, parochial vision of labor and employment law.

Cambian Rings of Constitutional Amendment

William W. Van Alstyne, Clashing Visions of a “Living” Constitution, 2011 Cato Supreme Court Review (forthcoming Sept. 2011), available at SSRN.

Can a constitution “live”? Is the alternative to a “living” constitution reinterpreted and modernized by judges a “dead” constitution hopelessly out of touch with modern realities? William Van Alstyne, in Clashing Visions of a “Living” Constitution, critiques (nay, mocks) several schools of living constitutionalism and sets out what he believes is the one true path to a living constitution. This essay is lively, insightful, and irreverent, and it makes an important, if not wholly novel, set of points. It reminds me anew why I have recommended Van Alstyne’s “critical guides” to Marbury and McCardle to Constitutional Law students for years.

The essay (originally a lecture) opens with musings on confirmation hearings for Supreme Court justices and the proper scope of judicial constitutional review. Acknowledging that there are many schools of constitutional interpretation, Van Alstyne looks at various schools associated with the notion that the United States Constitution is a “living” constitution. He examines non-interpretivists’ (non-originalist interpretivists’?) efforts to “free us from the despair of textual uncertainty” and “the tyranny of-and-the-futility-of endlessly-contestable history.”

Van Alstyne focuses particular scorn on one non-interpretivist, Bruce Ackerman, who famously elucidated a theory of de facto constitutional amendment outside the process provided in Article V. Noting that various amendments to the U.S. Constitution as originally understood could not support the interpretations given them by the Supreme Court, Ackerman (in Van Alstyne’s prose):

at once went on, forcefully, to declare that the Court’s decisions could nonetheless be rightly seen as actually resting on solid and secure foundations, namely, foundations of “nontextual amendments” or, to give credit (where such credit is surely due!), to what one may—in my own view—call “Ackerman” amendments, and, accordingly, all those who enlist in this school of constitutional jurisprudence are perhaps best described either as “Ackerlytes” or even, perhaps, as “Ackolytes” (but surely not so churlishly, perhaps, as mere “Ackermaniacs”)…. [Changes brought about through the appointment process by a President who was thereafter reelected] serve as “real amendments.”  And so, accordingly, it would be inappropriate for any later Supreme Court to go back [on such amendments] … [T]his is the way—or at least one equally valid way—in which you keep the Constitution “alive.”

What rapier-like prose! Could Pope have done better? But, lest I digress, we should return to the basic argument.

Constitutionalists, says Van Alstyne, fall into two basic camps: “obligationists” and “opportunists.” Discussing the lively but well-worn example of Hugo Black, Van Alstyne says the former read (and re-read!) the Constitution’s text. Obligationist judges take seriously the Article VI oath to “support and defend this Constitution, not some other.” Though they may differ on interpretations of a constitutional phrase, they are committed to a non-living interpretive task, the living constitution left by them with the people through the Article V amendment process. Opportunists, conversely, whether from the right or the left, interpret “suitably adaptable clauses” expansively, ignoring clauses not aligned to their desires.

Using the metaphor of visible cambian rings that record a tree’s growth, Van Alstyne says amendments to the US Constitution register changes in society. A healthy society should display these changes in formal amendments, not through sleight-of-hand and scarcely visible reinterpretations by unelected judges (whether or not their appointing presidents are reelected). Is our society healthy? Not by this measure, for an absence of cambian rings signals petrification. Today, he hypothesizes, there is a “negative synergy” for new textual amendments because the public is unwilling to entrust new constitutional texts to opportunist judges who might expand upon the meaning of any such public commitment.

The failed Equal Rights Amendment, which provided an opportunity for one such authentic constitutional cambian ring, is illustrative. Opponents, with some justification, argued that this amendment, expansively interpreted in ways wholly unintended, might remake cultural norms (including dress differences), weaken military muscle, and undermine institutions like the family, motherhood, and marriage. By contrast, argues Van Alstyne, the 19th Amendment, which “gave” women the right to vote, was a reflection that women had already been voting in a majority of the states at the time of the amendment. The “stealth” 27th Amendment, ratified from 1789 to 1992 by far more dead than living Americans, is hardly a ring, but at least it is “of no particular harm.”

Providing “an illustration central to the theme of this lecture in a contemporary setting,” Van Alstyne hypothesizes a federal statute that reduces jury size for federal court criminal cases to seven persons (from the current twelve). Functionalist supporters might say this reform would save costs or reduce the number of hung juries, perhaps helping to take criminals off the street. Functionalist opponents might say that the problem of costs to the criminal justice system comes from the proliferation of crimes, not the number of criminals, and that reduction of jury size violates the “personhood” of the defendant.

Were one to peruse the text of the 6th Amendment, one would find a “right to trial by jury” but no jury size specification. Does failure to specify mean any size goes? Would a speech by James Madison introducing the Bill of Rights in the first Congress that said “any size is fine” close the case (even though notes of the speech may not be accurate and others may have disagreed with Madison, either in Congress or in state ratification discussions)? Since there was no such speech, one might look instead to the Article III provision–“the trial of all crimes, except in cases of impeachment, shall be by jury”–to the debates at the Constitutional Convention, and to the ratification debates to see if anything was said about jury size. In the Virginia ratification convention, it turns out, there was discussion of the point, and Madison (reportedly) said that “jury” meant “12” as a technical term going back to Blackstone. This interpretation, apparently, was acceptable to skeptics.

How does this relate to confirmation of judges, the point of departure for Van Alstyne’s essay? The point, he says (as forcefully as Ackerman pushes nontextual amendments), is that the people will be loath to turn any new amendment over to judges for interpretation unless those judges are obligationists. If judges (and constitutional law scholars) take as their mission to fashion the world into their constitutions rather than this constitution, new cambian rings will not be forthcoming. Confirmation processes will remain political catfights between opportunistic senators of the left and right, and Congress itself will continue to be lazy concerning its own constitutional constraints.

Having for several years taught a seminar on constitutional amendment, I second these observations, as well as this parting concern: “That during the decades of my own (misbegotten?) most active academic years, we may have so far gotten accustomed to the ‘exogenous’ Constitution that the amendment process has itself begun to recede down a rabbit hole … and the country may frankly be not really better—but significantly less—well off on that account.”

Interestingly, the Supreme Court, on the one occasion when it considered what it means to be “attached to the principles of the Constitution of the United States,” concluded (albeit with some dissent), that this did not mean attached to rights of contract, compensation for property taken, free speech, freedom of religion, bearing arms, (unlimited) other rights, states having all powers not (narrowly) delegated, equal protection, or due process. Rather, said the Justices in Schneiderman v. United States, it means attachment to the Article V process of authentic, difficult, super-majoritarian, and peaceful change.  Now that’s a living constitution!

Tracing the Roots of Inequalities: Why Scholars Need to Widen their Nets

In Critical Directions in Comparative Family Law: Genealogies and Contemporary Studies of Family Law Exceptionalism, authors Janet Halley (Harvard) and Kerry Rittich (Toronto) offer a compelling way to think about the doctrinal areas which for so many of us are handy ways of defining our area of scholarship. The problem is that these “areas” are often less than helpful when trying to define the legal context of equality problems, and they are a positive danger when we move on to consider law reform options. Halley and Rittich take on these problems as they relate to “family law”.

Let me start by saying that even on its own terms, this article is fundamentally about equality questions. Halley and Rittich are clear that family law is about “distributional outcomes” (P. 755) and that the legally constituted family is closely linked to market distributions, even if those links are often masked. They argue that the family should be recognized as an “economic unit” and not only as an “affective unit”. The authors encapsulate this idea in their use of the term “economic family,” signaling that they would put “the family and the market, family law and contract, back into contiguity” (P. 758), resisting the claim that the “economic character of the family” has disappeared in modern and postmodern times. Key to this resistance is accepting that the household is (still) a critical economic unit.

As the authors develop this idea, they ask how what is commonly referred to as “family law” governs the ongoing negotiations which characterize a household–“marriage or divorce; deciding how much to invest in the education of children; tolerating domestic violence or deciding to escape it….who will take out the garbage.” (P. 761.)  At this point, they turn to delineating the “background rules”–not “family law” as defined in law school courses, but the other rules, those “artificially segregated” from family law and defined by different headings. (P. 761.)  These legal backgrounds are labeled, in a user-friendly taxonomy, Family Law (FL)1, FL2, FL3 and FL4.

These categories describe the relationship of various kinds of law or rules to the “economic family” or “household”. FL1 is the law school course in Family Law. But, “if you wanted to understand how law contributes to the ways in which the actual family and household life is led by actual people, you would never stop there.” (P. 761.) The authors take us next to FL2, comprising the explicitly “family” targeted provisions of various other forms of law (including immigration and bankruptcy law). FL3 moves further into the background, “the myriad legal regimes that contribute structurally but silently to the ways in which family life is lived” (P. 762.) These contributions might be intentional, unintentional, helpful or harmful. Rittich and Halley cite “occupancy limits in landlord/tenant law that give more or less protection to incumbents” and employment rules that permit dismissal by the employer “at will” as part of this zone. Finally, FL4 consists of informal norms, since recognizing the impact of these ideas means recognizing that they often trump formal laws in terms of impact on the organization of a household.

Here are the four things I get from this. Firstly, the critique advanced here of family law as a “liberal” idea is one which explodes a variety of accepted ways of doing things (or, as the authors write, “give the lie to the apparent naturalness of what we are doing now”). Secondly, we can all think about how our particular area of study (assuming it isn’t family law proper) might be affecting households. For instance, co-author Kerry Rittich, in another article in the same volume, does this in relation to development policy and legal reform, tracing the path from market reform to the transformation of families and reallocations of “resources and power” within households. (P. 765.) These effects will be, according to this article, profoundly distributional and therefore are easily part of the equation for equality scholars. Thirdly, at the same time, we can imagine developing similar genealogies for other areas of law (Equality Law 1, 2, 3 and 4?), genealogies which push us to be more critical (small c or big C) about the multiple layers of distributional effects that equality law has, and more generally about the scope of our fields. Finally, the model is extremely helpful, if daunting, when attempting to engage in law reform efforts. It means that we can look outside the obvious points of intervention–but it also means that we have to consider a very complicated set of interactions and results from any proposed change. Anything that makes “unintended consequences” a bit more predictable is helpful, and many law reform campaigns could do with a more critical eye. This model offers us a way to see that things are more complicated or maybe even simpler than we might at first think.

These four things are all outside the “urgent” need which drove this article, the need for a better model for comparative family law analysis. There is so much in this short (24-page) article. Each time I read it I find something new.

Subconscious Impact

L. Song Richardson, Arrest Efficiency and the Fourth Amendment, 95 Minn. L. Rev. __ (forthcoming 2011), available at SSRN.

He was a widely respected leader in his class, courted by some of Washington DC’s top law firms. Though a student, he already had a book of potential business that top sports lawyers salivated over, plus a post working in the Senate for the summer. He was the kind of student who listened carefully when others spoke rather than speaking often, but when he did speak people listened because his insights were often illuminating.

On the roadways, however, he was just another black man, driving as carefully as possible because he was a black man on the freeway. He was stopped anyway for unknown reasons, ordered out of the car, frisked like a criminal on the side of the road, and waved on his way when the roving search yielded nothing. He wondered what recourse he had to realize the protection of the criminal procedure rights we were studying, the standards that say you cannot be stopped without reasonable articulable suspicion of a crime, that you cannot be frisked without reasonable articulable suspicion that you pose a danger to officers.

His account was immensely moving to me and his fellow students. He never allowed himself to show frustration or anger. He always broadcast careful grace and gravitas, even in this account. But you could feel it. In fact, we all felt frustrated and angry for him. How could this happen to him–to be demeaned in this way, to live with de facto differential rights?

A few months later, I was moderating a panel on race and the criminal justice system at the University of Washington and listening to another person with grace and gravitas speak. He was the kind of law enforcement officer I have had the privilege of working with before, who entered a very difficult field because he passionately wanted to be one of the good guys, there for you in your time of distress. He was describing how, for police officers who strive hard and genuinely believe they are dispensing evenhanded justice, it can be deeply frustrating to be besieged by frequent accusations that they and their professional comrades are racists, are biased, and are the ones responsible for the gross racial disparities in who is targeted and incarcerated in the criminal justice system. It makes them hunker down. It may even make some officers resentful rather than receptive when undergoing training about dealing with diversity and race because the training seems like just another accusation.

When you have worked and taught in the passionately polarized domain of criminal law and procedure, and have seen stark problems entrenched in part because good people from various vantages feel besieged and beset, it makes you yearn for ways we can understand each other. How can we work with each other, without hackles raised, so that we can do more than merely face off? The status quo, desperately in need of reform, is left entrenched for lack of consensus. And why do we keep doing this, despite knowing that something is deeply wrong in a system with severe racial disparities? Is averting our gaze and enduring hurt and accusations the best that law can do?

This is why I loved the rich and fascinating body of literature on implicit social cognition from the moment I read Charles R. Lawrence’s The Id, the Ego, and Equal Protection in Reva Siegel’s constitutional law class as a 1L. As Jerry Kang and Mahzarin Banaji, two leaders in the law and implicit bias literature, explain, “[t]he science of implicit social cognition examines those mental processes that operate without conscious awareness or conscious control” that influence our evaluation of others. A host of studies have replicated the finding of strong implicit bias against outgroup members that may clash with our consciously avowed and desired beliefs. The law and implicit bias literature has burgeoned, and the insights from social psychology have been immensely productive in legal scholarship and criminal justice scholarship.1

L. Song Richardson’s Arrest Efficiency and the Fourth Amendment is an excellent forthcoming article that applies the insights from the research on implicit bias to the Fourth Amendment legal regime and policing. I had the pleasure of hearing a fabulous talk based on the article by Song at the LatCrit Conference hosted at American University Washington College of Law in 2009 and am delighted that the article will be hitting the presses soon. The article begins with a puzzle when it comes to “arrest efficiency.” The available data on “hit rates”–the rate at which police find illegal contraband or other evidence in a stop and search–show either higher success rates in searches of whites or at least equal success rates between searches of whites and blacks. Yet the data indicate that police target blacks at higher rates than whites for stops and frisks. Song’s article examines the phenomenon in the context of Terry stops and frisks on the street. In a future article she will examine the phenomenon in the context of traffic stops.

Song argues that part of the reason police persist in higher rates of less efficient searches of blacks is implicit bias. Song masterfully marshals the research from the social psychology literature showing how the “stereotype of blacks, especially young men, as violent, hostile, aggressive, and dangerous” permeates social perception, often at the subconscious level, to the detriment of life opportunities and civil liberties. She explains that police are more efficient and accurate when they stop whites because they tend to base their suspicion on more accurate indicia of suspicious activity.

Song argues that Fourth Amendment doctrine, with its traditional deference to police perception, needs to take into consideration the large body of evidence on how implicit bias can skew perception even without explicit bias. She argues that courts should ask for better empirical support for police inferences about suspicious activity. Song also argues that, to render stops more accurate, courts should hold that race is irrelevant in justifying a Terry seizure.

Applying insights from the research on implicit social cognition to policing is salutary. For the officers who hunker down when faced with what feels like another accusation, I hope we can translate implicit bias research in a way that fosters receptivity toward understanding and ameliorating subconscious impact. It may help to avoid the term bias, which could shut down the will to listen because it may sound like another personalized accusation. The power of implicit social cognition research is to depersonalize blame, showing how subconscious impact may be a cultural problem, and how people who genuinely believe themselves to be acting nobly may exert subconscious impact. As lawyers, particularly criminal lawyers, we are all too good at pointing fingers. But to progress, perhaps the better approach is to emerge from the posture of fierce polarization and defensiveness and find ways to more accurately see and understand each other.

  1. Gary Blasi, Advocacy Against the Stereotype: Lessons from Cognitive Social Psychology, 49 UCLA L. Rev. 1241 (2002); Joshua Correll, The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals, 83 J. Pers. & Soc. Psych. 1314 (2002); Scott W. Howe, The Futile Quest for Racial Neutrality in Capital Selection and the Eighth Amendment Argument for Abolition Based on Unconscious Racial Discrimination, 45 Wm. & Mary L. Rev. 2083 (2004); Sheri Lynn Johnson, Unconscious Racism and the Criminal Law, 73 Cornell L. Rev. 1016 (1988); Jerry Kang, Trojan Horses of Race, 118 Harv. L. Rev. 1489 (2005); Cynthia Lee, The Gay Panic Defense, 42 U.C. Davis L. Rev. 471 (2008); Rory K. Little, What Federal Prosecutors Really Think: The Puzzle of Statistical Race Disparity Versus Specific Guilt, and the Specter of Timothy McVeigh, 53 DePaul L. Rev. 1591 (2004); Jeffrey J. Pokorak, Probing the Capital Prosecutor’s Perspective: Race of the Discretionary Actors, 83 Cornell L. Rev. 1811 (1998); Yoav Sapir, Neither Intent nor Impact: A Critique of the Racially Based Selective Prosecution Jurisprudence and a Reform Proposal, 19 Harv. BlackLetter L.J. 127 (2003).