Exclusive: The Cambridge History of American Law, Part Two
Author: editor, Shijiazhuang international trade lawyer   Source: Legal Counsel Network (foreign-related), www.flguwen.com   Date: 2010-11-19 16:12:00

The Cambridge History of Law in America
Volume II
The Long Nineteenth Century (1789–1920)
Law stands at the center of modern American life. Since the 1950s, American historians
have produced an extraordinarily rich and diverse literature that has vastly
expanded our knowledge of this familiar and vital yet complex and multifaceted
phenomenon. But few attempts have been made to take full account of law’s American
history. The Cambridge History of Law in America has been designed for just
this purpose. In three volumes we put on display all the intellectual vitality and
variety of contemporary American legal history. We present as comprehensive and
authoritative an account as possible of the present understanding and range of
interpretation of the history of American law. We suggest where future research
may lead.
In the long century after 1789 we see the crystallization and, after the Civil
War, the reinvention of a distinctively American state system – federal, regional
and local; we see the appearance of systematic legal education, the spread of the
legal profession, and the growing density of legal institutions. Overall, we learn
that in America law becomes a technique of first resort wherever human activity,
in all shapes and sizes, meets up with the desire to organize it: the reception
and distribution of migrant populations; the expulsion and transfer of indigenous
peoples; the structure of social life; the liberation of slaves and the confinement
of freed people; and the great churning engines of continental expansion, urban
growth, capitalist innovation, industrialization. We see how law intertwines with
religion, how it becomes ingrained in popular culture, and how it intersects with
the semi-separate world of American militarism and with the “outside” world of
other nations.
The Cambridge History of Law in America has been made possible by the generous
support of the American Bar Foundation. Volumes I and III cover the history of
law in America, respectively, from the first moments of English colonizing through
the creation and stabilization of the republic; and from the 1920s until the early
twenty-first century.
Michael Grossberg is the Sally M. Reahard Professor of History and a Professor of
Law at Indiana University. His research focuses on the relationship between law
and social change, particularly the intersection of law and the family.
Christopher Tomlins is Senior Research Fellow at the American Bar Foundation
in Chicago. His research encompasses the relationship among labor, colonization,
and law in early America; the conceptual history of police in Anglo-American law
and politics; and the place of historical materialism in legal theory.
The Cambridge History of Law in America
Volume II
The Long Nineteenth Century (1789–1920)
Edited by
MICHAEL GROSSBERG
Indiana University
CHRISTOPHER TOMLINS
The American Bar Foundation, Chicago
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9780521803069
© Cambridge University Press 2008
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without
the written permission of Cambridge University Press.
First published 2008
Printed in the United States of America
A catalog record for this publication is available from the British Library.
Library of Congress Cataloging in Publication Data
The Cambridge history of law in America / edited by Michael Grossberg,
Christopher Tomlins.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-521-80306-9 (hardback)
1. Law – United States – History. I. Grossberg, Michael, 1950– II. Tomlins,
Christopher L., 1951– III. Title.
KF352.C36 2007
349.73–dc22 2007017606
ISBN 978-0-521-80306-9 hardback
Cambridge University Press has no responsibility for
the persistence or accuracy of URLs for external or
third-party Internet Web sites referred to in this publication
and does not guarantee that any content on such
Web sites is, or will remain, accurate or appropriate.
Contents

Editors’ Preface
1. Law and the American State, from the Revolution to the Civil War: Institutional Growth and Structural Change (Mark R. Wilson)
2. Legal Education and Legal Thought, 1790–1920 (Hugh C. MacGill and R. Kent Newmyer)
3. The Legal Profession: From the Revolution to the Civil War (Alfred S. Konefsky)
4. The Courts, 1790–1920 (Kermit L. Hall)
5. Criminal Justice in the United States, 1790–1920: A Government of Laws or Men? (Elizabeth Dale)
6. Citizenship and Immigration Law, 1800–1924: Resolutions of Membership and Territory (Kunal M. Parker)
7. Federal Policy, Western Movement, and Consequences for Indigenous People, 1790–1920 (David E. Wilkins)
8. Marriage and Domestic Relations (Norma Basch)
9. Slavery, Anti-Slavery, and the Coming of the Civil War (Ariela Gross)
10. The Civil War and Reconstruction (Laura F. Edwards)
11. Law, Personhood, and Citizenship in the Long Nineteenth Century: The Borders of Belonging (Barbara Young Welke)
12. Law in Popular Culture, 1790–1920: The People and the Law (Nan Goodman)
13. Law and Religion, 1790–1920 (Sarah Barringer Gordon)
14. Legal Innovation and Market Capitalism, 1790–1920 (Tony A. Freyer)
15. Innovations in Law and Technology, 1790–1920 (B. Zorina Khan)
16. The Laws of Industrial Organization, 1870–1920 (Karen Orren)
17. The Military in American Legal History (Jonathan Lurie)
18. The United States and International Affairs, 1789–1919 (Eileen P. Scully)
19. Politics, State-Building, and the Courts, 1870–1920 (William E. Forbath)
Bibliographic Essays
Notes on Contributors
Index
Editors’ Preface
In February 1776, declaiming against the oppressive and absolute rule of
“the Royal Brute of Britain,” the revolutionary pamphleteer Tom Paine
announced to the world that “so far as we approve of monarchy . . . in
America the law is king”! Paine’s declaration of Americans’ “common
sense” of the matter turned out to be an accurate forecast of the authority
the legal order would amass in the revolutionary republic. Indeed, Paine’s
own fiery call to action was one of the stimuli that would help his prediction
come true. We know ourselves that what he claimed for law then
mostly remains true now. Yet, we should note, Paine’s claim was not simply
prophecy; it made sense in good part because of foundations already laid.
Long before 1776, law and legal institutions had gained a place of some
prominence in the British American colonies. The power and position of
law, in other words, are apparent throughout American history, from its
earliest moments. The three volumes of The Cambridge History of Law in
America explain why Paine’s synoptic insight should be understood as both
an eloquent foretelling of what would be and an accurate summation of what
already was.
The Cambridge History of Law in America belongs to a long and proud
scholarly tradition. In March 1896, at the instigation of Frederick William
Maitland, Downing Professor of the Laws of England at Cambridge University,
and of Henry Jackson, tutor in Greek at Trinity College, the syndics
of Cambridge University Press invited the University’s Regius Professor
of Modern History, Lord John Dalberg Acton, to undertake “the general
direction of a History of the World.” Six months later Acton returned with
a plan for a (somewhat) more restrained endeavor, an account of Europe and
the United States from The Renaissance to The Latest Age. Thus was born The
Cambridge Modern History.
Acton’s plan described a collaborative, collectively written multivolume
history. Under general editorial guidance, each volume would be
divided among “specially qualified writers” primed to present extensive and
authoritative accounts of their subjects.1 They were to imagine themselves
writing less for other professional historians than for a more general audience
of “students of history” – anyone, that is, who sought an authoritative,
thoughtful, and sophisticated assessment of a particular historical subject or
issue. Acton envisioned a history largely clean of the professional apparatus
of reference and citation – texts that would demonstrate the “highest pitch
of knowledge without the display,” reliant for their authority on the expertise
of the authors chosen to write them. And although it was intended that
the History be the most complete general statement of historical knowledge
available, and to that extent definitive, Acton was not interested in simply
reproducing (and thus by implication freezing) what was known. He desired
that his authors approach the task critically, strive for originality in their
research, and take it on themselves to revise and improve the knowledge
they encountered.2
Acton did not live to see even the first volume in print, but between
1902 and 1911 The Cambridge Modern History appeared in twelve substantial
volumes under the editorial direction of Adolphus Ward and Stanley
Leathes. The History quickly found a broad audience – the first volume, The
Renaissance, sold out in a month. Other Cambridge histories soon followed:
The Cambridge History of English Literature, which began to appear under
Ward’s editorship in 1907; The Cambridge Medieval History (1911–36); The
Cambridge History of American Literature (1917–21); The Cambridge Ancient
History (1923–39); The Cambridge History of the British Empire (1929–67);
The Cambridge History of India (1922–60), and more. All told, close to a
hundred Cambridge histories have been published. More than fifty are currently
in print. Cambridge histories have justly become famous. They are
to be found in the collections of libraries and individuals throughout the
world.
Acton’s plan for The Cambridge Modern History invoked certain essentials –
an ideal of collective authorship and a commitment to make expertise accessible
to a wider audience than simply other specialists. To these he added
grander, programmatic touches. The History would be “an epic,” a “great
argument” conveying “forward progress . . . upward growth.” And it would
provide “chart and compass for the coming century.” Such ambitions are
characteristic of Acton’s moment – the later nineteenth century – when in
Britain and Continental Europe history still claimed an educative mantle
“of practical utility,” the means rather than science (or law) to equip both
elites and ordinary citizens “to deal with the problems of their time.” It
was a moment, also, when history’s practitioners could still imagine filling
historical time with a consistent, standardized account – the product, to be
sure, of many minds, but minds that thought enough alike to agree on an
essential common purpose: “men acting together for no other object than
the increase of accurate knowledge.” Here was history (accurate knowledge)
as “the teacher and the guide that regulates public life,” the means by which
“the recent past” would yield up “the key to present time.” Here as well,
lest we too quickly dismiss the vision as naïve or worse, was the shouldering
of a certain responsibility. “We have to describe the ruling currents, to
interpret the sovereign forces, that still govern and divide the world. There
are, I suppose, at least a score of them, in politics, economics, philosophy
and religion. . . . But if we carry history down to the last syllable of recorded
time, and leave the reader at the point where study passes into action, we
must explain to him the cause, and the growth, and the power of every great
intellectual movement, and equip him for many encounters of life.”

1 When, early on, Acton ran into difficulties in recruiting authors for his intimidating
project, Maitland gently suggested that “his omniscient lordship” simply write the whole
thing himself. Acton (we note with some relief) demurred. There is humor here, but also
principle. Collective authorship is a practice ingrained in the Cambridge histories from
the beginning.
2 Our account of Acton’s plan and its realization gratefully relies throughout on Josef
L. Altholz, “Lord Acton and the Plan of the Cambridge Modern History,” The Historical
Journal, 39, no. 3 (September 1996), 723–36.
Acton’s model – a standard general history, a guiding light produced
by and for an intellectually confident elite – could not survive the shattering
effects of two world wars. It could not survive the democratization of
higher education, the proliferation of historical scholarship, the constant
emergence of new fields and subdisciplines, the eventual decentering of
Europe and “the West.” When, amid the rubble and rationing of a hastily
de-colonizing post–World War II Britain, Cambridge University Press’s
syndics decided a revised version was required – a New Cambridge Modern
History for a new day – their decision acknowledged how much the world
had changed. The revised version bore them out. Gone was Acton’s deep
faith in history’s authority and grandeur. The general editor, G. N. Clark,
wrote, “Historians in our self-critical age are aware that there will not
be general agreement with their conclusions, nor even with some of the
premises which they regard as self-evident. They must be content to set out
their own thought without reserve and to respect the differences which they
cannot eradicate” – including, he might have added (but perhaps there was
no need) the many fundamental differences that existed among historians
themselves. Cambridge histories no longer aspired to create standardized
accounts of the way things had been nor to use the past to pick the lock on
the future. The differences in perspective and purpose that a less confident,
more self-critical age had spawned were now the larger part of the picture.
Yet the genre Acton helped found has now entered its second century. It
still bears, in some fashion, his imprint. The reason it has survived, indeed
prospered, has less to do with some sense of overall common purpose than
the more modest but nevertheless essential precept of continued adherence
to certain core principles of design simply because they have worked: individual
scholars charged to synthesize the broad sweep of current knowledge
of a particular topic, but also free to present an original interpretation aimed
at encouraging both reflection and further scholarship, and an overall architecture
that encourages new understandings of an entire subject or area of
historical scholarship. Neither encyclopedias nor compilations, textbooks
nor works of reference, Cambridge histories have become something quite
unique – each an avowedly collective endeavor that offers the single best
point of entry to the wide range of an historical subject, topic, or field;
each in overall conceptual design and substance intent not simply on defining
its field’s development to date but on pushing it forward with new
ideas. Critique and originality, revision and improvement of knowledge –
all remain germane.
Readers will find that The Cambridge History of Law in America adheres to
these core goals. Of course, like other editors we have our own particular
ambitions. And so the three volumes of this Cambridge history have been
designed to present to full advantage the intellectual vitality and variety of
contemporary American legal history. Necessarily then – and inevitably –
The Cambridge History of Law in America dwells on areas of concern and interpretive
debates that preoccupy the current generation of legal historians.
We do not ignore our predecessors.3 Nor, however, do we attempt in the
body of the History to chart the development of the field over their time and
ours in any great detail. Readers will find a more substantial accounting of
that development in the bibliographic essays that accompany each chapter,
but as editors we have conceived our job to be to facilitate the presentation
of as comprehensive and authoritative a rendition of the present understanding
of the history of American law as possible and to suggest where
future research may lead.
Cambridge histories always define their audiences widely; ours is no
exception. One part of our intended audience is scholarly, but hardly confined
to other legal historians; they are already the best equipped to know
something of what is retailed here. So to an important extent we try to look
past legal historians to historians at large. We also look beyond history to
scholars across the broad sweep of law, the humanities, and the social sciences
– indeed to any scholar who may find a turn to law’s history useful (or
simply diverting) in answering questions about law and society in America.
3 See, for example, the graceful retrieval and reexamination of themes from the “imperial
school” of American colonial historians undertaken by Mary Sarah Bilder in Volume I,
Chapter 3.
A second part of our audience is the legal profession. Lawyers and judges
experience in their professional lives something of a practical encounter
with the past, although the encounter may not be one they would recognize
as “historical.” As John Reid has written, “The lawyer and the historian have
in common the fact that they go to the past for evidence, but there the similarity
largely ends.” Here lawyers and judges can discover for themselves
what historians do with evidence. In the process, they will also discover
that not inconsiderable attention has been paid to their own lives and experiences.
Legal historians have always known how important legal thought
and legal education are in the formation of the professional world of the law,
and both feature prominently in this History. Here the profession encounters
the history of its activities and of the medium it inhabits from a standpoint
outside itself.
The third segment of our intended audience is the general public. Our
purposes in this encounter are not Acton’s. We do not present this History as
the means to educate a citizenry to deal with the problems of the moment.
(Indeed, it is worth noting that in America law appropriated that role to
itself from the earliest days of the republic.) Like G. N. Clark, today’s
historians live in self-critical times and have lower expectations than Lord
Acton of what historical practice might achieve. That said, readers will find
that this History touches on many past attempts to use law to “deal with”
many past problems: in the America where law is king, it has been law’s fate
to be so employed. And if their accounts leave some of our authors critical
in their analysis of outcomes or simply rueful in recounting the hubris (or
worse) of the attempts, that in itself can be counted an education of sorts.
Moreover, as Volume III’s chapters show repeatedly, Americans continue
to turn to law as their key medium of private problem solving and public
policy formation and implementation, and on an expanding – global –
stage. In that light, there is perhaps something for us to learn from Acton’s
acknowledgment that the scholar-expert should not abandon the reader “at
the point where study passes into action.” We can at the very least offer
some reflection on what an encounter with the past might bring by way of
advice to the “many encounters of life” lying ahead.
In reaching all three of our intended audiences, we are greatly assisted
by the pronounced tendency to “demystify” and diversify its subject that
has characterized American legal history for a half-century. To some, the
field’s very title – “legal history” – will conjure merely an arcane preoccupation
with obscure terminologies and baffling texts, the doctrines and
practices of old (hence defunct) law, of no obvious utility to the outsider
whether historian or social scientist or practicing lawyer or just plain citizen.
No doubt, legal history has at times given grounds to suppose that such
a view of the discipline is generally warranted. But what is interesting
in American legal history as currently practiced is just how inappropriate
that characterization seems.
To read the encomia that have accumulated over the years, one might
suppose that the demise of legal history’s obscurity was the single-handed
achievement of one man, James Willard Hurst, who on his death in 1997 was
described in the New York Times as “the dean of American legal historians.”
Indeed, Hurst himself occasionally suggested the same thing; it was he who
came up with the aphorism “snakes in Ireland” to describe legal history in
America at the time he began working in the field in the 1930s. Though not
an immodest man, it seems clear whom he cast as St. Patrick. Yet the Times’
description was merited. Hurst’s lifework – the unpacking of the changing
roles of American law, market, and state from the early nineteenth to the
early twentieth centuries – set the agenda of American legal historians
from the 1950s well into the 1980s. That agenda was a liberation from
narrower and more formalistic preoccupations, largely with the remote
origins of contemporary legal doctrine or with the foundations of American
constitutionalism, that had characterized the field, such as it was, earlier
in the century. Most important, Hurst’s work displayed some recognition
of the multidimensionality of law in society – as instrument, the hallmark
with which he is most associated, but also as value and as power. Hurst,
in short, brought legal history into a continuing dialogue with modernity,
capitalism, and the liberal state, a dialogue whose rich dividends are obvious
in this History.
Lawyers have sometimes asked aggressively anachronistic questions of
history, like – to use an apocryphal example of Robert Gordon’s – “Did the
framers of the Constitution confer on the federal government the power
to construct an interstate highway system?” Hurstian legal history did not
indulge such questions. But Hurstians did demonstrate a gentler anachronism
in their restriction of the scope of the subject and their interpretation
of it. Famously, for Hurst, American legal history did not begin until the
nineteenth century. And when it did begin it showed a certain consistency
in cause and effect. As Kermit Hall summarized the view in 1989, “Our
legal history reflects back to us generations of pragmatic decision making
rather than a quest for ideological purity and consistency. Personal
and group interests have always ordered the course of legal development;
instrumentalism has been the way of the law.”4 The Hurstian determination
to demystify law occasionally reduced it to transparency – a dependent
variable of society and economy (particularly economy) tied functionally to
social and economic change.
4 Kermit L. Hall, The Magic Mirror: Law in American History (New York, 1989), 335.
As a paradigm for the field, Hurstian legal history long since surrendered
its dominance. What has replaced it? In two words, astonishing variety.
Legal historians are aware that one cannot talk or write about economic
or social or political or intellectual history, or indeed much of any kind of
history, without immediately entering into realms of definition, prohibition,
understanding, practice, and behavior that must imply law to have
meaning. Try talking about property in any of those contexts, for example,
without implying law. Today’s legal historians are deeply engaged across
the full range of historical investigation in demonstrating the inextricable
salience of law in human affairs. As important, the interests of American
historians at large have never been more overtly legal in their implications
than now. To take just four popular areas of inquiry in American history –
citizenship and civic personality, identity, spatiality, and the etiology of
social hierarchy and subordination – it is simply impossible to imagine
how one could approach any of these areas historically without engaging
with law, legal ideology, legal institutions, legal practices, and legal discourse.
Legal historians have been and remain deeply engaged with and
influenced by social history, and as that field has drifted closer and closer to
cultural history and the historical construction of identity so legal history
has moved with it. The interpretive salience of race and ethnicity, of gender
and class is as strong in contemporary legal historical practice as in any
other realm of history. Add to that the growing influence of legal pluralism
in legal history – the migration of the field from a focus on “the law” to
a focus on the conditions of existence of “legality” and the competition of
many alternative “legalities” – and one finds oneself at work in a field of
immense opportunity and few dogmas.
“Astonishing variety” demonstrates vitality, but also suggests the benefits
of a judicious collective effort at authoritative summation. The field
has developed at an extraordinary rate since the early 1970s, but offers no
work that could claim to approach the full range of our understanding of the
American legal past.5 The Cambridge History of Law in America addresses both
the vitality of variety and its organizational challenge. Individually, each
chapter in each volume is a comprehensive interrogation of a key issue in a
particular period of American legal history. Each is intended to extend the
substantive and interpretative boundaries of our knowledge of that issue.
The topics they broach range widely – from the design of British colonizing
to the design of the successor republic and of its successive nineteenth- and
twentieth-century reincarnations; from legal communications within
empires to communications among nation-states within international law
to a sociology of the “legalization” that enwraps contemporary globalism;
from changes in legal doctrine to litigation trend assessments; from clashes
over law and religion to the intersection of law and popular culture; from
the movement of peoples to the production of subalternship among people
(the indigenous, slaves, dependents of all kinds); and from the discourse
of law to the discourse of rights. Chapters also deal with developments
in specific areas of law and of the legal system – crime and criminal justice,
economic and commercial regulation, immigration and citizenship,
technology and environment, military law, family law, welfare law, public
health and medicine, and antitrust.6
Individual chapters illustrate the dynamism and immense breadth of
American legal history. Collectively, they neither exhaust its substance nor
impose a new interpretive regimen on the field. Quite the contrary, The
Cambridge History of Law in America intentionally calls forth the broad array
of methods and arguments that legal historians have developed. The contents
of each volume demonstrate not just that expansion of subject and
method is common to every period of American legal history but also that
as the long-ascendant socio-legal perspective has given way to an increasing
diversity of analytical approaches, new interpretive opportunities are rife
everywhere. Note the influence of regionalism in Volume I and of institutionalism
in Volume II. Note the attention paid in Volume III not only to
race and gender but also to sexuality. The History shows how legal history
has entered dialogue with the full array of “histories” pursued within the
academy – political, intellectual, social, cultural, economic, business, diplomatic,
and military – and with their techniques.

5 The field has two valuable single-author surveys: Lawrence M. Friedman’s A History of
American Law (New York, 1973; 3rd ed. 2005) and Kermit Hall’s The Magic Mirror.
Neither approaches the range of what is on display here. The field also boasts volumes
of cases and commentary, prepared according to the law teaching “case book” model,
such as Stephen B. Presser and Jamil S. Zainaldin, Law and Jurisprudence in American
History: Cases and Materials (St. Paul, MN, 1980; 6th ed. 2006) and Kermit Hall, et al.,
American Legal History, Cases and Materials (New York, 3rd ed., 2003). There also exist
edited volumes of commentary and materials that focus on broad subject areas within
the discipline of legal history; a preponderance deal with constitutional law, such as
Lawrence M. Friedman and Harry N. Scheiber, eds., American Law and the Constitutional
Order: Historical Perspectives (Cambridge, MA, 1978; enlarged ed. 1988). Valuable in
their own right, such volumes are intended as specific-purpose teaching tools and do not
purport to be comprehensive. Finally, there are, of course, particular monographic works
that have proven widely influential for their conceptual acuity, or their capacity to set
a completely new tone in the way the field at large is interpreted. The most influential
have been such studies as James Willard Hurst, Law and the Conditions of Freedom in
the Nineteenth-Century United States (Madison, WI, 1956), and Morton J. Horwitz, The
Transformation of American Law, 1780–1860 (Cambridge, MA, 1977).
6 Following the tradition of Cambridge histories, each chapter includes only such footnotes
as the author deems necessary to document essential (largely primary) sources. In place
of the dense display of citations beloved of scholarly discourse that Acton’s aesthetic
discouraged, each author has written a bibliographic essay that provides a summary of
his or her sources and a guide to scholarly work on the subject.
The Cambridge History of Law in America is more than the sum of its
parts. The History’s conceptual design challenges existing understandings
of the field. We divide the American legal past into three distinct eras and
devote a complete volume to each one: first Early America, then The Long
Nineteenth Century, and last The Twentieth Century and After. The first volume,
Early America, examines the era from the late sixteenth century through the
early nineteenth – from the beginnings of European settlement through the
creation and stabilization of the American republic. The second volume,
The Long Nineteenth Century, begins with the appearance of the United States
in the constituted form of a nation-state in 1789; it ends in 1920, in the
immediate aftermath of World War I, with the world poised on the edge
of the “American Century.” The final volume, The Twentieth Century and
After, concentrates on that American century both at home and abroad
and peers into the murk of the twenty-first century. Within each of these
broad chronological divisions occurs a much more detailed subdivision
that combines an appreciation of chronology with the necessities of topical
specialization.
Where appropriate, topics are revisited in successive volumes (crime and
criminal justice, domestic relations law, legal thought, and legal education
are all examples). Discussion of economic growth and change is ubiquitous,
but we accord it no determinative priority. To facilitate comparisons and
contrasts within and between eras, sequences of subjects have been arranged
in similar order in each volume. Specific topics have been chosen with an eye
to their historical significance and their social, institutional, and cultural
coherence. They cannot be walled off from each other, so readers will notice
substantive overlaps when more than one author fastens on the same issues,
often to create distinct interpretations of them. History long since ceased to
speak with one voice. In this History, readers are invited into a conversation.
Readers will notice that our chronology creates overlaps at the margins
of each era. They will also notice that some chapters focus on only particular
decades within a specific era7 or span more than one era.8 All this is
intentional. Historians construct history by placing subjects in relation to
each other within the continuum of historical time. Historians manipulate
time by creating periods to organize the placement of subjects. Thus, when
historians say that a subject has been “historicized,” they mean it has been
located in what they consider its appropriate historical-temporal context or
period. Slicing and dicing time in this fashion is crucial to the historian’s
objective of rendering past action coherent and comprehensible, but necessarily
it has a certain arbitrariness. No matter how familiar – the colonial
period, the Gilded Age, the Progressive period, and so forth – no historical
period is a natural division: all are constructs. Hence we construct three
“eras” in the interests of organizational coherence, but our overlaps and the
distinct chronologies chosen by certain of our authors allow us to recognize
different temporalities at work.

7 Chronologically specific topics – the American Revolution and the creation of the republic
in Volume I, the Civil War in Volume II, the New Deal era in Volume III – are treated
as such. Chapters on the legal profession in Volumes II and III divide its development at
the Civil War, as do those, in Volume II, on the state and on industrial organization.
8 Volume II’s chapter on the military deals with both the nineteenth and twentieth centuries,
as do Volume III’s chapters on agriculture and the state and on law and the
environment. The latter chapter, indeed, also gestures toward the colonial period.
That said, the tripartite division of these volumes is intended to provide
a new overall conceptual schema for American legal history, one that is
broad and accommodating but that locates legal history in the contours of
American history at large. Maitland never forgot that, at bottom, just as
religious history is history not theology, legal history is history not law.
Notwithstanding law’s normative and prescriptive authority in “our” culture,
it is a phenomenon for historical inquiry, not the source of an agenda.
And so we take our cue, broadly, from American history. If it is anything,
American history is the history of the colonization and settlement of the
North American mainland, it is the history of the creation and expansion
of an American nation-state, and it is the history of that state’s place in
and influence on the world at large. The contents and the organization of
The Cambridge History of Law in America speak to how law became king
in this America and of the multitudinous empire of people and possibilities
over which that king reigned. Thus we address ourselves to the endless
ramifications, across more than four centuries, of the meaning of Tom
Paine’s exclamation in 1776.
The Cambridge History of Law in America could not have been produced
without the support and commitment of the American Bar Foundation,
Cambridge University Press, and our cadre of authors. We thank them all.
The American Bar Foundation housed the project and, together with the
Press, funded it. The Foundation was there at the creation: it helped initiate
the project by sponsoring a two-day meeting of an ad hoc editorial consulting
group in January 2000. Members of that group (Laura Edwards, Tony
Freyer, Robert Gordon, Bruce H. Mann, William Novak, Stephen Siegel,
Barbara Young Welke, and Victoria Saker Woeste) patiently debated the
editors’ initial thoughts on the conceptual and intellectual direction that the
History should follow and helped identify potential contributors. Since then,
the project has benefited from the support of two ABF directors, Bryant
Garth and his successor Robert Nelson, and the sustained and enthusiastic
interest of the Foundation’s Board of Directors during the tenure of
four Board presidents: Jacqueline Allee, M. Peter Moser, the late Robert
Hetlage, and David Tang. We owe a particular debt of gratitude to Robert
MacCrate for his early support and encouragement. As all this suggests, the
American Bar Foundation’s role in the production of The Cambridge History
of Law in America has been of decisive importance. The part the Foundation
has played underlines its standing as the preeminent research center for
the study of law and society in the United States and its long tradition of
support for the development of American legal history.
Cambridge University Press has, of course, been central to the project
throughout. We are grateful to the syndics for their encouragement and
to Frank Smith and his staff in New York for their assistance and support.
Frank first suggested the project in 1996. He continued to suggest it for
three years until we finally succumbed. During the years the History has been
in development, Frank has accumulated one responsibility after another at
the Press. Once we rubbed shoulders with the Executive Editor for Social
Sciences. Now we address our pleas to the Editorial Director for Academic
Books. But Frank will always be a history editor at heart, and he has maintained
a strong interest in this History, always available with sage advice
as the project rolled relentlessly onward. He helped the editors understand
the intellectual ambitions of a Cambridge history. Those who have had the
privilege of working with Frank Smith will know how important his advice
and friendship have been to us throughout.
Finally, the editors want to thank the authors of the chapters in these
volumes. A project like this is not to every author’s taste – some took
to it more easily than others. But together the sixty authors who joined
us to write the History have done a magnificent job, and we are deeply
grateful to every one. From the beginning our goal was not only to recruit
as participants those whom all would identify as leading figures of our field
but also to include those who, we were confident, would be leading figures
of its next generation. We are delighted that so many of each were willing.
We acknowledge also those who were unable for one reason or another to
see an initial commitment through to the end: their efforts, too, helped us
define and establish the project. And obviously, we owe a particular debt to
those others who came later to take the places of the fallen.
To oversee a project in which so many people have at one time or another
been involved has seemed on occasion like being the mayors of a village.
People arrive and (much less frequently, thank goodness) depart. Those who
settle in for the duration become a community of friends and neighbors.
Over time, one learns much from one’s friends and neighbors about the joys
and vicissitudes of life. One learns who (and whose family) may be ailing,
and who is well. One learns of hurts and difficulties; one revels in successes.
And one may learn, as we did so sadly in August 2006, of an untimely
death. Notwithstanding the demands of his immensely successful career in
academic administration, our colleague Kermit Hall never laid down his
historian’s pen and was an enthusiastic participant in this project. He died
suddenly and unexpectedly. His contributions to the field have been great,
and he is greatly missed.
Throughout, the many authors in this project have responded courteously
to our editorial advice. They have reacted with grace and occasional humor
to our endless demands that they meet their deadlines. Sometimes they even
sent their manuscripts too. Most important, they have striven to achieve
what we asked of them – the general goals of a Cambridge history and the
specific goals of this history, as we have described them in this preface. Their
achievements are evident in the pages of each volume. In an individualistic
intellectual culture, the scholarship on display here demonstrates the
possibilities inherent in a collective intellectual enterprise. In the end, of
course, the editors, not the authors, are responsible for the contents of these
volumes. Yet, it is the authors who have given the History its meaning and
significance.
Michael Grossberg
Christopher Tomlins
1
Law and the American State, from the Revolution to the Civil War: Institutional Growth and Structural Change
Mark R. Wilson
From Tocqueville in the 1830s to scholars in the twenty-first century, most
observers have found the state in the antebellum American republic elusive
and complex. As any student of American history knows, the new
nation that emerged from the Revolutionary War was not ruled by uniformed
national officials. In place of a king the United States had popular
sovereignty and the law; instead of strong central authorities it had federalism
and local autonomy; lacking administrative bureaucracy, it relied
on democratic party politics. In the Constitution, the new nation wrote a
blueprint for government that called for separation rather than conglomeration
of powers. It would prove remarkably successful in endowing the
American state with both flexibility and durability, as Madison and other
founders had desired.
The state in the early United States did not look like an entity approaching
the Weberian ideal-type of the modern state: an organization capable
of enforcing a successful monopoly of violence over a given territory, ruled
through a legal-administrative order. But for all its apparent distinctiveness,
the state in the early United States, no less than its counterparts in
Europe and Asia, performed the fundamental tasks of any state: managing
its population, economy, and territory. The history of how it did so suggests
that the American state in the early nineteenth century was more substantial
and energetic, especially at the national level, than many have suggested.
As Tom Paine famously put it, the Revolution created a new America, in
which law was king. But we should be wary of overemphasizing the importance
of the law in early American governance. We should instead embrace
a broad conception of the law, in which the Constitution, statute law, and
judge-made law all figure as parts of a larger legal order that also included
coercive law enforcement and administration. Certainly, we cannot understand
the state in the early United States without considering the Constitution
and the courts, as well as federalism and party politics. But these institutions
did not alone comprehend the American state between the Revolution
and the Civil War. Along with the structural characteristics that made it
distinctive from a global perspective, the early American state – like other
states – performed major administrative feats that required guns and even
bureaucracy. Often overlooked by students of comparative politics, history,
and law, these less exceptional dimensions of the early American state were
crucial in the formation of the new nation and its survival through the
Civil War.
Generalizing about the early American state poses special challenges,
but also promises significant rewards. As recent political theorists have
emphasized, writing in general terms about any state tends to exaggerate
its coherence. In the case of the United States in particular, any general
discussion of “the state” must recognize the complexities induced by the
occurrence of state action at three levels of governance: not just national, but
state and local too. Here I attempt to avoid confusing these different levels of
state authority by treating them as distinct subjects whose relationships and
relative powers changed over time. Nevertheless, one should not be deterred
from considering what broad conclusions one can reach by examining the
general character of the work of public authorities (whether national, state,
or local) as such. Complexity for its own sake does not get us very far. While
necessarily crude, broader claims may be unusually fruitful when it comes
to the state in the early United States, precisely because its complexity is
already so well understood.
Whereas the conventions of historical and social-scientific writing may
have imbued many states with an artificial coherence, in the case of the early
United States we face the opposite problem. That is, the early American state
is understood to have been so exceptionally weak, decentralized, or otherwise
unusual that it defies the conventions of analysis applied to contemporary
European states. One finds this “exceptionalist” paradigm of American
distinctiveness promoted assiduously after World War II, most obviously
by Louis Hartz in The Liberal Tradition in America (1955). A more refined
version of the argument was advanced by James Willard Hurst in his Law
and the Conditions of Freedom in the Nineteenth-Century United States (1956).
Hurst explained that the early United States was remarkable not for any
“jealous limitation of the power of the state,” but rather because it was a
new kind of state that worked in positive fashion to achieve “the release
of individual creative energy.”1 Hurst comprehended Tocqueville’s most
astute observations about the paradoxical capacity of liberal states to do
more with less better than did Hartz, indeed better than many others since.
But like Tocqueville, Hurst implied that the American state was abnormal.
1 James Willard Hurst, Law and the Conditions of Freedom in the Nineteenth-Century United
States (Madison, 1956), 7.
Decades after Hurst, more recent authorities on the early American state
have broken much new ground, but mostly they still accept American distinctiveness.
Above all, the decentralization of early U.S. political authority,
described (and praised) at such great length by Tocqueville, continues to figure
centrally. Before the late nineteenth century, the United States was a state
of “courts and parties”: those two institutions alone served to coordinate
a radically decentralized political and economic system. Some of the best
new histories of the early American state have outdone Tocqueville in their
assumptions about the hypersignificance of local governance. In the history
of American political economy, meanwhile, the several states continue to
figure as the central subjects, just as they did in the classic monographs
on Pennsylvania and Massachusetts written by Hartz and the Handlins in
the mid-twentieth century. The leading legal historian Lawrence Friedman
summarized the message of a half-century of scholarship on state institutions
and political economy in the antebellum United States as follows: “Nobody
expected much out of the national government – or wanted much.” The
national government “was like the brain of a dinosaur: an insignificant mass
of neurons inside a gigantic body.”
The impotence of national authority and incoherence of state action in
the United States through the Civil War era are part of a well-established
story. But that does not make them correct. Here I take a different direction.
In doing so, I build on the work of a handful of scholars – among
them Richard R. John, Ira Katznelson, and Bartholomew Sparrow – whose
research recommends reconsideration. In their effort to chart the dynamics
of the complex American political system, I argue, students of the early
American state have overlooked the most important single characteristic of
the early United States: its astounding growth. In comparison with European
states, the early American state was confronted with problems arising
from unusually rapid demographic, economic, and territorial expansion.
Between 1790 and 1870, the national population increased from 4 million
people to 40 million. The economy grew roughly twice as fast: between
1820 and 1870 alone, national product increased by a factor of eight. Perhaps
most remarkable of all, the territory over which the early American
state presided expanded from 864,000 square miles in 1800 to nearly 3 million
square miles in 1850. From a gaggle of colonies hugging the Eastern
seaboard in 1776, by the time of the Civil War – less than ninety years later –
the United States had become the peer in population, economic output, and
territorial reach of France, Britain, and Russia.
The early American state was less top-heavy than those others. In 1860,
when all three states had similar numbers of inhabitants, central state
expenditures in Britain and France were roughly five times what they were
in the United States. Nonetheless, along with its tremendous growth in
population, economy, and territory, the early United States saw a remarkable
expansion of state institutions. By 1870, twenty-four new states had
joined the original thirteen, and hundreds of new towns and counties had
been created. National government had undergone significant expansion
and specialization. By 1849, the original executive departments of State,
War, and Treasury had been joined by three more cabinet-level departments:
Navy, Post Office, and Interior. In Congress, a variety of specialized standing
committees had appeared in both houses by the 1810s; the number of
House members had tripled between the 1790s and the 1870s, from 102
to 292. In 1836, Congress reorganized the patent system by establishing
a new Patent Office, which became an important arbiter of technological
innovation. Even the federal judiciary, set in its structure for the most part
in 1789, saw a newcomer by the end of this era: the Court of Claims,
established in 1855 and empowered during the Civil War.
Institutional expansion allowed the early American state to manage its
population, economy, and territory – the three fields of greatest concern to
all modern states. Here I use these three related fields as the means to organize
a multidimensional account of the early American state. My account
confirms some long-established notions and extends – or challenges –
others. For example, students of American history will not be surprised
to learn that early American governmental institutions failed to deliver on
the most radical and egalitarian promises of the Revolution. But what happens
when we probe beyond the obvious racial and sexual inequalities of
early America to consider matters of causation and chronology? In its symbolic
and legal construction of the national population, the early American
state deliberately segmented its population along a color line. Furthermore,
state construction of whiteness and its cognates became more energetic over
time.
In the field of political economy, the pattern of chronological change was
more complex. Here, a non-linear narrative, which considers the activities
of various levels of American government, helps us reconcile a basic dispute
among political and legal historians of the early United States. Both sides in
this dispute have managed to assemble powerful evidence: on the one hand,
of considerable state promotion and regulation; on the other, of impressive
growth – not only in America, but around the Atlantic world – in capitalist
enterprise. But we rely too heavily on evidence from the 1830s and early
1840s for broad characterizations of the development of the market economy
during the whole antebellum era. If we consider more carefully the final
years of the antebellum period and if we look beyond the various states
to both local and national initiatives, we find that the oft-discussed trend
toward private enterprise during the latter part of this era was actually quite
weak.
In the governance of population and economy, the national state shared
the stage with the various states and localities. In the governance of territory,
on the other hand, the national state – which contemporaries frequently
called “the General Government,” if not “the Union” or simply “the United
States” – was the leading player. It was the national state, through treaties
and military operations, which claimed vast new territories during this
period. And it was the national state that created and administered the
laws and policies that transformed much of this territory into land. The
country’s greatest landowner and realtor, the national state transformed
the landscape and the lives of the millions of people who settled beyond
the original thirteen states by extending the common law of property
over the continent and creating administrative agencies necessary to divide
vast spaces into manageable commodities. By the middle of the nineteenth
century, territorial governance and consolidation stood as the early American
state’s central accomplishment and central problem. That this field of
governance touched the lives of the entire population, and not only a minority
in the far West, became especially evident by the end of this period,
when disastrous new territorial policies in the 1850s led directly to the
Civil War.
Taking fuller measure of the early American state leads us to an unexpected
conclusion: that the early national state, dismissed by many observers
then and since as extraordinarily weak and irrelevant, was in fact the most
innovative and influential level of governance in the multitiered American
political and legal order. Between 1861 and 1865, the national state
extended its influence significantly, but this extension was built on an
already considerable foundation. The emergence of a powerful national state
in America did not occur during or after the Civil War, but before.
I. POPULATION
Historians and legal scholars lead us to consider the early American state’s
management of its population in terms of two hypotheses. First, a variety of
state institutions worked to individualize the populace; over time the state
came to recognize and have a more direct relationship with the individual
human beings residing in its territory, including those who lacked full citizenship
rights. Second, the early American state increasingly sorted the
population according to discriminatory racial categories, which simultaneously
expanded the boundaries of a favored social class identified as white
and increasingly denigrated those persons who fell outside the boundaries
of this category.
Any discussion of the early American state’s activities in the field of
population may logically begin with a consideration of the Constitution and
the census. Although the racialization of the population had certainly been
proceeding for decades in British North America before the Revolution, the
language of the Constitution suggests that the infant American state was not
yet devoted to full-blown white supremacy. The Constitution’s most direct
sorting of the population is found in Article I, in which it describes the
rules for determining the apportionment of the House. Here, the Constitution
differentiates among three social categories: “free persons,” “Indians
not taxed,” and “all other persons.” For apportionment purposes, as is well
known, the number of people in the last of these categories – a euphemism
for slaves – was multiplied by three-fifths; members of the second category
were excluded altogether. The Constitution refers to neither sex nor color.
Thus, while it certainly provides tacit recognition and even support for
slavery, the basic blueprint for the new national state uses condition of
servitude, rather than race, as a social sorting device.
By contrast, the census, which should be understood as one of the institutions
of the early American state with the greatest symbolic power, used the
term “white” from the beginning. The first U.S. national census, required by
the Constitution, was conducted in 1790, a decade before the first national
censuses of Britain and France (although after the pioneering efforts of
Sweden). It divided the population into “white,” “other free,” and “slave.”
The white population was further divided into three categories: females, and
males over and under the age of 16. By 1820, the census had dropped the
adjective “other” for “colored.” In subsequent decades, increasingly complex
census schedules would continue to divide the population according to
the same handful of basic variables: color, sex, age, condition of servitude,
and place of residence. In 1830, it began to enumerate persons described
as deaf, dumb, and blind; in 1840, it counted “insane and idiots” as well.
In 1850, the census added a new racial subcategory, “mulatto,” which was
left to field enumerators to interpret. (In 1850, more than 11 percent of
the people falling under the larger category of “colored” were placed in this
new subcategory.)
As sectional tensions increased, census regional and racial data were
paraded for a variety of political purposes. When poorly designed 1840
census forms led enumerators in some Northern states to register hundreds
of non-existent “insane and idiot” African Americans, some Southerners
seized on the false data as evidence of the salutary effects of slavery. Another
wrongheaded interpretive leap, which spoke to the increasing dedication to
the idea of white supremacy within the boundaries of the state during this
period, came from the census itself. In 1864, as he presented the final official
population report from 1860, long-time census chief Joseph Kennedy
hailed figures showing that the nation’s free white population had grown
38 percent over the preceding decade, in contrast to 22 percent growth
among slaves and 12 percent for free blacks. Disregarding the inconvenient
fact that the free black population was on a pace to double in size over
the next century, Kennedy announced that the data indicated an ongoing
“gradual extinction” of “the colored race.”
Along with this apparently increasing emphasis on racial hierarchy and
difference, the development of the census over time suggested a more general
shift in the relationship between state and population in antebellum
America, toward individualization. As we shall see, this was evident in the
development of family law across the various states. At the census, the key
innovation occurred during a massive expansion of data collection in 1850,
when enumerators first recorded the names of individuals other than household
heads. Pushing toward a new level of social knowledge, the census
forged a direct relationship with named individuals, including women and
children. Here, as elsewhere, the state’s willingness to have its relationship
to persons mediated by a patriarchal or corporate head was declining. At the
same time, there was necessarily a corresponding increase in bureaucratic
capacity. While the 1840 census was processed in Washington by a clerical
force of only about 20, the 1850 tally required 170 clerks. According to
its leading historian, this made the Census Office, at its peak, “the largest
centralized clerical operation of the federal government at the time.” There
were no comparable operations in the private sector during this era.
More important than its bureaucratic achievements was the symbolic
work that the census did. Again, racial sorting had been going on throughout
the colonial period (both in popular culture and in law); it was certainly
not pioneered by the census or any other post-Revolutionary state institution.
But through its administrative and legal institutions, the early
American state encouraged the reproduction of a national social order in
which racial hierarchies became more important over time, rather than less.
Through the census and other legal and administrative institutions, the
early American state encouraged its populace to think in terms of whiteness
and non-whiteness in a way that the Constitution did not.
While colonial developments made it likely that the new national state
would continue to emphasize racial categories in the definition of its population,
other available categories were eschewed. Most important among
these was religion. Here, in contrast to its operation with regard to race,
the symbolic power of early national state institutions was used against the
entrenchment of poisonous social divisions. The census that so diligently
classified according to sex and race avoided interrogation of religious identity,
even in its detailed, individualized schedules of 1850. This need not
have been the case. Before the Revolution, seven of the thirteen colonies had
state-supported churches; in Europe, of course, established religion was the
rule. But the immediate post-Revolutionary period proved one in which
disestablishment was especially attractive. Many American leaders were true
Enlightenment men whose qualifications as Christians were dubious. Many
members of fast-growing non-established churches, such as Baptists and
Presbyterians, found the end of established Congregationalist and Anglican
churches an attractive prospect. Virginia led the way with a 1786 law
“for Establishing Religious Freedom” that banned government assistance
to any church and established a policy of tolerance toward non-Christians.
Soon after, the Constitution, which made no reference to a deity at all,
proscribed religious tests for federal officeholders; the First Amendment,
of course, prohibited the federal government from religious establishment.
By 1802, when President Jefferson wrote a letter to a Baptist congregation
in Danbury, Connecticut, referring to “a wall of separation between Church
and State” erected by the Constitution, the national state’s refusal to define
its population according to religious categories was clear.
Over time, and despite a marked rise in popular Christian enthusiasm
during the first decades of the nineteenth century, the early American state
moved further away from the religious sphere. To be sure, the Constitution
had never banned state-supported churches or religious tests at the state
level.2 Massachusetts did not abandon establishment until 1833. The early
national state lent indirect assistance to religious authorities in a number of
ways, such as offering tax exemptions for churches and providing military
chaplains – two measures opposed by the strictest of disestablishmentarians,
including James Madison. And in People v. Ruggles (1811), a New York case,
leading American jurist James Kent upheld the blasphemy conviction of
the defendant, who had reportedly said, “Jesus Christ was a bastard and his
mother must be a whore.” Such speech, Kent ruled, was “in gross violation
of decency and good order.”3
The generation that followed Kent, however, was less willing to use
state power to defend Christianity. By the 1840s, when one Pennsylvania
judge mocked the idea of a “Christian state” in America, blasphemy
convictions were exceedingly rare. The direction of change was clear: the
whole country moved steadily toward the standard established first by pro-toleration
colonies like Pennsylvania and then by the new national state and
state governments such as Virginia in the immediate post-Revolutionary
period. Certainly, churches and their members could have great political
influence, and they often lobbied successfully for legal change to support
2 In a 1947 case involving the use of state funds to transport children to parochial schools,
the Supreme Court approved such use in a 5–4 decision, but Justice Hugo Black’s majority
opinion claimed – erroneously, it seems clear – that the establishment clause applied to
the various states, as well as the federal government. Everson v. Board of Education, 330
U.S. 1 (1947).
3 People v. Ruggles, 8 Johns. (N.Y.) 290 (1811).
temperance or other reform causes. But even when it came to public policy
decisions in which Christians might have been expected to prevail easily
via democratic politics, the effective secularism of the state – rooted, it is
worth noting again, at least as much in anti-establishment and anti-clerical
sentiment as in what might be called modern secular thought – proved surprisingly
robust. In 1830, Congress failed to satisfy hundreds of petitioners
who demanded the end of Sunday mail deliveries, which caused many post
offices to remain open on Sundays. In the vigorous debates on this issue,
Senator Richard M. Johnson of Kentucky, a post office committee chair and
future U.S. vice president, not only defended the Sunday mails as a necessary
element of an efficient national communications system, but went so
far as to refer to the equal rights of Jews and pagans. He warned that his
opponents were flirting with “religious despotism.” Although some Sunday
mail routes disappeared in the coming years (the last post office open on
Sunday was closed in 1912), Johnson’s victory over the petitioners in 1830
stands as a notable example of the early national state’s unwillingness to
protect favored segments of the population according to religion.
When it came to race, the reverse was true. From the beginning, but
increasingly over time, statutes, constitutions, and court decisions promoted
the formation of a privileged class of white men. In some areas, at
least, early actions by the national state encouraged the subsequent extension
of white privilege by state lawmakers. Unlike the Constitution, early
Congressional statutes encouraged Americans to associate whiteness with
full citizenship. In its 1790 Naturalization Act, Congress offered full citizenship
to “all free white persons” with two years of residence in the United
States. The Militia Act of 1792 required every “free able-bodied white male
citizen” to participate in military service. In the coming decades, as new
state constitutions denied suffrage and other civil rights to free blacks,
some proponents of these measures would justify the racial discrimination
by claiming that the absence of blacks from the ranks of the militia demonstrated
that they had never been full citizens.
The rising legal inequalities between white and black developed simultaneously
with growing egalitarianism among whites. During the first half
of the nineteenth century, tax or property requirements for suffrage disappeared
in state after state. Decades ahead of England, the United States
experienced the rise of a popular politics. The presidential election of 1840
saw a total of 2.4 million votes cast; just sixteen years earlier, John Quincy
Adams had managed to become president with fewer than 109,000 votes.
Well before the Civil War, then, universal white male suffrage had become
the rule. Full citizenship was now a function of race and sex; it did not
depend on birth, wealth, religion, or nationality.
Some would have had it otherwise. Throughout the period, there was
plenty of popular anti-Catholicism, from the published diatribes of the
inventor Samuel Morse to major mob actions in Boston and Philadelphia.
From the heyday of the Federalists to the rise of the Know Nothings in
the 1850s, political nativism was easy to find and sometimes succeeded
in creating new legislation. But all in all, U.S. immigration and citizenship
law remained remarkably open to European men. With the Naturalization
Act of 1790, Congress provided for citizenship after two years’
residence, an inclusive and open system that at least indirectly challenged
the sovereignty of European states by encouraging their subjects to depart.
Although the residential standard soon became five years, efforts to establish
much more restrictive systems were defeated on several occasions. Throughout
the period, the national government and the various states both regulated
immigration through a variety of laws, including the federal Passenger
Acts that limited the numbers of arrivals by setting tonnage requirements
and the states’ efforts to force shipmasters to accept liability for potential
social welfare spending on the newcomers. But these rules did not prevent
some 2.5 million people, mostly Irish and German, from coming to
the United States during the decade starting in 1845 – one of the largest
waves of immigration in all of American history. Overall, the governmental
institutions that these people encountered in the United States tended to
promote white solidarity, rather than divisions among Europeans. Even as
the Know Nothings won short-term victories in New England, for example,
many Midwestern and Western states were allowing non-naturalized
white aliens to vote.
While the circle of white citizenship expanded, the legal denigration of
those outside it also increased. This was true even for slaves, in the sense
that the well-established institution of slavery, which seemed in the immediate
post-Revolutionary period to be on the defensive, became more legally
entrenched over time. Before the 1810s, proponents of emancipation had
reason for optimism. In 1782, the Virginia legislature legalized manumission,
which had been banned in the colony earlier in the century; other
Southern states also allowed masters to free their slaves. Meanwhile, in the
North from 1780 to 1804 the states abolished slavery altogether, though
often with gradual emancipation plans. In 1807, when Congress banned
slave imports, the vote in the House was 113 to 5. During the first quarter-century
after the Revolution, then, the early American state did relatively
little to promote slavery in an active way, although Southern slave owners
were always extraordinarily well represented in all three branches of the
national government.
By the antebellum years, by contrast, many Americans became convinced
that a variety of governmental organizations, including Congress and the
federal courts, were acting positively in favor of slavery. To be sure, there was
some evidence to the contrary. For much of the 1840s and 1850s, the U.S.
Navy operated an African Squadron, which cooperated with a more active
British naval force in an effort to interdict the slave trade. And many
Northern states had enacted personal liberty laws, which challenged the
interstate privileges of slave owners ordained in the Constitution and the
Fugitive Slave Act of 1793. But even before 1850, when Congress enacted a
stronger fugitive slave law, most of the evidence suggested that slavery was
gaining legal support. In 1820, South Carolina banned owners from freeing
any slave during the owner’s lifetime; by the 1850s, most Southern states
had blocked manumission completely. To the dismay of the members of
the American Anti-Slavery Society, established in 1833, Congress adopted
a “gag rule” in 1836 that officially tabled any petitions on the subject of
slavery. Six years later, in Prigg v. Pennsylvania (1842), the U.S. Supreme
Court upheld the 1793 Fugitive Slave Act, ruling the 1826 Pennsylvania
personal liberty law unconstitutional. (Undaunted, the state responded by
passing a new personal liberty statute.) New developments during the 1850s
would give Northerners even more reason to think that a minority in the
slave South was using the state to promote slavery against the wishes of a
national majority.
Even more than developments in the law and politics of slavery, the
changing legal status of free blacks best demonstrated the early American
state’s growing devotion to organizing its population in a racial hierarchy.
By the end of the antebellum period, most Northern states had joined
Southern states and the federal government in making whiteness a qualification
for full citizenship. This marked a distinct change from the post-
Revolutionary years, when the laws of eleven states allowed free black men to
vote. Although we should not romanticize race relations in the Early Republic,
these early suffrage laws suggest that in the aftermath of the Revolution
race was not fully coupled to citizenship. (The relationship between citizenship
and suffrage was no less complicated.) This would soon change,
as popular discourse and law both became increasingly racist. As Harriet
Martineau observed in her 1837 book Society in America, the Revolutionary
War general, the Marquis de Lafayette, had expressed great “astonishment
at the increase of the prejudice against color” when he returned to the
United States in 1824.4 By that time, many states had reversed their previous
policies by explicitly denying the vote to free blacks. Even slave states
became stricter in this area: it was not until 1834 and 1835, respectively,
that Tennessee and North Carolina passed laws ending black suffrage. In
the 1820s, as it moved to give the vote to white men regardless of wealth,
New York imposed a new $250 property requirement on black men. In
4 Harriet Martineau, Society in America [1837], ed. Seymour Martin Lipset (Gloucester,
MA: Peter Smith, 1968), 123.
1838, Pennsylvania – where Tocqueville had noted only a few years earlier
that the “tyranny of the majority” created a kind of de facto disfranchisement
– made whiteness an official qualification for voting. Ohio’s new 1851
constitution did the same; so did Oregon’s original constitution in 1857.
Meanwhile, the majority of states passed laws prohibiting free blacks from
entering them at all. By the eve of the Civil War, only five New England
states, in which lived only 4 percent of the free black population, failed to
link whiteness and suffrage. We should not exaggerate the novelty of Chief
Justice Roger Taney’s decision in Dred Scott v. Sandford (1857), declaring
that those outside the “white race” had no citizenship rights in the United
States. In some ways, this was merely the logical extension of the principles
that both Northern and Southern states had been adopting over the
preceding decades. Three years earlier, Congressman John Dawson of Pennsylvania
had already declared that the “word citizen means nothing more
and nothing less than a white man.”5
From census methods to suffrage laws, most governmental institutions
in the field of population and personal status enforced distinctions of sex as
well as race. In part because these two categories overlapped, however, the
state’s changing relation to women followed a different trajectory than it
did with persons designated non-white. While women were never allowed
full citizenship rights, they were increasingly provided with legal rights
that brought them into a more direct relationship with the state, just as
the individualized 1850 census schedules implied. This is not to overlook
the considerable inequalities imposed by the state throughout this era,
which were thoroughly criticized at Seneca Falls in 1848 and in a wave
of subsequent conventions for women’s rights. Indeed, when it came to
suffrage, there were grounds here too for a narrative of declension: in New
Jersey, propertied single women had enjoyed the vote from the Revolution
until 1807, when they were disfranchised even as the vote was extended to
a wider circle of men.
While the champions of woman suffrage would not begin to triumph
until well after the Civil War, in other areas the antebellum state began to
treat women more as individual subjects. This was evident in both property
law and family law. Under the traditional coverture doctrine, husbands were
allowed full legal control over the property brought to the relationship by
their wives, who in the eyes of the state had no independent economic status.
But starting with Mississippi in 1839, married women’s property laws
proliferated. By 1865, twenty-nine states had enacted laws allowing wives
more control over property. While conservative courts continued to favor
husbands in property cases, this was still a significant change. Immediately
5 Congressional Globe 33rd. Cong., 1st Sess., Vol. 28 (28 February 1854), 504.
before the Civil War, Massachusetts and New York went one step further
by passing laws allowing married women control over their wages. When
it came to divorce and child custody, there was also a clear trend toward
liberalization. While fathers continued to be favored until the end of the era,
mothers were increasingly seen by the courts as deserving consideration in
child custody cases.
There were many reasons for the changing legal status of women during
these years, which surely included the efforts of early feminists, as well
as the long-run revolutionary potential of Revolutionary rhetoric. But the
rise of whiteness as a social and political marker also contributed to the
change. Although the hierarchy with which the early American state came
to imagine its population clearly privileged white men above all others,
white women enjoyed at least a residual effect of the growing association
between race and legal rights. In this sense, race trumped even sex, to say
nothing of alternative social categories such as religion, in the politics of
population in the early United States.
II. ECONOMY
The role of the early American state in the economic sphere is a subject that
has engaged scholars for several generations. It was also, of course, a matter
of great concern to the Americans who lived during the years from the Revolution
to the Civil War. National politics, as well as those at the state
and local levels, often turned on debates over the state’s proper economic
role. From Jefferson and Hamilton to the Jacksonian Democrats and the
Whigs, leading statesmen and major political parties identified themselves
by articulating specific programs of political and economic policy; much
of the work of courts and legislatures pertained directly or indirectly to
this issue. To most observers, it was evident that commerce and industry
in the new nation promised unprecedented growth, as well as disorder. But
Americans’ differing understandings of the proper balance between energy
and stability (to use the language of the Federalist) and the proper distribution
of power in the economic sphere made political economy a contentious
subject.
Historians have debated three distinct narratives of the development of
early national political economy and law. The first stresses the growing
tendency of legislators and courts to abandon traditional regulations and
common law doctrines in a way that facilitated the development of private
capitalist enterprise. The second, largely in reaction to the first, emphasizes
the continuing robustness of government regulation and republican moral
economy. A third narrative, less linear than the first two, uses the history
of federal and state policy on transport infrastructure to describe a rise and
fall of government promotion and administration of enterprise during this
period.
Each of these three narratives is valuable. Together they tell us a great
deal about the direct and indirect activities of the early national state in the
field of economy. Each, however, projects a story that is excessively linear
and rather narrow. Histories that stress the continuity of regulation and the
traditionalism of courts successfully demonstrate the defects of a narrative
in which law increasingly serves entrepreneurial ends, but turn a blind
eye to clear evidence of trends in the direction of deregulation. Studies
that concentrate on the crucial subject of internal improvements, on the
other hand, exaggerate the rise of privatization in the late antebellum era
by assuming, mistakenly, that trends in the 1830s and 1840s continued
into the last decade of the period. Nor, in any case, was all the world Ohio
and Pennsylvania; nor were internal improvements the only important field
for state enterprise. Histories that point to a decline of state enterprise and
state promotion sit uneasily with the record of state activity in Southern and
Western states and with the work of national and local government. While it
is indisputable that competitive capitalism and private capital had become
more important over the course of this period, government enterprise and
state promotion remained an essential part of the early American political
economy, all the way into the Civil War.
As several generations of historians have taken great pains to establish, the
early United States should not be understood as some kind of libertarian
laissez-faire paradise. The state was a major economic actor during the
antebellum period, not only as a promoter of internal improvements and
other enterprises that might have been left to the private sector but also
as a regulator. Municipal regulation enforced by local and state courts was
particularly vigorous, much of it lasting through the end of the period. The
early American state did not leave the problems of local road building, fire
protection, pollution, and public health to private markets. Instead, local
officials and judges drew up and enforced elaborate lists of regulations,
which they saw as legitimate manifestations of state police power necessary
to maintain harmony and order. For every statute or court decision that
served to promote capitalist enterprise during this era, evidently there was
another that bolstered traditional arrangements or even demanded more
public responsibility from private entrepreneurs.
For anyone laboring under the illusion that political economy and law in
the early United States were either overwhelmingly laissez faire or unambiguously
dedicated to advancing the interests of leading merchants and
industrialists, accounts of considerable and continuing regulation serve as
an especially important corrective. But they fail to tell the whole story. To
be sure, late antebellum cities regulated, just as their colonial predecessors
did. Courts often served as a conservative force in early America, just as
Tocqueville said they did. But the era was shaped by powerful historical
tides that ate away at older arrangements. Even at the municipal level, the
regulatory environment changed dramatically. Take, for example, one of the
most important everyday manifestations of state power: the regulation of
food markets. In the 1790s, many cities and towns confined produce and
meat sales to exclusive state-owned markets; they also fixed prices for bread.
By the 1830s and 1840s, these measures were dropping away as food marketing
became increasingly privatized, first illegally, and then under legal
sanction. In New York City, the common council responded in 1821 to
years of pressure from bakers by substituting standard loaf weights for fixed
prices. By the 1850s, New York mayor Fernando Wood openly rejected any
vestiges of “the practice of the old cities of Europe,” hailing the privatization
of meat marketing as a superior system. This was just one important
indicator of the decline of traditional state regulation.
Outside the field of municipal regulation, the direction of state policy
ran even more clearly in favor of competition and innovation. Business
corporations, for instance, became increasingly common and less bound to
public oversight and public purposes. In the 1790s, most corporations were
non-profit organizations; they were widely understood as highly regulated
public or semi-public entities. But by the middle of the nineteenth century,
several states had passed general incorporation laws, which allowed
businesses to incorporate legally without applying to state legislatures for special
charters. Meanwhile, courts increasingly supported state charters of new
corporations that competed with older ones, which had previously enjoyed
a monopoly. As it claimed broad federal powers over commerce in Gibbons v.
Ogden (1824), the Supreme Court had ruled against a steamboat monopoly
chartered by New York State. But a more direct blow to the old monopolists
came from the Taney court in the case of Charles River Bridge v. Warren Bridge
(1837), which upheld a Massachusetts court ruling that rejected exclusive
franchise in favor of competition.
In property law, state courts moved to favor development over stasis. This
was evident in judges’ changing attitudes toward the use of streams and
rivers, which became increasingly important – especially in the Northeast –
as potential sources of industrial power. In colonial New England, farmers
and iron makers had struggled over water use, with each side winning
significant victories from the legislature. But in the nineteenth century,
courts became increasingly sympathetic to the arguments of industrialists,
who claimed that the economic benefits of a new mill outweighed the costs
to farmers and fishermen downstream. The courts’ changing understanding
of this field was evident in the Massachusetts case of Cary v. Daniels (1844),
where the court stressed the public benefits of economic development over
traditional usages and rights.
In the fields of contract and labor law, the state moved away from a conservative
paternalism and toward a liberal political economy that imagined
a market consisting of countless dyads of freely associating individuals.
This was not at all the case, clearly, when it came to slavery. But elsewhere,
courts came to favor competition, mobility, and efficiency. This doctrine
could benefit employees, who by the eve of the Civil War had become
the majority of the American labor force. By the 1830s, for example, an
employee who wished to leave a job in the middle of the term stipulated in
a contract would almost certainly not be compelled by a court to serve out
his or her term. Instead, he or she would face a monetary penalty of forfeited
wages, which in many cases might be preferable to compelled service or
jail. And the courts’ growing interest in promoting economic competition
could sometimes even work in favor of labor unions. In the Massachusetts
case of Commonwealth v. Hunt (1842), the state’s highest court overruled
a lower court’s ruling that a union of boot makers was illegal under the
common law doctrine of criminal conspiracy. Unions and even the closed
shop were permissible, ruled the Massachusetts high court.
But even the most worker-friendly decisions of antebellum courts left
plenty of room for anti-union rulings in subsequent cases. By the 1870s
certainly, courts were routinely ruling against unions. More broadly, in
the context of an ongoing process of industrialization in which economic
power was increasingly concentrated, the move away from concerns about
equity in contract and labor law served in many cases to favor employers
over employees. While customers or passengers were often successful in
winning tort cases against businesses, employees – who were understood
to have agreed to at least a temporary condition of subordination – fared
less well in the courts. In the well-known Massachusetts case of Farwell v.
Boston & Worcester Railroad Co. (1842), for instance, the court ruled against
an employee whose hand was crushed in a workplace accident. Such cases
demonstrated that, while the changing legal environment promoted the
development of an increasingly flexible labor market, employees’ formal
privileges and powers in the workplace often failed to extend much beyond
their ability to quit.
While state and federal courts tended increasingly to favor mobility,
competition, and innovation in many fields of the law, state and federal
legislatures also acted deliberately to promote economic growth. Here,
there was considerable disagreement about the means by which government
– and which level of government – should act. This debate played out
most spectacularly in the fields of banking, communications, and internal
improvements, which were among the most important political issues of
the day at the local, state, and national levels. While the development of the
political economy of banking and transport infrastructure did not proceed
in a linear fashion, between the Revolution and the Civil War there had
been a notable rise and fall of direct government administration in these
fields; in communications, the change was less dramatic, but moved in the
same direction.
Banking
In banking, of course, one of the most important developments was President
Andrew Jackson’s campaign against the Bank of the United States,
which led to the rise of “free banking” in the states. Chartered by Congress
in 1791, the first national bank was a semi-public institution, in which the
United States held a 20 percent ownership share. In 1811, this first bank
was allowed to die, by a one-vote margin in the Senate. Five years later, after
a war in which a national bank was sorely missed, Congress chartered the
Bank of the United States anew, again with the federal government owning
a one-fifth share. Easily the largest bank and largest business corporation
in the country, the Bank had considerable indirect power over the money
supply. It also had a large public profile. Protected from state-level taxation
by the Supreme Court’s decision in McCulloch v. Maryland (1819), the Bank
was an embodiment of federal and Federalist power, well after the death
of Hamilton and the rise of the Jeffersonian majority. Owned largely by
private investors – many of them overseas – and often promoting deflation
through conservative reserve policies, it was a prime target for attacks by
populists and soft money men. Jackson, who issued a surprising challenge
to the Bank in his first presidential message to Congress in 1829, went
to open war against it in 1832, when he vetoed a bill that would have
renewed its charter. Attacking the Bank of the United States as a monster
that oppressed the common man, Jackson won a landslide victory in the
elections that fall. Then, by moving U.S. Treasury funds into twenty-three
state-chartered “pet banks,” Jackson ended the national state’s support for
the nation’s most powerful financial institution. In 1836, Congress refused
to renew its charter.
The death of the Bank of the United States demonstrated the Jacksonians’
ideological commitment to the decentralization of economic power.
Decentralization was certainly not the same thing as laissez-faire or anti-developmentalism.
In banking, as with corporations more generally, the
early American state came to favor a policy of competition via low barriers
to entry. Beginning with Michigan in 1837 and New York in 1838, a
total of eighteen states passed “free banking” laws in the antebellum era,
allowing the formation of banks without special charters from the legislature.
These banks were still subject to state regulation, which normally
required that any notes they issued be backed by government bonds. By the
late antebellum era, then, the national state had little control over money
supply. There was no national currency, not even of the limited sort that
the Bank of the United States had effectively provided in the 1820s, and
Treasury funds were strictly segregated from the banking system. Equally
important, the state had little symbolic presence in this field. Awash in a
bewildering array of bank notes issued by institutions all over the country,
the United States was not yet bound together by the greenback.
Communications
In the field of communications, the early United States provided considerable
direct and indirect subsidies through a world-class postal service
and liberal press laws. With the Post Office Act of 1792, Congress created
what would quickly become a giant state enterprise; for the next eight
decades the postal system was rivaled only by the military in its reach and
cost. (Unlike the military, the postal system came close to paying for itself:
although it absorbed $230 million in U.S. funds before the Civil War, it
brought in $171 million.) By 1828, there were about 8,000 post offices in
the United States, serving an area of 116,000 square miles and delivering
14 million letters and 16 million newspapers a year. Considerably larger
than the postal systems of Britain and France, to say nothing of Russia, this
national state enterprise dwarfed any governmental institution at the state
or local level. And its influence clearly went well beyond its sheer economic
size. To the extent that the early United States came to be bound together
culturally during these years, across regional and state boundaries, it was
due in large part to the communications network managed by the Post
Office Department.
Certainly, the American state was especially active in giving its subjects
access to information. Thanks to postal subsidies and low taxes on publishers,
by the 1830s, per capita circulation of newspapers in the United States
was triple that in Britain. But in communications, as in banking, it was
possible to see a retreat of the state during the antebellum period. Telegraphy,
originally sponsored by the national government, became a private
concern in the 1840s.
Internal Improvements
In the field of internal improvements, historians have charted a similar rise
and fall of direct government promotion at both the national and state levels.
Here again, the Jacksonians worked to reduce the national state’s presence
in the economic field. Before it crystallized as the “American System” identified
in the 1820s with Henry Clay and President John Quincy Adams,
a policy of major national state assistance to transport infrastructure had
been advocated by several leading American statesmen, including Albert
Gallatin and John C. Calhoun. In 1817, Calhoun urged Congress, “Let
us . . . bind the Republic together with a perfect system of roads and canals.
Let us conquer space.” The support in Washington for such a policy, always
shaky, crested in the 1820s. In 1822, President Monroe signed a bill that
provided for the extension into Ohio of the National Road, which had been
originally authorized in Congress in 1806 and began in earnest after the War
of 1812. Then, with the General Survey Act of 1824, Washington tapped
the Army Corps of Engineers – really the only group of formally trained
engineers in the country – to work on internal improvements projects. Over
the next decade and a half, the military engineers surveyed fifty railroads.
Meanwhile, President Adams, who called not only for more federal aid to
canals but also for a national university and the adoption of the metric
system, went well beyond what Congress was willing to support. His successor,
Jackson, signaled his rejection of the American System with an 1830
veto of an extension of the National Road into Kentucky, as well as with
his war against the Bank of the United States. Although federal internal
improvements spending continued to be high under Jackson’s watch, there
was a significant shift in resources toward the western part of the country,
which received large appropriations for roads and river improvements. Not
until 1837, with the economy in recession and President Van Buren in
office, was there a sharp drop in federal spending in this field. All in all,
from 1790 to 1860, the federal government distributed about $43 million
in direct outlays for internal improvements, plus another $77 million in
indirect grants, including land grants and a major distribution to the states
in 1836 of the Treasury surplus.
State-level outlays on internal improvements during these years were even
higher. And here too, historians have found it easy to construct a narrative
of early action followed by retreat. While the states did invest in turnpikes,
railroads, and other infrastructure projects, they did the most with canals.
From 1815 to 1860, of the $188 million spent on canals in the United
States, about three-quarters of the money came from governments, mostly
at the state level. The Erie Canal, begun in the 1810s and completed in
1825 for about $7 million, was a spectacular success that led other states to
emulate New York’s example. After Jackson replaced Adams in the White
House in 1829, it became clear that the states could not expect much aid
for canals from Washington. The states responded with massive borrowing
to finance their canal projects, many of which faced more difficult terrain
and lower anticipated revenues than the Erie Canal. By 1840, the various
states had accumulated $200 million in debts, a thirteen-fold increase on
their debt burden of twenty years before. In 1841–43, a total of eight states
and one territory defaulted, enraging the British investors who held most
of the debt. Over the next decade and a half, eighteen states altered their
constitutions to limit state outlays and indebtedness. The canal era was
over.
Or so it has seemed. By using a chronological frame running from the
1810s through the 1840s, and by concentrating on the fields of banking and
internal improvements, it is easy to describe a narrative of the rise and fall
of state enterprise in the early United States. But this story should be questioned.
Even in the field of internal improvements, government continued
to be quite active. In the 1850s, a Democrat-majority Congress passed a new
river and harbor bill, authorized four separate surveys for the transcontinental
railroad, and provided large land grants to two railroads in Alabama as
well as a 2.5 million-acre grant to the Illinois Central Railroad – to become
one of the nation’s leading lines. Many of the various states, like the national
government, continued to invest in transport infrastructure. In 1859, New
York spent more than $1.7 million, or half the state budget, on its canals.
True, only about a quarter of the $1 billion invested in U.S. railroads by
1860 came from public sources, whereas close to three-quarters of canal
funds came from government; but in total the actual public moneys spent
on railroads were about as much as the canal outlays. In the late antebellum
era, several Southern states promoted railroads with considerable energy.
Whereas Pennsylvania spent about $39 million on canals and only about
$1 million on railroads before 1860, Virginia’s outlays were $14 million
for canals and $21 million for railroads. In Georgia, where the Western &
Atlantic line was fully state owned, public funds accounted for half of the
$26 million invested in railroads by 1860. Across the antebellum South,
more than half of all investment in railroads came from government.
Most public spending on railroads came from local governments, rather
than the states. State support for internal improvements did not disappear
after 1840, in other words, but shifted away from the state governments
toward the local level. In Pennsylvania alone, local governments raised about
$18 million for railroads. In 1840, local government debts associated with
internal improvements stood at about $25 million; by 1860, they had risen
to $200 million – the same amount that the states had owed at the height
of the canal finance crisis.
Outside the field of internal improvements, other activities of local government
also suggest deficiencies in a narrative of a rise and fall of state
enterprise during this era. When it came to police and education, two of
the most important areas of practical state activity, there was no trend in
the direction of privatization, but rather the opposite: a significant increase
in state enterprise. During the 1840s and 1850s, the country’s largest cities
abandoned informal, voluntary watch systems for large, professional, uniformed
police forces. In 1855, Philadelphia counted 650 full-time police
officers, organized into sixteen districts. This was a powerful new governmental
institution, which embodied a rather sudden shift away from a less
formal administration of municipal criminal justice, in which politicians
and private citizens had formerly exercised considerable discretion.
Even more impressive was the continuing expansion of state enterprise in
the field of education. Federal land policy, which provided the various states
with nearly seventy-eight million acres for the support of public schools,
helped the United States join Prussia during this era as a world leader in
public education. But the most important work was done by state and
local governments. At the state level, there was a significant increase over
time in school administration and spending. Starting with Massachusetts
in 1837, many states created boards of education, which regulated local
efforts. In the South as well as the North, education took up an increasing
share of state budgets: during at least some years in the 1850s, spending
on schools and universities accounted for at least a quarter of all state
expenditures in Alabama, Connecticut, Louisiana, Michigan, New Jersey,
North Carolina, Pennsylvania, Tennessee, and Wisconsin. Overall, the fraction
of state budgets devoted to education rose from an average of 4 percent
in the 1830s to 14 percent in the 1850s. Even more governmental
activity in the field of education occurred at the local level, where public
enterprise became much more important over time, rather than less. In
New York City, one key shift occurred in 1842 when the city established an
elected Board of Education, taking the business of public schooling away
from the voluntary associations that had previously overseen it. By 1850,
the public schools were teaching 82 percent of New York City pupils; just
two decades earlier, nearly two-thirds of the students had been taught in
private institutions. By the eve of the Civil War, when from Massachusetts
to Alabama more than half of white children attended school, public schools
were quickly growing in number and offering more days of instruction each
year.
By the eve of the Civil War, local governments had thus embraced public
enterprise to a very significant extent. This fact clashes with any narrative of
the development of antebellum political economy that attempts to use the
history of national and state-level internal improvements policy to suggest
that by the late 1840s state enterprise was dead as an idea and a practice. It
was not. Nor was it the case, despite some significant innovations in court-made
property and contract law, that the early American state became
progressively more devoted overall to promoting private enterprise. Local
governments’ large investments in modern police forces and large new
public school systems are among the more important pieces of evidence to
the contrary.
Such local activities serve to confirm many traditional accounts of the
early American state. But they were not the whole story. Contrary to what
many historians of this era have suggested, the various states were overshadowed
before the Civil War not only by local governments but also by
the national state.
III. TERRITORY
In his 1889 comparative legal treatise on The State, Woodrow Wilson
declared that “the great bulk of the business of government still rests with
the state authorities” (meaning the various states), implying that it had
always been so. For later observers, tracing American political development
from the nineteenth century through the World Wars, New Deal, and Great
Society, it was even easier to describe an earlier political order dominated by
state and local government, which gave way only in the twentieth century.
There was something to this view: the nineteenth century never saw the
emergence of the kind of national state that existed in the United States in
the late twentieth century – the kind that absorbs fully 20 percent of total
national income in peacetime. Still, the Wilsonian assumption ignores the
considerable evidence pointing to the great power and influence of the early
national state. Perhaps the most notable change in the United States during
this period, it is worth repeating, is the tripling in size of its territory, to a
land area of nearly three million square miles. This territory was gained by
the diplomatic, military, and legal activities of the national state; it was also
managed by the national state for many years thereafter. Even in the early
twenty-first century, nearly a third of the land area of the United States is
controlled directly by federal agencies. Traditional understandings of the
early American state assume, rather than establish, the insignificance of
the national government. They simply fail to recognize the importance of
territorial acquisition and management to the national state’s growth and
consolidation.
One basic fact about the early American state, often overlooked, is that
the economic footprint of the combined states was considerably smaller
than that of the national government, and also smaller than that of local government.
Even at the height of the canal era, the combined expenditures of all the
states amounted to only about two-thirds of federal outlays; more often,
they came to only one-third. Combined local government expenditures,
which are difficult to measure, appear to have been greater than those of
the states, but still slightly below U.S. outlays. In other words, not only in
the twentieth century but also in the nineteenth, the federal government
outspent its state and local counterparts.
Nearly all U.S. revenues during this era came from customs duties; in a
few years, land sales were also significant. Where did the money go? Well
over half of it went to the largest of all enterprises, public or private, in
early America: the U.S. postal system and the U.S. military. For nearly
every year of the first half of the nineteenth century, the military alone
absorbed close to three-quarters of federal spending. We must understand
that although the economic and military footprint of the early American
state was smaller than that of its European counterparts, it, like them, was
nonetheless at heart an organization that concentrated coercive power with
an eye to territorial domination.
In terms of land area, the infant United States was already an outsized
national state relative to those in Europe, even before the Louisiana Purchase
and the Mexican War. The territory over which this network operated grew
tremendously during these years in two giant leaps and several smaller steps.
The Louisiana Purchase of 1803, of course, was the first giant territorial
expansion. This event, like the War of 1812, must be understood in the
context of the giant conflict then taking place on the European continent
among national states then considerably wealthier and more powerful than
the United States. At war with most of his neighbors, Napoleon had an
immediate need for the $15 million that Jefferson happily paid for lands that
stretched from New Orleans up to and beyond the Yellowstone River in the
northwestern plains. The Napoleonic Wars were also the most important
force behind the War of 1812, in which the United States managed to
emerge with its sovereignty and territorial boundaries intact, despite British
troops’ burning of the new national capital at Washington.
In the years leading up to the War of 1812, events on the Atlantic that
were of relatively little concern to the European belligerents took on high
importance in the new American nation, which was sensitive about affronts
to its sovereignty – even if many of them derived from American merchants’
efforts to profit by supplying both sides of the war in Europe. From 1798 to
1800, the United States engaged in an undeclared naval war with France.
After a settlement was reached with France, offenses by the British took center
stage. From the American perspective, these offenses were considerable:
in the decade before 1812, Britain captured more than 900 American ships
and impressed as many as 10,000 U.S. citizens into the British navy. In
1807, in one of the incidents that most enraged the American public, the
British ship Leopard fired on the American ship Chesapeake, causing twenty-one
U.S. casualties, before British sailors boarded the American vessel to
haul off four alleged deserters. This famous violation of U.S. sovereignty
was met in Washington with a disastrous new trade policy: the Embargo
Act of 1807, which cut U.S. exports by 80 percent without doing much
to affect British behavior. Five years later, a Congress divided along party
lines declared war on Britain, which after years of fighting the giant French
armies now faced a return to the transatlantic logistical nightmare that it
had known a generation before. Even after the French collapse in early
1814, Britain chose not to pursue another extended conflict in North
America, in part because of successful American resistance. Two weeks before
the most celebrated American military victory of the conflict, Andrew
Jackson’s defeat of the British at New Orleans in January 1815, a treaty was
signed.
Naturally, the War of 1812 stressed the American state and changed its
relationship with the people living within its boundaries. During the war
itself, the national state struggled to manage the economic mobilization,
a task made especially difficult by the recent death of the first Bank of the
United States and the refusal of Federalist bankers to assist the war effort.
For the tens of thousands of men who moved into the armed forces, as well
as for many of their friends and relatives on the home front, the war provided
a new connection to the national state that was incarnated in symbols
– banners and patriotic songs. But for the development of the American
state, the immediate aftermath of the War of 1812 was at least as important
as the conflict itself. When the war was over, many U.S. military institutions
were expanded and thoroughly reorganized, taking a form that they would
hold through the end of the century. As Secretary of War from 1817 to 1825,
John C. Calhoun created a new staff system, demanding much higher levels
of organization and accountability. The army supply bureaus that would
later fuel American troops in the Mexican War and Civil War, including
the Quartermaster’s Department, Subsistence Department, and Ordnance
Department, were rooted most directly in the Calhoun-era reforms. Meanwhile,
the U.S. Military Academy at West Point, created in 1802 under
President Jefferson, was reformed after the War of 1812 under a new superintendent,
Captain Sylvanus Thayer. Now modeling itself after France’s
L’Ecole Polytechnique, West Point became the nation’s first engineering
school. As we have noted, several dozen of its graduates would be detailed
for work on civilian internal improvements projects under the General Survey
Act of 1824. By 1860, West Point graduates comprised more than
three-quarters of the army officer corps. The officer corps stood out in early
America as an unusually professionalized group with an unusually practical
higher education.
The U.S. Navy also saw expansion and reform. The navy’s equivalent to
West Point, the U.S. Naval Academy at Annapolis, was created in 1845.
Meanwhile, the navy was reorganized according to a bureau system that
resembled that of the army. No less than the army, the navy extended its
reach during this era. By the 1840s, it had separate squadrons operating in
the Mediterranean, the Pacific, the West Indies, the East Indies, the South
Atlantic, and off the coast of Africa. Still no match for the giant British
fleet, the U.S. Navy nevertheless came during these years to have a global
reach. One sign of its growing influence came in the early 1850s, when
Commodore Matthew C. Perry led a U.S. naval force that compelled Japan
to open its ports to the West.
Throughout this era, military institutions and installations were among
the most important manifestations of the American state. Largely through
its military, the national state served as an extraordinarily important actor
in the fields of high-technology manufacturing, exploration, and overseas
trade. Innovations in small-arms manufacture, including the development
of interchangeable parts, were pushed forward by the army’s two national
armories, at Harpers Ferry, Virginia, and Springfield, Massachusetts. Like
the army, the navy, which ran its own construction yards in ports up and
down the Atlantic seaboard, employed a mixed military economy that
combined contracting with large-scale state enterprise. One of the most
important state institutions of the late antebellum era was the army’s Corps
of Topographical Engineers, authorized by Congress in 1838 as a full-fledged
sister to the Corps of Engineers. Over the years that followed, the
Topographical Engineers became a leading source of territorial knowledge.
Serving the technical purposes of the state, this knowledge also became popular.
The reports of the 1842–45 journeys of the team of one Topographical
Engineer, John C. Frémont, became best sellers. After the Mexican War,
the Topographical Engineers literally created the boundaries of the United
States, with their surveys of the new borders with Mexico and Canada. During
the 1850s, the army engineers built thirty-four new roads in the far
West. They also conducted four major surveys for a new Pacific railroad.
The military was never far from state-supported scientific efforts during
this era; such efforts in turn accounted for a considerable proportion of all
scientific knowledge generated in the early United States. By one estimate,
close to a third of all scientists in antebellum America worked directly
for government. At the state level, support for science came largely in
the form of government-sponsored geological surveys, which helped chart
the riches of Pennsylvania coal and California gold. More important for
early American science was the national government, which funded leading
scientific enterprises, such as the U.S. Coast Survey and Naval Observatory.
The most important American global exploration effort of the era, the U.S.
Exploring Expedition (or “Ex Ex”) of 1838–42, used six ships and nearly $1
million in federal funds; among its accomplishments was the co-discovery,
with French and British ships, of the continent of Antarctica.
This ongoing institutional expansion and influence on the part of the military
echelon of the national state were not matched by the military activities
of the various states. Many states effectively reneged on the constitutional
and statutory military obligations established just after the Revolution. In
theory, the states should have maintained viable public militias through
conscription, upholding the non-regular reserve side of the much-hailed
American “dual military” tradition. In practice, state militias withered away
during the early nineteenth century. During the 1840s, seven states ended
compulsory service altogether. While voluntary militia companies sometimes
expanded to take their place, this was still an important development
away from a federal military system and toward a more fully nationalized
military.
One of the central tasks of the U.S. Army, of course, was to serve the
early American state’s management of Native Americans. It did so not
only through active military operations but also through routine administration.
Significantly, the Bureau of Indian Affairs (also called the Office of Indian
Affairs) was established in 1824 as a division of the War Department,
by order of Secretary of War Calhoun. This formalized the existing War
Department oversight of “Indian agents,” the U.S. officers authorized by
Congress to oversee trade and other aspects of U.S. policy toward Native
Americans in the early nineteenth century. Starting in 1796, Congress
demanded that the Indian trade be conducted through official government
“factories,” or trading posts, which effectively regulated an important part
of the American economy. The factory system ran until 1822, when the
private fur trade lobby convinced Congress to kill it. But well after this, the
War and Treasury Departments continued to oversee a different aspect of
economic exchange on the frontier: the payment of annuities, which were a
common feature of U.S. treaties with various tribes. By 1826, these annuities
amounted to $1 million a year, about 6 percent of all federal outlays. When
Congress streamlined the Indian service in 1834, army officers became
even more responsible for the distribution of annuities, to which was added
regulation of the liquor trade and other basic tasks of administration. Fifteen
years later, in 1849, the work of Indian affairs was moved out of the War
Department and into the new Interior Department. Only in the last decade
of this whole era, in other words, did the U. S. military lose direct oversight
of all aspects of Indian affairs.
Along with routine administration, of course, the military enforced the
Indian policies of the early American state with naked coercion. This was
certainly the case in the aftermath of the Indian Removal Act of 1830. Over
time, the United States became less willing to recognize groups of Indians
within its territory as independent sovereign states. Supporting the drive of
European-American settlers for more land, the early American state turned
increasingly to force to meet this end. None of this was new in 1830. For
instance, the 1795 Treaty of Greenville, in which the United States formally
acquired the southern two-thirds of Ohio in exchange for $20,000 cash and
a $9,500 annuity, followed a military victory by Revolutionary War general
Anthony Wayne. This victory reversed a crushing defeat suffered in 1791 by
a European-American force led by Arthur St. Clair, the territorial governor.
During the 1810s, two future U.S. Presidents, William Henry Harrison
and Andrew Jackson, won victories over Shawnee and Creek forces in the
Indiana and Mississippi territories.
Despite all this early military activity, however, there was still an important
shift in state policy between the Revolution and the Civil War away
from treating Native Americans as sovereign or even semi-sovereign entities.
In the cases of Johnson v. M’Intosh (1823) and Cherokee Nation v. Georgia
(1831), the Supreme Court held that Indian tribes lacked full sovereignty.
In Worcester v. Georgia (1832), the Supreme Court appeared partially to
reconsider. But the state of Georgia and President Jackson, who wanted
the vast Cherokee lands for white settlers, simply ignored the ruling. By
1840, some 60,000 members of the southeastern Indian tribes had been
forcibly resettled in the new Indian Territory (now Oklahoma). From an
earlier policy of treaty-making backed by military force, the American state
had moved toward one of direct coercion and control. The vast majority of
Native Americans, who were not U.S. citizens, were turned into stateless
peoples living under imperial rule.
While the Indian removals of the 1830s and the annexation of Texas and
Mexican War of the following decade stand as powerful evidence of the
early American state’s appetite for territorial domination and expansion,
this hunger had limits. This was true especially when it came to dealing
with the European powers, with which the United States continued to
forge diplomatic rather than military solutions to potential territorial disputes.
Many military officers who served along frontier flashpoints, as well
as Congress and the State Department, were wary of violating the existing
international order of state sovereignty. It was through an 1819 treaty that
the United States took over Florida from Spain, and despite many calls
for U.S. control of Cuba, the island remained in Spanish hands until the
end of the century. The Monroe Doctrine of 1823 warned European powers
against additional territorial colonization in the Western hemisphere,
but the U.S. quietly acceded to British annexation of the Falkland Islands
in 1833. An equally important non-war occurred in the 1840s in the far
northwest, where President James Polk, among others, claimed to seek an
expanded U.S. territory that would reach above the 54th parallel. But in
1846, Congress agreed to a boundary along the 49th parallel, the line that
Britain had proposed more than two decades before. And while American
private citizens violated the sovereignty of foreign states by launching filibusters
in Central America and elsewhere, they failed to gain U.S. approval.
In each of these cases, it appears that many governmental institutions and
officers tended to restrain, rather than promote, the territorial expansion
through military action demanded by many settlers, newspaper editors, and
elected officials.
The one great territorial acquisition of the immediate antebellum era, of
course, did come from military conquest. By 1848, Tocqueville’s prediction
of Anglo-American continental hegemony, made only a decade before, had
been realized rather abruptly by the Treaty of Guadalupe Hidalgo, ending
the Mexican War. The vast preponderance of land in what would be the
continental United States was now under the direct and exclusive authority
of the national state. By 1850, the nation counted 1.2 billion acres of public
land. With the giant territorial leaps of 1803 and 1848, the management of
vast physical spaces became far more important for the early American state
than it had been in the day of President Washington. The state’s greatest
resource, territory, was also the state's greatest challenge.
Throughout the period, the national state used property law and land
policies, in addition to its postal and military institutions, as a way of
managing territory. These policies, which must be understood as among
the most important facets of state action in early America, altered the
nature of the physical spaces over which the state claimed hegemony. An
economical means of territorial consolidation, they suggested the potential
power and efficacy of a new, liberal form of statecraft. They also led to the
fracturing of the state itself, in a terrible civil war. All of this demonstrated
the relative importance of national state policy and administration.
Even before the Louisiana Purchase, the infant American state had struggled
with the problem of territorial management. After the Revolution,
many of the American states ceded to the Union their claims to lands on
their western frontiers. Cession of claims, it was hoped, would bolster the
legitimacy and fiscal health of the new national state while reducing interstate
conflict. This was a significant enhancement of national state power.
The first Congresses then passed critical legislation that would shape the
American landscape and the American polity for decades to come. The
Northwest Ordinance, enacted in 1787, created a standard mechanism – in
advance of the ratification of the Constitution – for the political consolidation
of western territories. This measure established a three-stage process
for the formation of new states, through which U.S.-appointed territorial
governors would serve until replaced by full-fledged state governments. The
basic blueprint for the expansion of American federalism, the Northwest
Ordinance applied to the territory that between 1803 and 1848 would enter
the Union as the states of Ohio, Indiana, Illinois, Michigan, and Wisconsin.
(The remainder of the original territory became part of Minnesota, which
achieved statehood in 1858.) While the actual paths taken by many of the
new territories to statehood departed somewhat from the original plan, in
every case the national state had tremendous influence over the early political
development of the West. Not especially wild, the West was organized
from the beginning by law, from Congressional statutes to the workings of
local justices of the peace and county courts, which spread the common law
and other old English institutions across the American continent.
No less important than the Northwest Ordinance was the Land Ordinance
of 1785, with which the Confederation Congress established procedures
for the transformation of territory into land through a national
rectilinear surveying system. While it is possible to overstate the extent to
which the early American state consolidated its rule by thus enhancing the
legibility of the landscape, there can be no doubt that this was a field in
which the national state exerted powerful influences over the U.S. spatial
and economic order. Under the 1785 law, the basic unit became the township,
a square six miles long and six miles wide, which created a total of
thirty-six “sections” of one square mile (640 acres) each. Four sections per
township were reserved for the use of the United States, and one to provide
moneys for public schools. Over time, U.S. land policy was modified in a
way that tended to promote faster settlement. At first, the United States
sold only whole sections, but the minimum dropped steadily, until in 1832
it was possible to buy as little as a sixteenth of a section (40 acres). Across
much of the Midwest, the landscape had been transformed by a proliferation
of square-shaped family farms of 80 or 160 acres, as well as much
larger estates. In 1820, the minimum per-acre price, which would become
a sort of national institution in itself, was set at $1.25, down from the
$2.00 level established in 1790. In 1854, a longstanding Jacksonian land
policy initiative was instituted by Congress with a Graduation Act, which
allowed reduction of price on unsold public lands to as little as $0.125, or
one-tenth the normal minimum. Thus well before the Homestead Act and
Morrill Act were passed by the Republican-dominated Congress during the
Civil War, national state policy favored both rapid settlement and the use
of public lands to fund education.
The massive project of converting territory into land was managed in
large part by one of the most important of early American state institutions,
the General Land Office. Established in 1812 under the Treasury
Department, the Land Office was faced immediately with a major jump in
land sales, promoted in part by the acquisition of new lands formerly held
by Native Americans, by treaty and by force, during the War of 1812. By
1818, the Land Office’s Washington headquarters employed twenty-three
clerks, one of the largest clerical forces of the day. Overseeing a minor mountain
of paperwork, Land Commissioner Josiah Meigs found himself signing
his name on roughly 10,000 documents a month. Two decades later, in
1837, there were sixty-two district land offices across the country, along
with seven surveying districts. By then, the Land Office’s surveyors ranked
among the leading government contractors of the day; its district registers
and receivers, who earned commissions on land sales, were – no less than
territorial judges and justices of the peace – some of the most powerful men
in the territories. In 1835–36, one of the great land booms of the century,
the national state was selling off between 1 million and 2 million acres a
month. Along with the postal and military departments, the Land Office
was another national state institution conducting economic enterprise on a
scale far larger than any private sector institution.
To some degree, certainly, the land business may be understood as a
kind of negative state enterprise, in which immense national resources were
quickly privatized. In the half-century from 1787 to 1837 alone, the United
States sold 75 million acres. But the notion of privatization takes account of
only one side of early American statecraft in this field. As early as the 1790s,
Washington and Jefferson understood that, by promoting settlement on its
frontiers, the American state might achieve a more thorough consolidation
of territory than it could ever hope for through direct military action and at
far less expense. After the Louisiana Purchase, the paramilitary dimension of
the state’s pro-settler land policy became even more important. Occasionally
this dimension became explicit, as in the so-called Armed Occupation Act
of 1842, which granted 160 acres to any civilian who agreed to settle and
fight for five years in Florida, where the Seminoles were continuing to mount
the most successful military resistance to Jackson’s removal policy.
The military dimension of early land policy was also evident in the association
during this era between military service and government land grants.
During the Revolutionary War, several states, as well as the federal government,
promised land grants to soldiers. For veterans of that conflict, the
compensation in land was eventually complemented by cash pensions. In
the years following the Pension Act of 1818, pensions for Revolutionary
War veterans regularly accounted for more than 10 percent of all federal
outlays. Men who served in subsequent antebellum conflicts did not receive
federal cash pensions but received land alone. Soldiers in the War of 1812
received more than 29,000 warrants, involving 4.8 million acres. During
the Mexican War, in 1847, Congress passed the Ten Regiments Act, which
compensated just one year of military service with 160 acres of land located
anywhere in the public domain. Soon after the Mexican War, veterans of
the War of 1812 convinced Congress to award them more land as a sort
of quasi-pension. Together with the Ten Regiments Act, new Congressional
statutes in 1850, 1852, and 1855 generated a total of 552,511 land
warrants for veterans, involving 61.2 million acres. The explicitly paramilitary
dimension of this element of U.S. land policy and settlement can be
exaggerated, since many veterans never moved west but simply sold their
warrants to brokers; furthermore, plenty of land was available outside the
military warrant system. But these land grants can be seen as an important
early form of militarily inflected national social policy, as well as a major
part of antebellum land policy. Favored initially as a cheap enticement to
enlistment, the military warrants took on a new significance over time as
they served increasingly as a manifestation of the national state’s acceptance
of its special obligations to a certain class of citizens.
During the 1850s, even as Congress was granting unprecedented
amounts of land to military veterans, the national state’s territorial policies
became the center of a political crisis that led directly to the Civil War.
This well-known chapter in American history was written as a result of the
intersection of the fields of population, political economy, and territory that
have been discussed above.
While the numbers of Northerners dedicated to the abolition of slavery
were not nearly enough to win a national election or back a major war effort,
many more Northerners objected to the changes in U.S. territorial policy in
the 1850s, in which the American state openly endorsed slavery as a national
institution. During the Mexican War the U.S. House had twice passed the
so-called Wilmot Proviso, which, taking the Northwest Ordinance as a
model, would have prohibited slavery in the vast new territories then being
seized from Mexico. Blocked repeatedly in the Senate by John C. Calhoun –
once a leading nationalist state-builder following the War of 1812, now the
country's leading spokesman for states' rights – the Wilmot Proviso divided
the country and the national political parties sharply along regional lines.
Apparently a desert wasteland, with the exception of the Pacific Coast
and the California gold fields, the massive new territorial acquisition that
came from the Mexican War created great stresses on the American state.
In the famous Compromise of 1850, Congress agreed to admit California
as a new free state, but allowed the settlers of the large new Utah and New
Mexico territories to decide whether to permit slavery. For any Americans
familiar with maps of the continent, this evidently challenged a thirty-year-old
policy in which it appeared that slavery would be banned in western
territories located north of an imaginary line extending westward from
Missouri’s southern border. In 1854, the Kansas-Nebraska Act more directly
cancelled the territorial policy on slavery enacted in the Compromise of
1820, by allowing “popular sovereignty” to decide the issue in the Kansas
territory, which lay well above the 36°30′ parallel.
The new policy proved to be a disaster. Pro-slavery and anti-slavery
settlers flooded into Kansas, where they prepared rival constitutions and,
on more than one occasion, killed one another. In 1857, following the
Supreme Court’s Dred Scott decision, President Buchanan endorsed the pro-slavery
Lecompton constitution. At the same time, concerns about Mormon
theocracy in Utah territory led Buchanan to order a major U.S. army march
westward from Kansas. Military logistics were already the biggest item in
the federal budget. Buchanan’s Utah campaign only heightened the fiscal
strains associated with managing the new territories. When the economy
entered a severe recession at the end of 1857 and a new Utah Expedition
was mounted in 1858 to reinforce the first one, fiscal difficulties increased
markedly. The Utah dispute was settled peaceably, but the expeditions
drained the Treasury and bankrupted the nation’s leading military contractor.
After he conducted a vain and illegal effort to assist the contractor, the
Secretary of War was forced out. By the end of the 1850s, disputes over
U.S. territorial policy had not only reshaped party politics along sectional
lines, they had also undermined many of the early American state’s most
important institutions.
CONCLUSION
The Civil War tested and transformed the American state. But it did so to a
lesser extent than one might have expected, in part because of the antebellum
developments described here. In the fields of population, economy, and
territory, many of the same state institutions that had been so important
between the Revolution and the Civil War continued to be key nodes of
state action during the war years of 1861–1865 and beyond. While the
war gave rise to many changes in American government, those innovations
were shaped and in the long run constrained by the antebellum state order.
The secession of Southern states in 1860–61 challenged the territorial
integrity of the nation that had been expanding over the previous eighty
years. The North’s willingness to fight suggested that territorial integrity
was important to many Americans. It was no accident that the war started
not over a conflict between two of the various states, but rather with the
crisis at Fort Sumter, part of the continental network of military installations
maintained by the national state. To fight the war, the North drew on the
officer corps and national military bureaucracies that had been schooled
and refined during the antebellum expansion of continental empire. The
South, which was able to tap part of the same officer corps, created military
organizations virtually identical to those of the North. When the Union won
the war after four years, a single national state regained territorial mastery.
Postbellum territorial consolidation, which concentrated to a remarkable
degree not on the South but on the West, followed antebellum precedents.
In the field of political economy, the Civil War mobilization challenged
governments in both North and South. While the two sides’ economic
capacities were far apart, the differences in their mobilization styles
should not be exaggerated. Certain aspects of the Confederate mobilization,
including state enterprise in ordnance manufacture and regulation of prices
and labor markets, appear to resemble the kind of state-managed efforts
that would be seen in the World Wars of the twentieth century. But there
was also a remarkable lack of central coordination in the South, evident
in its chaotic fiscal policy and the resistance of individual states to central
authority. In the North, by contrast, the national state quickly took many
supply and fiscal concerns out of the hands of the various states. And while
the North had the luxury of a large, diverse economic base, filled with thousands
of potential private contractors, it – no less than the South – created
a mixed war economy. In several of the largest war industries, including
those that supplied small arms, ammunition, uniforms, and ships, state-owned
and operated facilities manufactured a quarter or more of the goods
consumed by the Union armies. The North’s supply system was overseen
largely by career military officers, rather than businessmen. It was financed
by a new national income tax and the unprecedented popular war bond
drive. Thus while the Northern state lacked many of the powerful wartime
administrative mechanisms that the United States would create during the
World Wars – boards to control prices, allocate raw materials, and renegotiate
contracts – it nevertheless played a substantial managerial role in the
war economy of 1861–65.
One of the most important effects of the Civil War was to remind Americans
of the potent authority of government, which from 1861 to 1865
demanded hundreds of thousands of soldiers and hundreds of millions of
dollars. Although only about 10 percent of the nearly three million Southern
and Northern men who served as soldiers were formally drafted under
new conscription laws, many more were pulled into the armies by bonuses
paid by national, state, and local governments. (In the North alone, bonuses
totaled roughly $500 million, compared with about $1 billion in soldiers’
regular pay.) During the war, many county governments, especially, found
themselves borrowing unprecedented sums to provide extra compensation
to soldiers and their families. In the decades that followed the war, the
national state led the way in providing yet another form of additional
compensation: military pensions. Anticipated by antebellum precedents,
the Civil War pension system reached an entirely new scale. By the early
1890s, the United States was paying pensions to nearly one million Union
veterans, absorbing more than 40 percent of the national state’s income. The
Pension Bureau inWashington, which employed more than 2,000 people,
then qualified, according to its chief, as “the largest executive bureau in the
world.”
Accompanying the wartime expansion of the state that came with the
mobilization of men and materiel was the rise of the kind of activist, pro-developmental
national state that some Whigs had dreamed of during the
antebellum period. During the war years, the U.S. Congress enacted a high
tariff, issued large land grants for Pacific railroads and state colleges, and
created the Department of Agriculture. Another important wartime innovation,
symbolically and substantively, was the greenback – a new national
currency that replaced the bewildering array of notes that had been issued
by banks across the country during the antebellum period. The new paper
money was circulated through a new national banking system, yet another
creation of the Republican-dominated Congress. While the national bank
network did not have the controlling authority that would be created a
half-century later in the Federal Reserve system, and while banks chartered
by the various states continued to be important parts of the American economy,
the war marked a distinct break away from the radically decentralized
Jacksonian financial system. The state’s wartime financial requirements,
met almost entirely at home rather than in Europe, also fueled the growth
of Wall Street, which became increasingly interested in the activities of the
Treasury.
While the Civil War partially transformed the American political economy,
it was in the field of population that it had – in the short and long run,
if not in the medium run – its most revolutionary effects. The Thirteenth,
Fourteenth, and Fifteenth Amendments to the Constitution banned slavery,
created a new category of national citizenship in which African Americans
were included, and appeared to proscribe racial discrimination at the ballot
box. Briefly, the United States during the 1860s and 1870s saw an
extraordinary political revolution occur, as African Americans became not
only voters but also important leaders at all levels of government across
the South. By the end of the century, however, African Americans would
lose much of what they had appeared to gain just after the Civil War. Due
in part to the counterrevolutionary activities of Southern whites, their loss
also came about as a result of Northerners’ shallow commitment to Reconstruction
– surely the consequence of the enduring institutionalized racism
that had prevailed across the nation for generations before the war, a racism
assiduously encouraged by the state at all levels.
In 1867, Illinois Congressman Lewis Ross harkened back to “the earlier
and better days of the country, when the Democratic party was in power,”
when “we had a Government resting so lightly on the shoulders of the
people that they hardly knew they were taxed.” For Ross and his party
during Reconstruction, and for others in subsequent years who wanted
to limit the powers of the national state, it was important to promote an
understanding of American political and legal history in which government
(especially central government) had always been puny and punchless. But
that understanding is simply incorrect. It owes as much to the fantasies of
anti-statists – including white supremacists in Ross’s day and champions
of “free enterprise” in the twentieth century – as it does to the historical
record.
Taxes were indeed relatively low in the early United States, but the
powers and achievements of the state were considerable. Slavery and white
privilege, while antedating the Revolution, were reproduced energetically
by new laws. Popular suspicion of concentrated governmental power may
have been widespread, as the success of the Jeffersonians and Jacksonians
suggested, but all levels of American government raised large sums for
public works. Many critical industries and services, including transport,
communications, education, scientific research, and security, were managed
on a large scale by public, as well as private, authorities. Far from anarchic,
the trans-Mississippi West, no less than the East, was explored, surveyed,
and maintained by governmental organizations and laws.
Even acknowledging all this evidence of a robust state in the early United
States, some may maintain that the state was still insignificant in relative
terms. A cursory examination suggests that, even in comparison to the most
powerful European states of the era, the state in the early United States was
not especially impotent or anomalous. In the realm of political economy,
much of the nationalization and heavy regulation undertaken by European
states that diverged from American practice began in the second half of the
nineteenth century, not in the first. Similarly, it was largely in the second
half of the century that the modern British and French empires took shape;
before 1850, the consolidation of the U.S. continental empire suggested that
American achievements in military conquest and territorial administration
were no less considerable than those of other leading powers, even if they
cost less. Finally, the early American state was evidently at least as energetic
as its European peers in measuring its population and discriminating legally
among different classes of persons.
When it comes to government, there was no original age of American
innocence. To the extent that the American state can be understood today as
exceptional relative to its peers around the world, it owes its distinctiveness
more to the developments that would come after 1865 than to its early
history.
2
legal education and legal thought,
1790–1920
hugh c. macgill and r. kent newmyer
The years from 1790 to 1920 saw the transformation of American society
from an agrarian republic of 4 million people huddled on the Atlantic
seaboard to a continental nation of some 105 million people, recognized as
the dominant financial and industrial power in the world. Legal education
(and legal culture generally) responded to and reflected the historical forces
behind this radical transformation. In 1790, aspiring lawyers learned law
and gained admission to practice by apprenticing themselves to practicing
lawyers. Law office law unavoidably tended to be local law. By 1920, 143
law schools (most affiliated with universities) dominated – indeed, all but
monopolized – legal education and were close to controlling entry into the
profession. Through their trade group, the Association of American Law
Schools, and with the support of the American Bar Association, they had
by the beginning of the 1920s created the institutional mechanisms for
defining, if not fully implementing, national standards for legal education.
In legal education as in many other areas of American society, institutionalization
and organization were the keys to power, and power increasingly
flowed from the top down.
The normative assumptions of this new educational regime emanated
from the reforms first introduced by Dean Christopher Columbus Langdell
at Harvard Law School in 1870. Langdell’s ideas were stoutly resisted, initially
even at Harvard. They were never implemented anywhere else in pure
form, and they were rooted more deeply in tradition than Langdell acknowledged.
Nevertheless, his institutional and pedagogic innovations became
the common denominator of modern legal education. Langdell’s success
owed much to the congruence of his ideas with the version of legal science
prevailing in the late nineteenth century. No less important, it responded to
the changing nature of legal practice in the new corporate age: a shift from
courtroom to board room, from litigating to counseling, from solo and small
partnership practice to large law firms. More generally, Langdell’s reforms
at Harvard were symbiotically connected to the demographic, intellectual,
political, and economic forces of modernization at work as the nineteenth
century ended. Our goal, hence, is to describe and analyze legal education
as it responded to (and influenced) these transformative changes.
I. THE COMMON LAW FOUNDATION: LEGAL
EDUCATION BY APPRENTICESHIP
No single factor had greater impact on American legal education than the
transplantation of the English common law to America. Sizeable portions
of English law had to be modified or jettisoned to fit American circumstances,
but what remained as bedrock was the adversary system of dispute
resolution. In this common law system, a lawyer was a litigator. It followed
that the primary objective of legal education – first in England and then in
America – was to teach lawyers the art of arguing cases in court. From the
outset, practicing law took precedence over theorizing about it.
What better way of learning the practical skills of lawyering than by
studying those who practiced them on a daily basis? Apprenticeship training
was essentially learning by doing and by observing, and in both the
burden rested mainly on the student. Even after law-office training was
supplemented by a few months in a proprietary or university law school, an
opportunity increasingly available by the middle decades of the nineteenth
century, legal education remained largely autodidactic.
Apprenticeship was the dominant form of legal education in British
North America from the outset, although a few sons of the well-to-do,
chiefly from the Southern colonies, attended one of the four English Inns of
Court. English legal education carried considerable cachet even though by
the eighteenth century, when American students began to appear in London,
the Inns had deteriorated into little more than exclusive eating clubs. They
left no mark on legal education in the United States, except to generate a
negative reaction to anything suggesting a national legal aristocracy. Even
in England, real instruction in law took place in the chambers of barristers
and solicitors.
The rules governing apprenticeship training in America, like those governing
admission to practice, were established by the profession itself – by
judges in conjunction with local associations of lawyers. In new states and
territories, where the profession itself was ill defined, the rules were fewer
and less likely to be enforced. In most states, students were required to
“read” law in the office of a local lawyer of good standing. Three years of
reading appears to have been the norm, though time spent at one of the
early law schools counted toward the requirement. Fees paid by apprentices,
set informally by the bar, generally ranged between $100 and $200, but
in practice the amount and means of payment were up to the lawyer. The
level of literacy expected of apprentices probably excluded more aspirants
than the schedule of fees, which was flexible and often laxly enforced.
Students were admitted to the bar after completing the required period
of reading and passing a perfunctory oral examination, generally administered
by a committee of lawyers appointed by the local court. Occasionally
an effort might be made to make the examination a real test, as when the
famed Virginia legal educator George Wythe opposed, unsuccessfully, the
admission of Patrick Henry (who became a leader of the Richmond bar).
Since apprentices were sons of people known in the community and known
to their mentors, the examining committee was unlikely to offend a colleague
by turning down his protégé. Few students who fulfilled the terms
of apprenticeship, had a nodding acquaintance with Blackstone’s Commentaries,
and were vouched for by their sponsors failed to pass. Admission to
appellate practice as a rule came automatically after a prescribed period of
practice in the trial courts.
Immediately prior to the Civil War even these minimal standards were
subject to dilution. In Lincoln’s Illinois, for example, the price of a license
for one lucky candidate was a dinner of oysters and fried pigs’ feet. As
Joseph Baldwin put it in The Flush Times of Alabama and Mississippi (1853),
“Practicing law, like shinplaster banking or a fight, was pretty much a free
thing. . . . ” The popularity of “Everyman His Own Lawyer” books during
this period makes the same point. Admission to practice was less a certification
of the applicant’s knowledge than an opportunity for him to learn
on the job.
Compared with legal education in eighteenth-century England or
twentieth-century United States, law-office education was strikingly egalitarian.
Even at its most democratic, however, the system was not entirely
open. To women and black Americans, it was not open at all, exclusions so
rooted in the local culture (like apprenticeship itself) that no formal rules
were required to enforce them. Though not based on class distinctions,
the system operated to favor the sons of well-connected families. Fees were
beyond the reach of most working-class young men; for those who could
afford them, it was advantageous to read with the best lawyers, in the best
offices, with the best libraries. Most of George Wythe’s students at William
and Mary, for example, were from Virginia’s ruling class. Their Northern
counterparts who could study at Harvard with Joseph Story and Simon
Greenleaf also had a leg up on their competition. Access to the profession,
and success within it, depended on being literate, articulate, and disciplined
– qualities difficult to develop for those on the margins of American
society. Still, judging by the large number of lawyers who achieved eminence
without benefit of social advantage, professional status had less to do
with pedigree than with success in the rough-and-tumble of circuit-riding
and courtroom competition.
Though comparatively open to achievement, apprenticeship was also
open to abuse. Often the most able jurists did not have the time to devote
to their apprentices: consider for example the complaint of one of James
Wilson’s students that “as an instructor he was almost useless to those who
were under his direction.”1 Many lawyers had neither the knowledge nor the
ability required to teach others. At the worst, they simply pocketed student
fees and exploited their apprentices as cheap labor for copying contracts,
filing writs, and preparing pleas. The learning-by-suffering approach was
justified on the grounds that students were actually mastering the rudiments
and realities of practice. In fact, even this modest goal was not always
reached; witness the confession of John Adams that, after completing his
apprenticeship, he had no idea how to file a motion in court.
The chief weakness of law-office education did not lie in the practical
matters of lawyering, however, but in its failure to teach law as a coherent
system – or, as contemporaries liked to say, as a science. James Kent’s
description of his apprenticeship in Poughkeepsie, New York, in the 1780s
identified the problem and the solution. Kent received no guidance from
Egbert Benson, attorney general of New York, to whom he had been apprenticed
by his father. Unlike his officemates, however, who spent much of their
time drinking, Kent plunged into Blackstone’s Commentaries on his own.
Mastery of Blackstone brought order out of the chaos of case law and, as he
later claimed, launched him on the road to success. Kent repaid the debt
by writing his own Commentaries on American Law, a work designed to do
for American lawyers in the nineteenth century what Blackstone had done
for him in the eighteenth.
For the great mass of American law students who lacked Kent’s discipline
and thirst for knowledge, the apprenticeship system did not deliver a comprehensive
legal education. Neither, however, did it exclude them from
practice. Indeed, apprenticeship education, like the common law itself,
fit American circumstances remarkably well. A system that recognized
no formal class distinctions and placed a premium on self-help resonated
with American egalitarianism. Even the local character of law-office training,
a serious weakness by the late nineteenth century, had its uses in the
Early Republic because it guaranteed that legal education would respond
to the diverse, and essentially local, needs of the new nation. What Daniel
Webster learned in the law office of Thomas W. Thompson in Salisbury,
New Hampshire, for example, prepared him to serve the needs of farmers
and merchants in the local market economy of the hinterland. His later
education, in the Boston office of Christopher Gore, with its well-stocked
library, was equally suited to practice in the state and federal courts of
that major commercial center. Gore’s students could also learn by watching
Boston’s leading lawyers in action, whether in the Supreme Judicial Court
of Massachusetts, the federal district court of Judge John Davis, or Justice
Joseph Story’s U.S. Circuit Court. A legal education for students in Richmond
in the 1790s similarly included the opportunity to observe appellate
lawyers like John Marshall and John Wickham argue cases before Judge
Edmund Pendleton and Chancellor George Wythe. The law they learned –
English common law and equity adjusted to plantation agriculture and
chattel slavery, operating in an international market – suited the needs of
the Old Dominion.
1 Quoted in Charles Warren, History of the American Bar (Cambridge, MA, 1912), 167.
Whether in Salisbury or Boston, New York or Poughkeepsie, Richmond,
Baltimore, or Philadelphia, apprenticeship training adapted itself to American
circumstances, even as those circumstances changed. By failing to
teach legal principles, the system at least avoided teaching the wrong ones.
Circumstance more than deliberate planning assured that American legal
education in its formative years, like American law itself, remained open-ended,
experimental, and practical.
II. THE AMERICAN TREATISE TRADITION
Apprenticeship education received a bracing infusion of vitality from the
spectacular growth of American legal literature. Through the War of 1812,
American law students educated themselves by reading mainly English treatises.
What they read varied from region to region and indeed from law office
to law office, but the one work on every list was Sir William Blackstone’s
four-volume Commentaries on the Laws of England. Published in 1764, the
work was quickly pirated in the American colonies. It went through many
American editions, beginning with that of St. George Tucker, published
in Richmond in 1803, which was tailored to American circumstances and
annotated with American cases. A staple of legal education until the 1870s,
Blackstone’s Commentaries did more to shape American legal education and
thought than any other single work.
Blackstone’s permeating influence was ironic and paradoxical. A Tory
jurist, he celebrated Parliamentary sovereignty at the very time Americans
were beginning to challenge it. His subject was English law as it stood at
mid-eighteenth century, before the modernizing and destabilizing effects
of Lord Mansfield’s new commercial doctrines had been felt. Even as a
statement of English law circa 1750 the Commentaries were not entirely
reliable. In any case, English law was not controlling in the courts of the
new republic.
Despite these limitations Blackstone remained the starting point of legal
education and legal thought in America from the Revolution to the Civil
War. Law teachers could select the portions of the four volumes that fit
their particular needs and ignore the rest, a case in point being Henry
Tucker’s Notes on Blackstone’s Commentaries (1826), prepared specifically for
the students at his law school in Winchester, Virginia. For book-starved
apprentices everywhere, the work was an all-purpose primer, serving as
dictionary, casebook, a history of the common law, and guide to professional
self-consciousness. Above all, the carefully organized and elegantly written
Commentaries imparted to students and established lawyers alike a vision of
law as a coherent body of rules and principles – what Samuel Sewall, advising
his student Joseph Story, called “the theory and General doctrines” of the
law. By providing a rational framework, Blackstone helped law students
bring “scientific” order out of case law and offered relief from the numbing
tasks of scrivening. With English law rendered by a Tory judge as their
guide, American students set out to chart the course of American legal
science.
To aid them in mapping the terrain, apprentices were advised to keep a
commonplace book – a homemade digest of alphabetically arranged legal
categories including relevant case citations, definitions, and other practical
information. Students often supplemented Blackstone by consulting such
works as Matthew Bacon’s A New Abridgement of the Laws (1736), which went
through several American editions before being replaced by Nathan Dane’s
nine-volume Abridgement of American Law (1826–29). Dane was to Bacon
what Kent was to Blackstone; both American transmutations appeared at
the end of the 1820s. Among other synthetic works consulted by American
students during the late eighteenth and early nineteenth centuries were
Thomas Wood’s Institutes of the Laws of England (1722), the forerunner to
Blackstone; Rutherford’s Institutes of Natural Law (1754–56); and John
Comyn’s Digest (1762–67). Until they were replaced by American treatises
in the 1820s and 1830s, continental works in English translation also
were frequently consulted for specific doctrines and for general ideas about
law. Among the most widely used, especially in regions where maritime
commerce made the law of nations relevant to practice, were works by Hugo
Grotius, Jean Jacques Burlamaqui, Samuel Pufendorf, and Emmerich de
Vattel. Under Joseph Story’s direction, Harvard built a great collection of
civil law treatises on the assumption that the common law could profit by
an infusion of rationality and morality from the civil law tradition. As it
turned out, the practical-minded law students at Harvard were much less
interested in comparative law than was their famous teacher.
In search of practical, workaday principles of law, students could choose
from a surprisingly wide range of specialized treatises – again English at
first, but with American works soon following. Although their reading
was apt to be limited to the books available in the office where they studied,
there were some standard subjects and accepted authorities. At the
end of the eighteenth century and in the first decades of the nineteenth,
serious students were advised to read Hargrave and Butler’s edition of the
venerable Coke upon Littleton, a seventeenth-century work so arcane that
it brought the most dedicated scholars to their knees. Fearne’s Essay on
Contingent Remainders and Executory Devises in its various editions was the
classic authority on wills and estates in both England and America. For
equity, students had to rely on English treatises until the publication in
the 1830s of Story’s commentaries on equity and equity jurisdiction. Given
that the formal writ system of pleading survived well into the nineteenth
century, practical guides to pleading and practice were essential. One of the
most widely used was Chitty’s three-volume The Practice of Law in All of
Its Departments, published in an American edition in 1836. Justice-of-the-
Peace manuals, on the English models set by John Dalton and Giles Jacob,
were standard fare in every part of the country.
Case law was central to legal education from the beginning. Prior to the
early 1800s, when printed reports of American court decisions first made
their appearance, students had to rely on English reports. As a “guide to
method and a collection of precedents,” Kent particularly recommended
those of Sir Edward Coke, Chief Justice Saunders (in the 1799 edition),
and Chief Justice Vaughn. For equity, Kent urged students to consult the
Vesey and Atkyns edition of the opinions of Lord Hardwicke. The library
of the Litchfield Law School included two dozen sets of English reporters.
Once they became available, American judicial decisions gradually displaced
English case law as sources of authority, but English decisions continued
to be studied and cited for the legal principles they contained until
late in the nineteenth century, by no less an authority than C. C. Langdell,
the founder of the case method. Attention to English and American reports
reminds us of the practical-minded, non-theoretical nature of American
legal thought and education during the formative period.
Law students were also expected to understand the ethical obligations of
the profession, a theme presented in Blackstone’s Commentaries and echoed in
countless law books and lawyers’ speeches during the course of the century.
What students made of this uplifting professional rhetoric is difficult to say,
but clearly the emphasis on the morality of law and the ethics of practice
was useful to a profession still in the process of defining and justifying itself.
As it turned out, the failure of the apprenticeship system to instill a sense
of professional identity was an impetus for the law school movement and
the rebirth of bar associations in the 1870s and 1880s.
Much more threatening to the apprenticeship system was the exponential
growth of printed American judicial decisions – “the true repositories of the
law,” as Story called them. Federal Supreme Court reports, available from the
beginning, were soon followed by those of the several federal circuit courts.
State reports, beginning with Kirby’s Connecticut reports in 1789, became
the norm by the first decade of the nineteenth century. By 1821, Story
counted more than 150 volumes of state and federal reports that lawyers
needed to consult – enough, he feared, to overwhelm the profession. Each
new state added to the problem, as did the growing complexity and quantity
of litigation in the wake of the commercial and corporate revolution that
began before the Civil War. In 1859, speaking at the dedication of the law
school at the first University of Chicago, David Dudley Field estimated that
American lawyers faced no less than two million common law “rules.”2
The struggle to organize this burgeoning body of case law helped shape
legal education. Before printed reports, the problem for students was the
inaccessibility of judicial decisions; as published reports proliferated, the
problem became one of extracting sound principles from them. Commonplacing,
a primitive approach to the problem, gave way to the use of
English treatises footnoted to American decisions, on the model of Tucker’s
Blackstone. These were gradually superseded by domestic treatises, Dane’s
Abridgment and Kent’s Commentaries being the most ambitious. Oliver
Wendell Holmes, Jr.’s famous twelfth edition of Kent, published in 1873,
contained an index of largely American cases that ran to 180 pages of small
print. Charles Warren believed that Angell on Watercourses (1824), with
“96 pages of text and 246 pages of cases,” may have been the first American
casebook.3 Joseph Story, who suggested the case emphasis to Angell,
also saw to it that Harvard maintained a complete run of all American
and English reports. Extracting principles from this ever-expanding body
of decisions, which was the function of treatise writers, also was the chief
objective of Langdell’s case method. Working in this mode reinforced the
belief that law was autonomous, with a life of its own beyond the efforts of
lawyers and judges to make sense of it.
As authoritative expositions of legal principles, treatises were the primary
means of organizing case law in the nineteenth century. The publishing
career of Justice Joseph Story, the most prolific treatise writer of the
century, is exemplary. Story’s A Selection of Pleadings in Civil Actions (1805)
appeared only one year after Massachusetts began to publish the decisions of
its highest court. By his death in 1845, Story had published commentaries
on all the chief branches of American law (except for admiralty), each of
them focused on principles. By bringing a measure of system and accessibility
to his topics, Story pursued the ever-receding goal of a nationally
2 David Dudley Field, “Magnitude and Importance of Legal Science,” reprinted in Steve
Sheppard, ed., The History of Legal Education in the United States: Commentaries and Primary
Sources (Pasadena, CA, 1999), 658.
3 Warren, History of the American Bar, 541.
uniform common law. Updated regularly in new editions, Story’s volumes
were standard reading for law students and practicing lawyers into the
twentieth century. Abraham Lincoln, himself a successful corporate lawyer,
said in 1858 that the most expeditious way into the profession “was to
read Blackstone’s Commentaries, Chitty’s Pleading, Greenleaf’s Evidence,
Story’s Equity and Story’s Equity Pleading, get a license and go to the
practice and still keep reading.”4
Lincoln’s comment highlights two major characteristics of apprenticeship
training: first, it was largely a process of self-education that continued
after admission to practice; and second, self-education consisted mainly in
reading legal treatises. The period from 1830 to 1860 in particular was
“one of great activity and of splendid accomplishment by the American law
writers.”5 Merely to list some of the most important of their works suggests
the variety of material available to law students and lawyers. Angell and
Ames’s The Law of Private Corporations (1832) was the first book on corporate
law. Story’s treatises – Bailments (1832), Agency (1839), Partnership
(1841), Bills of Exchange (1843), and Promissory Notes (1845) – made new
developments in commercial law available to students and lawyers all over
the country and remained authoritative for several generations. Greenleaf’s
Evidence (3 vols., 1842–53), recommended by Lincoln, had an equally long
life. Parsons’s highly regarded book on contracts, published in 1853, went
through nine editions and was followed by several treatises on commercial
paper. Hilliard’s Real Property (1838) quickly replaced previous books on
that subject. Angell on Carriers (1849) was followed by Pierce’s even more
specialized American Railway Law (1857). Treatises on telegraph, insurance,
copyright, trademark and patent law, and women’s property rights
literally traced the mid-nineteenth-century contours of American economic
modernization.
And so it went: new books on old subjects, new books on new subjects.
Thanks to the steam press, cheap paper, new marketing techniques, and the
establishment of subscription law libraries in cities, these books circulated
widely. New treatises gave legal apprenticeship a new lease on life. So did
university law lectureships and private and university-based law schools,
both conceived as supplements to apprenticeship training. The treatise
tradition, which did so much to shape law-office education, also greatly
influenced the substance and methods of instruction in early law schools.
4 Terrence C. Halliday, “Legal Education and the Rationalization of Law: A Tale of Two
Countries – The United States and Australia,” ABF Working Paper #8711. Presented at
the 10th World Congress of Sociology, Mexico City, 1982.
5 Charles Warren, History of the Harvard Law School (New York, 1908), I, 260.
III. AMERICAN LAW SCHOOLS BEFORE 1870
Langdell’s reforms at Harvard Law School in the 1870s are generally seen
as the beginning of modern American legal education, but Harvard under
Langdell was built on a foundation laid by Story. As Supreme Court justice
and chief judge on the New England circuit, with close personal connections
to the leading entrepreneurs of the region, Story was attuned to the
economic transformation of the age. As Dane Professor, he was in a position
to refashion legal education to fit the needs of the market revolution.
Dynamic entrepreneurs operating in the nascent national market needed
uniform commercial law if they could get it, consistency among state laws
if they could not. At the least, they needed to know the rules in each of the
states where they did business. The emergence in the nineteenth century
of a national market economy generated many of the same opportunities
and challenges presented by globalization in the twenty-first. The question
was whether lawyers trained haphazardly in local law offices could deliver.
Could they master the new areas of law that grew from technological and
economic change? And, even with the help of treatises, could they extract
reliable, uniform principles from the ever-growing body of decisional law?
Increasingly the answer was no, which explains the remarkable expansion
of free-standing and university-based law schools in the antebellum
period.
Public law schools connected with established colleges and universities –
ultimately the dominant form – traced their origins to university law lectureships.
The model was the Vinerian professorship at Oxford, of which
Blackstone was the most famous incumbent. The first law lectureship in
the United States was established at the College of William and Mary in
1779 by Governor Thomas Jefferson. Others followed at Brown (1790),
Pennsylvania (1790), King’s College (Columbia) (1794), Transylvania
University in Kentucky (1799), Yale (1801), Harvard (1815), Maryland
(1816), Virginia (1825), and New York University (1835).
These lectureships addressed the perceived failure of the apprenticeship
system to teach law as a system of interrelated principles. Their success
defies precise measurement. Aspiration and execution varied widely, and
they were all directed principally at college undergraduates. Judging by the
number of his students who later distinguished themselves, George Wythe
at William and Mary had considerable influence. On the other hand, James
Wilson’s lectures at Pennsylvania, James Kent’s at Columbia, and those
of Elizur Goodrich at Yale failed to catch on. Isaac Parker’s lectures as
Royall Professor at Harvard inspired little interest, but his experience led
him to champion the creation of a full-fledged law school there in 1817.
The efforts of David Hoffman, a prominent Baltimore lawyer, to do the
same at the University of Maryland were unsuccessful, partly because his
vision of a proper legal education was too grandiose and partly because
American law was changing more quickly than he could revise his lecture
notes. Nonetheless, his Course of Legal Study (1817) was the most influential
treatise written on the subject of legal education prior to the Civil War, and
it bore witness to the deficiencies of apprenticeship education. These early
lectureships pioneered the later development of public, university-based
law schools.
Private, proprietary law schools also flourished during the years before
the Civil War. The prototype of many that followed was the law school
founded in 1784 by Judge Tapping Reeve in Litchfield, Connecticut. Reeve,
a successful law-office teacher, was joined by a former student, James Gould,
who headed the school on Reeve’s death in 1823. In contrast to the haphazard
and isolated nature of most apprenticeship arrangements, Litchfield was
full-time learning and serious business. During their required fourteen
months in residence, students took notes on daily lectures organized on
Blackstonian lines. Directed treatise reading was supplemented by moot
courts and debating societies. Above all, Reeve and Gould taught legal
science. Gould believed that the scientific approach demanded that law,
especially the common law, be taught “not as a collection of insulated positive
rules, as from the exhibition of it, in most of our books . . . but as a system
of connected, rational principles. . . . ” At its peak in 1813, the school had
55 students in residence; by the time of its demise in 1833 it had graduated
more than 1,000 students, drawn from every state in the union, including
many who went on to eminence in law and politics, Aaron Burr and John
C. Calhoun among them.
Litchfield was the model for a dozen or more proprietary schools in seven
states, and there were other home-grown variations as well. In Virginia,
for example, there were several private law schools during the antebellum
period. Although none attained the longevity of Litchfield, they attracted
a considerable number of students. By 1850 there were more than twenty
such schools around the country. Even then, however, they were being
outdistanced by the larger and better financed university-based law schools.
The last proprietary school on the Litchfield model, in Richmond Hill,
North Carolina, closed in 1878.
The concept of a full-time law school affiliated with an established university
took on new life at Harvard in 1815, when Isaac Parker, Chief Justice
of the Supreme Judicial Court, was appointed the Royall Professor, to lecture
on law to Harvard undergraduates. The full-time law school began two
years later with the appointment of Asahel Stearns as resident instructor.
Stearns was simultaneously teacher, adviser, librarian, and administrator;
in addition to being overworked, he was plodding and narrow. Parker was
enthusiastic about the new school, but his superficial lectures failed to
attract students. Only in 1828, when Justice Joseph Story was appointed
Dane Professor, did Harvard Law School come into its own. Under the
leadership of Story, Nathan Dane, and Josiah Quincy, Jr., the newly invigorated
school set out to train lawyers who would facilitate the Industrial
Revolution then underway in New England. Story also hoped that Harvard
law students, trained in his own brand of constitutional nationalism, would
rescue the Republic from the leveling forces of Jacksonian democracy.
Several factors account for the success of the school, starting with Dane’s
generous endowment (from the proceeds of his nine-volume Abridgment of
American Law). The growing reputation of Harvard in general was advantageous
to its law school, as were the cordial relations between Story and
Quincy, president of Harvard. As Dane Professor, Justice Story attracted
able students from across the nation. A growing student body meant rising
income from fees. With fees came a library and a part-time librarian. Under
Story’s guidance, the law school began to acquire the materials necessary
for the scientific study of American law. A complete and up-to-date run
of federal and state reports and a comprehensive collection of American,
English, and continental treatises laid the foundation for what Harvard
advertised as the best law library in the world. Years later, Langdell would
celebrate the library as the laboratory for the study of law. Story built the
laboratory.
With the appointment of Simon Greenleaf as a full-time resident professor
in 1833, the school was up and running. Greenleaf handled the daily
administration of the school and much of the teaching. Story focused on
the scholarship he was required to produce under the terms of the Dane
endowment. In their many editions, his commentaries became standard
texts not only for students at Harvard, but for judges and lawyers across
the nation, and for the apprentices who studied with them.
Measured by the demand for Story’s commentaries in all parts of the
country and by the nature of the student body, Harvard Law School was a
national law school – the first in the nation. Other antebellum law schools,
independent or college based, responded more to the perceived needs of their
respective locales. Some, including the Cincinnati Law School, founded in
1833 by Timothy Walker, one of Story’s students, were modeled directly
on Harvard, but soon assumed a regional tone. Yale, by contrast, followed a
different route (one that would be widely replicated elsewhere) by absorbing
Judge David Daggett’s New Haven law school, but no pretense was made
of integrating this new initiative with the college, and it would be many
decades before Yale had a full-time instructor in law on its payroll. At
Jefferson’s insistence, the law department at the newly founded University of
Virginia aimed to reach students from Southern states with law congenial to
Southern interests, including states’ rights constitutional theory. Whatever
the dictates of their markets, all of these new law schools, whether in rural
Connecticut, the new West, or the Old South, claimed to offer systematic
legal instruction that apprenticeship training could not deliver.
The impact of the early law schools on legal education is hard to assess
because formal instruction was auxiliary to law-office training and because
most schools retained many of the practices of the apprenticeship system.
And, as one might expect, their quality varied widely. Still, it is reasonable
to assume that schools offered students better access to the growing
body of treatises and case reports than most law offices could furnish. Students
learned from each other and sharpened their skills in the moot court
competitions that were common features of school life. The fortunate student
might encounter a gifted teacher such as Theodore Dwight. His historically
oriented lectures, directed treatise reading, and “oral colloquy,”
developed first at Hamilton College in the 1850s and refined at Columbia
over three decades, set the accepted standard for first-rate law school training
prior to the Langdellian revolution of the 1870s, and for some time
thereafter.
Dwight at Columbia, like Greenleaf at Harvard and St. George Tucker
at William and Mary, was a full-time professor. But the profession of law
teacher was several decades in the future. Instruction, even at many of the
law schools, generally was offered by judges and lawyers working on a part-time
basis. Not surprisingly, they continued to teach law-office law. The
substance of law school education prior to the 1870s was intensely practical.
Scant attention was paid to legislation, legal theory, comparative law,
legal history, or any other discipline related to law. Even dedicated scholar-teachers
like Story were more interested in the practical applications of
law than in investigating its nature and origins. Student opinion forced
the University of Virginia’s law department, initially committed to a relatively
broad-gauged course of study, to narrow its focus in order to maintain
enrollment. Story was forced to modify his ambitious Harvard curriculum
for the same reason. Not long after his death, his great collection of civil
law treatises was gathering dust on the shelves because students found it of
little practical use.
In law schools as in law offices legal education was chiefly concerned
with preparing students to litigate, and that meant coping with judicial
decisions. As early as 1821, Story and Dane had decried the unmanageable
bulk of case law. Increased population and the creation of new states and
territories helped turn the problem into a crisis that neither law offices nor
law schools as then constituted could manage.
IV. THE 1870S: A NEW ORDER STIRS
The appointment of Langdell at Harvard in 1870, the turning point in
American legal education, was an incident in the emergence of the modern
research university. The academy, however, was hardly the only segment of
society to be affected by the broad changes that swept through America in
the decades following the Civil War. The reunification of the nation was confirmed
by the end of Reconstruction in 1877. The Centennial Exposition of
1876 dramatized the national reach of market economics, bringing the reality
of the Industrial Revolution – mass production and consumer culture –
to millions for the first time. America celebrated free labor and individualism,
but the reality beneath the rhetoric was order at the top imposed
on chaos below. Business organizations of increasing scale were among
the principal engines of change. The nature and structure of law practice
evolved, especially in cities, in response to the changing needs of these lucrative
clients. The subordination of courtroom advocacy to the counseling of
corporations accelerated, as it became more important to avoid litigation
than to win it. Corporate practice called increasingly for legal specialists
and larger firms.
Bar associations, which had yielded in the 1830s to Jacksonian egalitarianism,
began to re-emerge. The Association of the Bar of the City of
New York was formed in 1870 in response to scandalous conduct among
lawyers during Boss Tweed's reign and the Erie Railroad Wars. In 1872 the
Chicago Bar Association was established in an effort to control the unlicensed
practice of law. By 1878 there were local or state bar associations in
twelve states. In that year, at the prompting of the American Social Science
Association, a small group of prominent lawyers convened in Saratoga
Springs to form the American Bar Association (ABA). The ABA would
follow the lead of the American Medical Association, founded in 1847 (but
attaining effective power only at the end of the century), in attempting to
define the profession, requirements for entry, and standards for professional
work.
Comparison between the lofty stature ascribed to the legal profession
by Tocqueville and the low estate to which it had fallen furnished the
more prominent members of the bar with an additional impetus to action.
If membership in the profession was open to people with no more (and
often less) than a secondary general education, who had completed no prescribed
course of professional training, and who had met risible licensing
requirements, then professional status itself was fairly open to question.
Unsurprisingly, one of the first subgroups formed within the ABA was the
Committee on Legal Education and Admissions to the Bar.
The significance of Christopher Columbus Langdell’s work at Harvard in
the 1870s is best understood in this context. In 1869 Charles W. Eliot, an
analytic chemist from MIT, was appointed president of Harvard. Touring
Europe earlier in the 1860s, Eliot had been impressed by the scientific rigor
of continental universities. To make Harvard their peer, he would “turn the
whole University like a flapjack,”6 and he began with the medical school and
the law school. To Eliot, the education offered at both schools was so weak
that to call either profession “learned” bordered on sarcasm. He brought
both confidence and determination to the task of reform. When the head
of the medical school stated that he could see no reason for change, Eliot
replied, “I can give you one very good reason: You have a new president.”
The law school Eliot inherited, in common with the thirty others in
operation at the time, was intended to supplement apprenticeship, not to
replace it. It had no standards for admission or, other than a period in
residence, for graduation. The library was described as “an open quarry
whence any visitor might purloin any volume he chose – provided he could
find it.”7 Its degree was acknowledged to be largely honorary.
To dispel the torpor, Eliot appointed Langdell first to the Dane professorship
and then to the newly created position of dean. Langdell, an 1854
graduate of the law school, had twelve years’ experience in appellate practice
in Manhattan, which convinced him that legal reform was urgently
needed and that it should begin with legal education. Eliot’s offer gave him
a chance to implement his ideas.
Langdell saw law as a science whose principles had developed over centuries
through judicial decisions. A properly scientific legal education would
study those principles through the decisions in which they had evolved. The
scholar’s attention must be focused on the best decisions of the best judges,
for “the vast majority are useless, and worse than useless, for any purpose
of systematic study.” An amateur botanist, Langdell added a taxonomical
dimension: If the doctrines of the common law “could be so classified and
arranged that each should be found in its proper place, and nowhere else,
they would cease to be formidable from their number.”8
Because it was a science, all of its ultimate sources contained in printed
books, law was a fit subject for study in a modern university, especially one
designed by Charles W. Eliot. Indeed, because it was a science, it could
only be mastered by study in a university, under the tutelage of instructors
who had studied those sources systematically, working in a library “that is
6 Dr. Oliver Wendell Holmes, Sr., quoted in Warren, History of the Harvard Law School, I,
357.
7 Samuel L. Batchelder, “Christopher C. Langdell,” Green Bag, 18 (1906), 437.
8 C. C. Langdell, A Selection of Cases on the Law of Contracts (Boston, 1870), viii.
all to us that the laboratories of the university are to the chemists and the
physicists, all that the museum of natural history is to the zoologists, all
that the botanical garden is to the botanists.”
To put Langdell’s premises about law and its study into practice, the
Harvard Law School had to be reformed institutionally and intellectually.
Langdell inaugurated a structured and sequenced curriculum with regular
graded examinations, offered over two years of lengthened terms that would
increase to three years by 1878. Treatises would be replaced by books of
selected cases (to alleviate pressure on the library), and lectures by the classroom
give-and-take that became “the Socratic method.” Apprenticeship, fit
only for vocational training in a “handicraft,” had no place at all.
The law to be mastered was common law, judge-made law, and above
all private law – contracts, torts, property. The principles to be found in
appellate cases were general, not specific to any state or nation. To Langdell,
whose generation was the last to study law before the Civil War, the primacy
of the common law was unquestionable. Statutes, unprincipled distortions
of the common law, had no place in scientific legal study. Since law was
entirely contained in the law reports, it was to be studied as an autonomous
discipline, unfolding according to the internal logic of its own principles,
largely unaffected by the social sciences, unrelated to social policy, unconcerned
with social justice. “Soft” subjects such as jurisprudence (law as it
might be, not law as it was) that were impossible to study scientifically
were beyond the pale. Because close study of cases, many old and English,
might tax students with no grounding in the humanities, the prior preparation
of law students assumed a new importance. Initially Langdell raised
the general education standard for law school admission to roughly the
level required for Harvard undergraduates. By the turn of the century, the
standard would become a bachelor’s degree, and gradual adoption of that
standard by leading law schools in the early part of the twentieth century
would confirm the study of law as a graduate program.
Adoption of the case method appeared to require the abandonment of
established modes of instruction and the acceptance of a new conception
of law. In fact, the new method drew heavily on antebellum legal culture:
the assumption that the law was found in judicial decisions, and that legal
science consisted in ordering cases under appropriate general principles and
relating those principles to one another in systematic fashion. This taxonomic
approach could be found in Story’s treatises or Dwight’s lectures at
Columbia. The resulting edifice led logically, if not inevitably, to deductive
reasoning starting with principles, some as broad as the infinitely disputable
concept of justice itself. Langdell stood the old conceptual order on its head,
however, by reasoning inductively from the particulars of appellate decisions
to general principles – or at least by training students to do so.
To determine which opinions were worth studying, he had to select
those – assertedly few in number – that yielded a “true rule.” A student
who had grasped the applicable principle and could reason it out could say
with assurance how a court should resolve any disputed question of common
law. The essential element of legal education was the process of teasing the
principles from the cases assigned. The instructors, however, had first to
discriminate the signals from the static, sorting through the “involved and
bulky mass” of case reports to select those whose exegesis would yield the
principle, or demonstrate the lines of its growth. To develop a criterion for
picking and choosing, they had first to have identified the principle.
In 1871 a student could no more make sense of the mass of reported
cases without guidance of some kind than anyone in a later century could
make sense of the Internet without a search engine. Langdell's selection of cases
was that engine. It determined the scope and result of student labor as
effectively as though Langdell had taken the modest additional trouble
required to produce a lecture or a treatise rather than a collection of cases
without headnotes. To have done so, though, would have deprived students
of the opportunity to grapple directly with the opinions, the basic material
of law, and to master principles on their own, rather than take them at
second hand.
The intellectual challenge presented to students was therefore something
of a simulation, neither as empirical nor as scientific as Eliot and Langdell
liked to think it. The fundamental premise, that the common law was
built from a relatively small number of basic principles whose mastery was
the attainable key to professional competence, may have proceeded from
Langdell’s undergraduate exposure to the natural sciences, from the crisis
he encountered as a New York lawyer when the common law forms of
action gave way to the Field Code, or from the constrained inductivism of
natural theology and popular science prevalent when Langdell himself was
a student. The latter resemblance may have been in the back of Holmes’s
mind when he characterized Langdell as “perhaps the greatest living legal
theologian,” who was “less concerned with his postulates than to show that
the conclusions from them hang together.”9
The logic of the case method of instruction demanded a different kind of
instructor. The judge or lawyer educated under the old methods, no matter
how eminent, whether full- or part-time, was not fitted to the “scientific”
task of preparing a casebook on a given subject or to leading students
through cases to the principles they contained. If law was a science to be
studied in the decisions of courts, then experience in practice or on the
9 Book Notice, American Law Review 14 (1880), 233, 234 (reviewing C. Langdell, A Selection
of Cases on the Law of Contracts, 2d ed., 1879).
bench was far less useful than experience in the kind of study in which students
were being trained. Langdell needed teachers trained in his method,
unspoiled by practice – scientists, not lawyers.
In 1873, with the appointment of James Barr Ames, a recent graduate
with negligible professional experience, Langdell had his first scientist. The
academic branch of the legal profession dates from that appointment. As
Eliot observed, it was “an absolutely new departure . . . one of the most far-reaching
changes in the organization of the profession that has ever been
made. . . . ” Ames, and Langdell himself, spent much of the 1870s preparing
the casebooks needed to fill the added hours of instruction. Ames, warmer
and more engaging than Langdell, also became the most effective evangelist
for the Harvard model. In 1895 he would succeed Langdell as dean.
Eliot’s resolute support of Langdell notwithstanding, he was aware of the
hostility of the bar to Langdell’s regime and of the competition from Boston
University. He saw to it that the next several appointments went to established
figures in the profession, preferably men with an intellectual bent.
The Ames experiment was not repeated until 1883, with the appointment
of William A. Keener to succeed Holmes. It had taken more than a decade
before Langdell had a majority of like-minded colleagues.
Only at Harvard did the case method exist in its pure form, and then
only before 1886. For in 1886, Harvard introduced elective courses into the
curriculum. The principles of the common law proved to be more numerous
than Langdell had anticipated or could cover in a three-year curriculum.
Since it could no longer be asserted that mastery of the curriculum constituted
mastery of the law itself, the case method of study now required
a different rationale. The method, with its intellectual discipline, came to
be justified – less controversially – as the best way to train the legal mind.
Substance had given way to process. “The young practitioner is . . . equipped
with a ‘trained mind,’ as with a trusty axe, and commissioned to spend the
rest of his life chopping his way through the tangle.”10
Even the skeptical Holmes had acknowledged, during his brief stint
on the faculty, that the method produced better students. Ames contrasted
the “virility” of case method study with the passive role of students under the
lecture method. A whiff of social Darwinism spiced the enterprise. The
ablest and most ambitious students thrived. Survival, and the degree, became
a badge of honor. Students and graduates shared a sense of participating
in something wholly new and wholly superior. Ames observed that law
students, objects of undergraduate scorn in 1870, were much admired by
the end of Langdell’s tenure. An alumni association was formed in 1886 to
promote the school and to spread the word within the bar that “scientific”
10 Alfred Z. Reed, Training for the Public Profession of the Law (New York, 1921), 380.
study under full-time academics was also intensely practical. The students
who, in 1887, founded the Harvard Law Review (which quickly became one
of the most distinctively Darwinian features of legal education) were moved
in part by the desire to create a forum for their faculty’s scholarship and a
pulpit for propagating the Harvard gospel. In this period Harvard’s enrollment,
which had dropped sharply when Langdell’s reforms were introduced,
began to recover and to climb, a sign that the severe regimen was reaching
a market. As established lawyers gained positive first-hand exposure to the
graduates trained on the new model, some of the bar’s initial hostility to
Langdell’s reforms abated. Eliot’s gamble was paying off.
Langdell’s new orthodoxy was asserted at Harvard with vigor, not to
say rigidity, even as its rationale was changing. In the 1890s, when Eliot
forced the appointment of an international lawyer on the school, the faculty
retaliated by denying degree credit for successful completion of the
course. In 1902, William Rainey Harper, president of the new University
of Chicago, requested Harvard’s help in establishing a law school. The initial
response was positive: Joseph Beale would be given leave from Harvard
to become Chicago’s Langdell. But when it was learned that Harper also
planned to appoint Ernst Freund to the law faculty, the atmosphere cooled.
Freund had practiced law in New York, but he held continental degrees
in political science, and he expected Chicago’s new school to offer courses
such as criminology, administrative law, and political theory. Harper was
informed that the Harvard faculty was unanimously opposed to the teaching
of anything but “pure law.” Harvard, wrote Beale, turned out “thoroughly
trained men, fit at once to enter upon the practice of a learned and strenuous
profession.”
“Learned,” to be sure; even “trained”; but “fit” and “strenuous” as well?
Purged and purified by the ritual of case study, lean and stripped for the
race of life? It is as though Beale saw the muscular Christianity of an earlier
day revived in the person of the new-model lawyer, trained in “pure law”
and ready to do battle with the complexities of the modern business world.
Harvard’s sense of mission had a quasi-religious pitch. Indeed, the young
lawyer coming out of Harvard found a fit in the new-model law firm as it
developed in response to the needs of corporate clients. An emerging elite
of the bar was forging a link with the emerging elite of the academy; “the
collective ego of the Harvard Law School fed the collective ego of the bar.”
By the early 1890s Harvard graduates were in demand as missionaries to
other law schools, frequently at the behest of university presidents anxious
to speed their own institutions along the scientific path – new Eliots in
search of their own Langdells. Iowa adopted the case method in 1889;
Wigmore and Nathan Abbott took it to Northwestern in 1892; Abbott
carried the torch on to Stanford.
The tectonic shift occurred in 1891, when Seth Low, recently appointed
president of Columbia, forced a reorganization of the law school curriculum,
and William Keener, recruited the previous year from Harvard, became
dean. Theodore Dwight, the Columbia Law School personified, had long
resisted the case method, one of his reasons being the intellectual demands
it placed on students. The case method might be all very well for the
brightest and most highly motivated, but what of the “middle sort” of
student Dwight had taught so successfully for so long? Such softness had no
place at Harvard, which shaped its admissions policy to fit its curriculum,
not the other way around. Neither did it at Keener’s Columbia after its
conversion. The conversion of Columbia, which alternated with Michigan
as the largest law school in the country, brought Langdell’s revolution to
the nation’s largest market for legal talent.
In Keener’s hands, however, the revolution had moderated considerably.
He did not condemn all lecturing as such; he pioneered the production
of the modern book of “Cases and Materials,” which Langdell would have
anathematized; and he acquiesced in Low’s insistence on including political
science courses in the curriculum. Even so, Columbia’s conversion met
strong resistance. Adherents to the “Dwight method” formed the New
York Law School, which immediately attracted an enormous enrollment.
Conversions still were the exception, not the rule, even among university
law schools. It would be 1912 before members of the Yale law faculty could
assign a casebook without the approval of their colleagues, and Virginia held
out until the 1920s.
In the early years of the new century, however, as old deans, old judges,
and old lawyers retired and died, their places all across the country were
filled by academic lawyers trained in the case method. They developed
what Ames had called “the vocation of the law professor” and advanced
from school to school along career paths that remain recognizable a century
later. Casebook publishers kept their backlists of treatises alive, covering
their bets, but the case method and the institutional structures associated
with it could no longer be dismissed by the legal profession as a local heresy
peculiar to Harvard.
Langdell had elaborated and implemented a view of law and legal education
that made it a respectable, even desirable, component of the science-based
model of American higher education. In a period when all of the social
sciences were struggling to define themselves as professional disciplines
and to succeed in the scramble for a place at the university table, Langdell
accomplished both objectives for law as an academic subject. Further, his
conception of the subject effectively defined legal knowledge, and his first
steps toward the creation of the law professoriate defined the class of those
who were licensed to contribute to it. With the case method, moreover, a
university law school could operate a graduate professional program at the
highest standard and, at the same time, maintain a student-faculty ratio that
would not be tolerated in most other disciplines. The fee-cost ratio made
the expenses of operating a modern law school – the purchase of books,
for example – an entirely tolerable burden. Eliot numbered the financial
success of the Harvard Law School among Langdell’s great achievements.
V. THE ACADEMY AND THE PROFESSION
Except for medicine, no other emerging academic discipline was intimately
tied to an established and powerful profession. Reform in legal education
might build momentum within the upper echelon of American universities,
but the extent to which the emerging standards of that echelon could
be extended downward depended in part on the organized bar. The postbellum
bar association movement, contemporaneous with the rise of law
schools, was shaped by a similar desire for the market leverage conferred by
professional identity and status.
In its first twenty years, the American Bar Association failed to reach a
consensus on the form or content of legal education. The Committee on
Legal Education and Admissions to the Bar presented to the ABA at its
1880 meeting an elaborate plan for legal education, prepared principally
by Carleton Hunt of Tulane. The plan called for the creation of a public
law school in each state, with a minimum of four “well-paid” full-time
instructors, written examinations, and an ambitious three-year curriculum
that owed more to Story, Hoffman, and Francis Lieber than to Langdell.
After the “well-paid” language was struck, the entire plan was defeated.
One eminent member noted that “if we go farther . . . we shall lose some
part of the good will of the legal community.”
Most members of that community, after all, had attained professional
success without the aid of a diploma. They were unlikely to see why a
degree should be required of their successors. Another delegate, mindful of
the problems that gave rise to the bar association movement but acquiescing
in the result, observed that “we must do something to exterminate the
‘rats.’”11
Chastened, the Committee waited a decade before submitting another
resolution. In the interim, the Association turned its attention to a matter
of more immediate concern: the wide variations in standards and procedures
for bar admission. The movement to replace ad hoc oral examinations
with uniform written tests administered by a permanent board of state bar
examiners, which began in New Hampshire in 1878, enjoyed the ABA’s
11 Record of the American Bar Association 2 (1880), 31, 41.
support. It may incidentally have increased the demand for a law school
education, but was intended to raise standards for entry into the profession.
The founders of the ABA appear to have grasped, if intuitively, the profound
changes at work in the profession. However, it was more difficult to
agree on the role of law schools. Sporadic discussions of the potential role of
law schools in raising the tone of the bar were punctuated by skeptical comments
about Langdell’s innovations. The mythic figure of Abraham Lincoln
loomed behind all discussion of the relative value of formal schooling. How
would the Lincolns of the future find their way to greatness if schooling
were required for all? It was conceded that schooling could be substituted
for some time in a law office and might be a satisfactory alternative, but
little real energy was expended on the problems of formal education.
A standardized model for education, if it could have been implemented
nationally, would have facilitated the admission in every state of lawyers
licensed in any one of them. In the long term, it would also have improved
the administration of justice. Judges and lawyers, all trained to a similar
standard instead of being educated poorly or not at all, would develop a
shared language and culture. The mischief of litigation, appeals, reversals
and, worst of all, the endless proliferation of decisions, many of them ill
considered and inconsistent, would be ameliorated. Law school education
might, therefore, have been a large part of the answer to some of the principal
concerns of the ABA at its founding. Langdell’s hard-nosed new model
for legal education might have furnished the standard. His insistence on
the worthlessness of most case law, and the importance of selecting only
decisions that reflected the basic principles of the common law, might
have been welcomed as a bulwark against the unending flood of decisions,
aggravated in the 1880s by the West Publishing Company’s promiscuous
National Reporter System. Up to the turn of the century, however, most
leaders of the bar had been apprentices. The minority who had been educated
in law schools looked to the revered Dwight at Columbia or the eminent
Cooley at Michigan, both of them practical men, not to the obscure and
idiosyncratic Langdell, aided by “scientists” like Ames. The gap between
the profession at large and the academy widened as the teachers grew in
number and in professional definition. It would take a generation before
market pressures would drive the bar and the new professoriate into each
other’s arms.
VI. ENTER THE AALS
In 1900, appalled by the “rats” in their own business and frustrated by the
low priority the ABA attached to the problems of legal education, a group of
35 schools organized the Association of American Law Schools (AALS). The
membership criteria of the AALS reflected “best practices” and the higher
hopes of the founding schools. Members required completion of high school
before admission, a hurdle that would be raised first to one, then to two
years of college. Members had to have an “adequate” faculty and a library of
at least 5,000 volumes. Part-time programs were strongly disfavored, and
as a general matter, the cost of compliance excluded the very schools the
Association sought to marginalize, if not to drive out of business altogether.
But that was just the problem: the marginal schools could not be eliminated,
for reasons that were uncomfortable to acknowledge. Law schools organized
on Langdell’s principles offered superior students a superior education that
was well adapted to the needs of big-city practice. There might be a close
fit between Harvard, for example, and the Cravath system, which became
as influential in the emergence of the corporate law firm as the case method
had become in legal education. But many could not afford that kind of
education, and a great deal of legal work did not require it.
Most schools paid lip service to the standards of the AALS, and some
extended themselves mightily to qualify for membership. A few, however,
made a virtue of condemning the elitism of the AALS. Suffolk Law School
in Boston, for example, secured a state charter despite the unanimous opposition
of the existing Massachusetts law schools. If that lesson in political
reality were not sufficiently sobering, Suffolk’s founding dean drove it home
with repeated blasts at the educational cartels of the rich, epitomized by
Harvard. Edward T. Lee, of the John Marshall Law School in Chicago, contended
with considerable persuasiveness that the requisites for teaching
law – books and teachers – were readily to be found outside of university
law schools, and even in night schools, often at higher levels of quality
than those obtained in some of the more rustic colleges. The movement to
require two years of college prior to law study would inevitably – and not
coincidentally – exclude many of the poor and recently arrived from the profession.
Opposing that change, Lee emphasized the religious, ethnic, and
national diversity of night school students, declaring that “each of them
from his legal training becomes a factor for law and order in his immediate
neighborhood. . . . If the evening law schools did nothing more than to help
leaven the undigested classes of our population, their right to existence,
encouragement, and respect would be vindicated.”12 The scientists and the
mandarins were unmoved.
These were the principal themes in the educational debate at the turn of
the century. Ultimately, every university-affiliated law school in the United
States came to adopt some form of Langdell’s model of legal education, but
12 Edward T. Lee, “The Evening Law School,” American Law School Review 4 (1915), 290,
293.
they traveled by routes that varied enormously according to local institutional
circumstances, politics, and professional culture. Although no single
account can stand for all, the evolution of the law schools at the University
of Wisconsin in Madison and Marquette University in Milwaukee, and
the strained relations between them, furnishes the best illustration of the
practical playing-out of the dynamics at work in legal education at the
beginning of the twentieth century.
Wisconsin: A Case Study
The University of Wisconsin's 1854 charter provided for a law school, but it
was only after the Civil War that one was established – not on the university's
campus, but near the state capitol so that students could use the state library
for free. The law school did not come into being at the initiative of the
Wisconsin bar. Rather, the university found it prudent to add a practical
course of study in order to deflect criticism of its undergraduate emphasis
on the humanities, which some legislators thought irrelevant to the needs
of the state. Care was taken to cultivate leading members of the bench and
bar, and it was Dean Bryant’s boast that Wisconsin offered the education
a student might receive in “the ideal law office.” In a professional world
where apprenticeship opportunities were inadequate to meet the demand
(and where the invention of the typewriter and the emergence of professional
secretaries reduced the value of apprentices), this was not a trivial claim.
In 1892, however, Wisconsin installed a new president, Charles Kendall
Adams. Adams had taught history at Michigan for more than twenty years
before succeeding Andrew Dickson White as president of Cornell. Like Eliot,
he had toured European universities in the 1860s and was similarly influenced
by the experience. At Michigan, he introduced research seminars,
which he called “historical laboratories,” where young historians could work
with original documents.
Aware of Langdell’s case method and predisposed in favor of the idea of the
library as laboratory, Adams set out to bring Wisconsin’s law school up to
date. It would have been impolitic to bring in a missionary from Harvard, so
Adams hired Charles N. Gregory, a well-connected local lawyer, as Associate
Dean. Gregory was charged with remaking the law school, distressing Dean
Bryant as little as possible in the process. Gregory spent part of his second
summer at the country house of James Barr Ames, now dean at Harvard,
where Ames and Keener, now dean at Columbia, drilled him in the case
method and the culture that came with it. Gregory did what he could to
convert Wisconsin, holding off the faculty's old guard and hiring new people
trained in the case method when he had the opportunity, before leaving in
1901 to become dean at Iowa. In 1902, Dean Bryant finally retired, and
his successor, Harry Richards, a recent Harvard graduate, completed the
make-over Gregory had begun. “The ideal law office” was heard of no more.
Something resembling Harvard emerged in Madison, and Wisconsin was a
founding member of the AALS, which Gregory had helped organize.
In the mid-1890s, while Gregory labored in Madison, a study group of
law students in Milwaukee preparing for the Wisconsin bar examination
evolved into the Milwaukee Law School. The school offered evening classes
taught by practicing lawyers, held in rented rooms – the classic form of
urban proprietary school. In 1908, this start-up venture was absorbed by
Marquette University, an urban Jesuit institution that hoped to confirm
its new status as a university by adding professional schools in law and
medicine.
The reaction in Madison to a competitor in the state’s largest city was not
graceful. In 1911 Dean Richards attempted to block Marquette’s application
for membership in the AALS. He did not do so openly, lest he appear
interested solely in stifling competition. In fact the two schools appealed
to rather different constituencies. Marquette’s urban, relatively poor, often
immigrant, and Catholic students were not Richards’s ideal law students,
nor were they his idea of suitable material for the bar. To his distress,
Marquette was elected to the AALS in 1912, with adept politicking, compliance
with many of the membership standards, and every indication of
a disarming desire to meet them all. Richards was skeptical, perhaps with
cause, but he lost the round.
The following year a bill was introduced in the Wisconsin legislature
that would have raised Wisconsin's educational requirement for admission
to the bar and would also have provided a paid secretary for the board
of bar examiners. These were reforms that Richards normally would have
supported. But the bill, thought to be backed by Marquette graduates,
also would have abolished the diploma privilege (admission to the bar on
graduation) that Wisconsin graduates had enjoyed since 1870. The privilege
was inconsistent with the standards advanced by both the ABA and the
AALS, and Wisconsin could hardly defend it on principle. To lose it to a
sneak attack from an upstart competitor, however, was a different matter.
After an exchange of blistering attacks between partisans of both schools,
the bill was defeated.
Another conflict erupted in 1914, when the Wisconsin Bar Association
became concerned over low ethical standards, “ambulance-chasing,” and
comparable delicts. Some of the offending conduct might have been merely
the work disdained by the established practitioner, for the benefit of an
equally disdained class of clients, but it was not so characterized. Richards
proposed to solve the problem by requiring all prospective lawyers to have
two years of college preparation, three years of law school, and a year of
apprenticeship. The schooling requirements happened to be those of his
own institution, but it was unlikely that Marquette could meet them –
nor would it necessarily have wished to. Had his proposal succeeded and
Marquette failed, Richards would not have been downcast.
Richards was upset over the large enrollment in night law schools
(Marquette would offer classes at night until 1924) of people with “foreign
names,” “shrewd young men, imperfectly educated . . . impressed with
the philosophy of getting on, but viewing the Code of Ethics with uncomprehending
eyes.” But the Wisconsin bar, its still largely rural membership
unaffected by immigration, did not adopt Richards’s proposal. Indeed, it did
not accept his premise that increased educational requirements would lead
to improved ethical standards. His effort to elevate educational standards,
to the disadvantage of Marquette and its ethnically diverse constituents,
was dismissed by many as retaliation for the diploma privilege fracas.
Richards’s fears and prejudices notwithstanding, this feud was not
Armageddon. It was, however, a museum-grade exhibit of the characteristics
and developmental stages of two different but representative types of
law school: Wisconsin exemplified the twentieth-century shift to technocratic
elitism, whereas Marquette represented the nineteenth-century ideal
of open, democratic opportunity. Wisconsin, so recently accepted into the
Establishment, was especially severe in seeking to impose the Establishment’s
standards on a deviant institution. Richards could not see Marquette
for what it was. A school open to the urban alien poor, it seemed to him
the very nursery of corruption. In reality, Marquette started on a different
track altogether, one not marked by Langdell. Had Marquette been thrust
into the outer darkness in 1912, its graduates would still have found their
way into the profession. If bringing Marquette into the AALS came at some
initial cost to the nominal standards of that association, it had the long-term
effect of improving, by those standards, the education its graduates
received.
VII. “STANDARDS” MEET THE MARKET
As the “better” schools ratcheted up their entrance requirements (completion
of high school, one year of college, two years), increased the length of
the course of study (three years was the norm by 1900), and raised their fees
along with their standards, a large percentage of a population increasingly
eager for a legal education was left behind.
Would-be law students excluded from the most prominent schools for
want of money, time, or intellectual ability constituted a ready market for
schools that were not exclusive at all. In the absence of restrictive licensing
standards for the bar or accreditation standards for law schools, that market
was sure to be met. The number of law schools doubled every twenty years
from 1850 to 1910; by 1900, the total had grown to 96. From the beginning
of the twentieth century to the end of World War I, law schools continued
to multiply rapidly as demand increased and as unprecedented waves of
immigration produced a more heterogeneous population than American
society and American educators knew what to do with.
The schools that sprang up to meet this market had little in common with
the leading schools, and did not care. The “better” schools, though, were
troubled indeed. Although enrollment at the established schools grew at a
healthy rate, their percentage of the total law student population actually
declined. Schools were indeed winning out over apprenticeship, but which
schools? Wisconsin's Richards, president of the AALS in 1915, reported that
the number of law schools had increased by 53 percent since the Association
was organized in 1900, and the number of students, excluding correspondence
schools, had risen by 70 percent. The number of students enrolled in
member schools had increased by 25 percent, to 8,652, but enrollment in
non-member schools had risen by 133 percent, to 13,233. Member schools
accounted for 55 percent of all students in 1900, but for only 39 percent in
1915. Law schools had won the battle with apprenticeship as the path to
practice, but the “wrong” schools were in the lead.
The universe of academic legal education was divided into a few broad
categories. The handful of “national” institutions at the top implemented
Harvard’s reforms of a generation earlier and adopted some form of the case
method. Sustained by their wealth and prestige, they were not dependent on
trade-group support. The next tier, close on their heels, superficially similar
but less secure, constituted the bulk of AALS membership. Below them,
more modest schools offered a sound legal education to more regional or
local markets, on thinner budgets, with uncertain library resources. These
schools relied heavily on part-time instruction, and many offered classes at
night for part-time law students who kept their day jobs. Many aspired to
membership in the AALS and worked so far as their resources permitted to
qualify for membership. Then there were the night schools, conscientious
in their efforts to train the newly arrived and less educated. And there
were proprietary and commercial night schools that simply crammed their
customers for the bar examination. The lower tiers would remain as long
as they could offer a shorter and cheaper path to practice, and a living for
those who ran them, regardless of the standards of their betters. The AALS,
acting alone, could not be an effective cartel.
Parallel developments in medical education and the medical profession
are instructive, and they had a powerful influence on the legal academy
and the bar. Through the offices of the Carnegie Foundation (whose president,
Henry S. Pritchett, had been, not coincidentally, head of the Bureau of
Standards), Abraham Flexner was commissioned to prepare a study of medical
education in the United States. Flexner was a respected scholar, but not
a doctor. His independence permitted him to use terms more blunt than the
AMA itself dared employ. He grouped medical schools into those that had
the resources and will to provide a scientifically sound – i.e., expensive –
medical education, those that would like to but lacked the means, and the
rest, denounced as frauds. He urged that the middle tier be helped to move
up and that the bottom tier be eliminated. Following publication of his
report in 1910, this is exactly what happened. Philanthropists and research
foundations followed Flexner’s criteria in their funding decisions, putting
the leaders still further ahead. State licensing standards were tightened,
and the applicant pool for the bottom tier dried up. By 1915 the number
of medical schools and the number of medical students had declined
sharply.
The Carnegie Foundation had already sponsored one study of legal education,
a relatively brief report on the case method prepared by Josef Redlich of
the University of Vienna after visits to ten prominent law schools. Redlich
blessed the method but noted its narrowness, giving some comfort to proponents
and detractors alike.13 In 1913, a year before publication of the
Redlich Report, the ABA Committee on Legal Education turned again to
Carnegie, hoping for a Flexner of its own. Alfred Z. Reed, not a lawyer,
was commissioned to study legal education in the United States. He visited
every law school in existence at the time, plowed through all available
statistical compilations, and analyzed structures, politics, and curricula.
Reed admired the achievement of Harvard and the other leading law
schools, and acknowledged that the case method succeeded splendidly in
the hands of instructors of great ability, teaching well-prepared and able
students in first-year courses. It was not clear to him, however, that the
method was equally effective in advanced courses or in institutions with thin
financial and intellectual resources, whose students might be of Dwight’s
“middle sort.” Instead of following Flexner in recommending a one-size-fits-all
approach, Reed concluded that the bar, in terms of work done and
clients served, was in fact segmented rather than unitary and that legal
education should be so as well. He recommended that the role of night
schools in preparing those who could not take their professional education
on a full-time basis be acknowledged and supported. Expected to condemn
the schools that produced the bulk of the dubious applicants to practice,
he instead declared that there was both room and need in the United States
for lawyers who would not be Tocqueville’s aristocrats and for the schools
13 Josef Redlich, The Common Law and the Case Method in American University Law Schools
(New York, 1914).
that trained them. The Reed Report remains the most comprehensive study
of legal education ever conducted. Reed’s research was prodigious and his
prose was marvelous, but his recommendations were not wanted and they
were rejected immediately.
VIII. THE BAR MILITANT
Leaders of the bar had become increasingly alarmed at the condition of
legal education as it related to professional standards. The magnates of a
profession that, at its top, was distinctly homogeneous shared a genuine
concern about standards for admission to practice and were dismayed at the
impact on the status of the profession of the recent infusion of large numbers
of imperfectly schooled recent immigrants. “Character” loomed large in
their discussions, and in the Canons of Ethics, published in 1908 to establish
the ABA’s position as arbiter of the profession. While xenophobia and, more
specifically, anti-Semitism were rarely overt in the public statements of the
leaders of the bar, neither were these elements perfectly concealed. Plainly
there was a question whether the character required for a grasp of American
legal institutions and the ethical dimension of the practice of law might
not be an Anglo-Saxon monopoly.
The bar was nearly as white and male at the turn of the century as it had
been before the Civil War. Several schools were established to overcome
the obstacles African Americans encountered in attempting to enter the
profession, the Howard University Law School being the best known and
most successful, but the path to practice remained a very stony one for black
Americans. Women fared hardly better. Michigan, and a few other schools
in the mid and far West, could boast of their openness to women, but it
was only in 1919 that a woman was hired as a full-time law teacher (at
Berkeley), and it took Harvard until 1949 to admit women at all. To these
familiar patterns of prejudice, nativism was now added.
In his 1916 presidential address to the ABA, Elihu Root stressed the role
of the lawyer as public servant, neatly subordinating the democratic notion
of open opportunity to the paramount consideration of fitness for practice.
Apprenticeship had given way to schooling. Therefore the standards of law
schools had to be raised in order to screen out the unfit: the “half-trained
practitioners [who] have had little or no opportunity to become imbued
with the true spirit of the profession,” which is not “the spirit of mere
controversy, of mere gain, of mere individual success.”14 Harlan F. Stone,
dean at Columbia and, in 1919, president of the AALS, agreed with Root.
John Henry Wigmore, clearly envious of the AMA’s success, made the same
14 Elihu Root, “The Training of Lawyers,” American Law School Review 4 (1916), 188, 189.
point with brutal directness. “The bar,” he declared, “is overcrowded with
incompetent, shiftless, ill-fitting lawyers who degrade the methods of the
law and cheapen the quality of service by unlimited competition.” To meet
this problem, “the number of lawyers should be reduced by half,” and he
concluded, stricter pre-law educational requirements would be a sensible
“method of elimination.”15
Finally the explicit connection was made between higher academic standards
and the exclusion of “undesirables” from the profession. Both legal
education and the practice of law at their least elevated levels remained
“pretty much a free thing,” as Joseph Baldwin had put it before the Civil
War. Unregulated markets for education and for lawyers perpetuated the
democratic openness of the Jacksonian era. That very openness, however,
was an obstacle to the attainment of the dignity sought by the bar and of the
stature sought by the academy. Wigmore’s candor identified competition as
an additional and crucial element: entry of the unwashed into the profession
was not merely damaging to its pretensions, but to its pocketbooks as well.
If the lower depths of the bar had taken over criminal defense, personal
injury, and divorce work – all beneath the dignity of the corporate lawyer –
what would prevent them from moving into real estate, wills, trusts, and
other respectable work as well? Once the bar grasped that threat, the need
for regulation became clear.
Increasing state licensing requirements to include two years of college
prior to admission to law school could cut out many “undesirables” and
socialize the remainder in ways that could repair the deficiencies of their
birth and upbringing. There was no risk of creating the “caste system in its
worst form” that the president of Yale feared,16 because a college education
was within the reach of anyone with character, grit, and stamina, regardless
of family wealth. Doubtless Root, Stone, William Howard Taft, and their
peers were sincere in this belief. In 1915, however, only 3.1 percent of the
college-aged population was enrolled in degree-granting institutions of any
kind. Something more tangible than grit was required, and most people
knew it.
Root and his associates, armed with a pre-publication copy of Reed’s
work, prepared their own report for presentation to the ABA in 1921. They
realized that, if the bar was to be mobilized, they would have to do the
mobilizing themselves. The Root Report sought at long last to commit
15 John H. Wigmore, “Should the Standard of Admission to the Bar Be Based on Two Years
or More of College-Grade Education? It Should,” American Law School Review 4 (1915),
30–31.
16 Arthur T. Hadley, “Is the B.A. Degree Essential for Professional Study?” American Law
School Review 1 (1906), 379, 380.
the organized bar unequivocally to the standards long urged by the AALS,
specifically to a three-year course of study and a minimum of two years of
college preparation. Academics showed up in force at the 1921 meeting of
the ABA to help secure the report’s adoption.
At this conjunction of the bar and the academy, long-ignored political
realities forced themselves on the attention of all. The leading law
schools, through the AALS, had set a standard for education, but they had
no means of enforcing it on non-members. They had a carrot but no stick.
The ABA was equally powerless to enforce educational standards against
non-conforming schools. The ABA represented a minuscule fraction of the
profession (1.3 percent in 1900, 3 percent in 1910, 12 percent in 1920)
and had no authority over the 623 state and local bar associations, some of
which had the effective connections with state governments that the ABA
lacked.
The ultimate form of professional recognition is the sanction of the state.
The American Medical Association, with the Flexner Report, had indeed
exterminated its “rats.” But it had done so because it stood at the apex of a
pyramid of state and county medical societies, whose local influence, aided
by Flexner’s findings, secured higher local licensing standards. Medical
schools that could not train their graduates to the requisite level lost their
market and folded.
The ABA, with many generals but few troops, did not have that local
political influence. At its 1921 meeting, therefore, the ABA leadership
decided to convene a conference the following year of representatives from
all state bar associations, in order to sell the Root Report to people who
might be able to put teeth into it. All the powers of the legal establishment
were brought to bear on this National Conference of Bar Associations, held
in Washington in 1922. The influence of William Howard Taft, the encouragement
of the dean of the Johns Hopkins Medical School, and the dread of
socialism were all deployed successfully on behalf of the Root Report. For
the moment the academy and the profession were united. From that moment
of unity much would flow, but not quickly. In 1920, no state conditioned
admission to the bar on a law degree, still less a college degree beforehand.
In 1920, there was no nationwide system for accreditation and licensing of
law schools. The contours of practice would continue to change, affected
by the Depression and the New Deal. The hard edge of Langdell’s model
for law schools would become progressively softer, as increasing numbers
of academics – some later to be called “realists” – looked to empirical work
and the social sciences for a thicker description of law and the lawyer’s social
role. The last vestige of “scientific” justification for the case method would
be discredited, but the method and its accompanying structures survived.
Lest they be thought too impractical, some schools would create clinical
programs, a distant echo of apprenticeship first sounded in 1892. But the
road that would lead to higher licensing standards for lawyers and a national
system of law school accreditation was clearly marked, and the elements that
would lead to legal education in its modern, apparently monolithic form
were all in place.
3
the legal profession: from the revolution to the civil war
alfred s. konefsky
The American legal profession matured and came to prominence during
the century prior to the Civil War. The profession had entered the Revolutionary
era in a somewhat ambiguous state, enjoying increasing social
power and political leadership, but subject to withering criticism and suspicion.
Its political influence was clear: twenty-five of the fifty-six signers
of the Declaration of Independence were trained in law; so were thirty-one
of the fifty-five members of the Constitutional Convention in Philadelphia;
so were ten of the First Congress’s twenty-five senators and seventeen of its
sixty-five representatives. And yet, just three weeks after the signing of the
Declaration of Independence, Timothy Dwight – Calvinist, grandson of
Jonathan Edwards, soon to be staunch Federalist, tutor at Yale College and,
within several decades, its president – delivered a commencement address
in New Haven full of foreboding, particularly for those among the graduates
who would choose the legal profession. What would await them? Little
but “[t]hat meanness, that infernal knavery, which multiplies needless litigations,
which retards the operation of justice, which, from court to court,
upon the most trifling pretences, postpones trial to glean the last emptyings
of a client’s pocket, for unjust fees of everlasting attendance, which
artfully twists the meaning of law to the side we espouse, which seizes
unwarrantable advantages from the prepossessions, ignorance, interests and
prejudices of a jury, you will shun rather than death or infamy.” Dwight
prayed that, notwithstanding, “[y]our reasonings will be ever fair and open;
your constructions of law candid, your endeavors to procure equitable decisions
unremitted.” And he added an historical observation:
The practice of law in this, and the other American States, within the last twenty
years has been greatly amended; but those eminent characters to whom we are
indebted for this amendment, have met with almost insurmountable obstructions
to the generous design. They have been obliged to combat interest and prejudice,
powerfully exerted to retard the reformation: especially that immoveable bias, a
fondness for the customs of our fathers. Much therefore remains to be done, before
the system can be completed.1
In one short valedictory diagnosis Dwight captured the essence of the
dilemma that would stalk the profession throughout the succeeding century.
Was law a public profession or a private profession? Did lawyers owe a special
obligation through their learning, education, role, and place in society to the
greater good of that society, or was their primary loyalty to their clients (and
by extension to themselves)? Could lawyers credibly argue the intermediate
position, that by simply representing the private interests of their clients
they also best served society?
Dwight’s address, first published contemporaneously in pamphlet form,
was later reprinted in 1788 in The American Magazine. Alongside Dwight’s
lofty sentiments there also appeared a far less elevated essay, “The Art of
Pushing into Business,” satirical advice from an anonymous author, Peter
Pickpenny (reportedly a pseudonym for Noah Webster). This essay has been
largely ignored. Nevertheless Pickpenny’s observations deserve attention,
for he too picked up on the old refrain. “Are you destined for the Law?” he
wrote. “Collect from Coke, Hale, Blackstone, &c. a catalogue of hard words,
which you may get by heart, and whether you may understand them or not,
repeat them on all occasions, and be very profuse to an ignorant client, as he
will not be able to detect a misapplication of terms.” And again: “As the
success (or profit, which is the same thing) of the profession, depends much on
a free use of words, and a man’s sense is measured by the number of unintelligible
terms he employs, never fail to rake together all the synonymous words
in the English, French and Latin languages, and arrange them in Indian
file, to express the most common idea.” And finally: “As to your fees – but
no true lawyer needs any advice on this article.”2
Peter Pickpenny in his own way reinforced Dwight’s disquisition on
the danger and temptation of the pursuit of purely private gain. Lawyers
chased their own private, selfish interest. Contrary to professional lore, they
would dupe their own clients while professing to represent them. At the
very moment that the Republic was relying on lawyers to reconstitute
the form of government, the repository of the ultimate public virtue, their
capacity for public virtue was – at least for some – in doubt. Legal ideas
were about the nature of the state and the theory of republican civic virtue,
1 Timothy Dwight, “A Valedictory Address: To the Young Gentlemen, who commenced
Bachelors of Arts, at Yale College, July 25th, 1776,” American Magazine (Jan. 1788), 99,
101.
2 “Peter Pickpenny,” “The Art of Pushing into Business, and Making Way in the World,”
American Magazine (Jan. 1788), 103, 103, 105.
but lawyers lived in the marketplace constituted by private interests.
That crucial intersection between public and private was where lawyers’
roles and reputations would be determined, rising or falling depending
on the perception and reality of whether the twain could ever properly
meet.
It is tempting to invoke for the legal profession in the century after the
Revolution the iconic category (or cliché) of a “formative” or, perhaps, a
“transformative” era. But it is not exactly clear that any such label is satisfactory.
What we know is that the legal profession evolved in some ways
and not in others. The century was clearly of critical importance in the
growth of the profession. In 1750 the bar was in many respects an intensely
local, perhaps even provincial or parochial profession, more like a guild than
anything else. By 1860 it was poised on the verge of exercising truly national
political and economic power. During the intervening years, lawyers began
to exhibit the classic signs of modern professionalism. They began to cement
control over admission to what they defined as their community, through
education (knowledge, language, technical complexity) and social standards.
They began to regulate their own behavior after admission to practice,
to shape the market for their services, and generally to enhance their status
in society. Lawyers encountered values, ideas, and self-images embedded in
a world of developing and expanding markets, increasingly at a remove from
the rhetoric of republican virtue. This new world provided both opportunity
and temptation.
Though they never missed a chance to lament their changing world,
lawyers displayed a remarkable ability to adapt to opportunity and temptation.
Their educational methods slowly altered, the numbers admitted
to the profession expanded, the organization of practice gradually shifted,
lawyers adapted their practices to legal change, and they occasionally forged
that change themselves. The profession helped reshape professional rules of
conduct to meet the demand of new marketplaces. Lawyers simultaneously
complained about change and embraced it. The public did not really understand
what they did, they said, so attacks on their behavior were misplaced.
Yet they also tried to convince the public it was wrong, or – subtly –
changed their conduct to address the criticism. The public’s skepticism
always haunted the profession, particularly as lawyers began to exercise
political power. In a society that moved in theory from trust that elites
would exercise their judgment in the best interests of all to suspicion of
the legitimacy of elites to retain or exercise power at all, lawyers believed
they had no choice but to open up their profession. Still, in a culture outwardly
unwilling to tolerate signs of special status, lawyers kept struggling
to maintain control of their own professional identity.
I. LAW AS A PROFESSION IN THE NEW REPUBLIC
The legal profession prior to the Revolutionary era is not amenable to easy
summary. Across some 150 years, lawyers in different colonies underwent
different experiences at different times. Before 1700, colonies occasionally
regulated various aspects of lawyers’ lives, from bar admission to fees. The
bar’s internal gradations and hierarchies in England (between barristers and
solicitors) did not entirely survive transplantation in America, where the
paucity of lawyers seemed to undermine the necessity of creating ranks.
Suspicion of attorneys, often as a carryover from religious life, existed in
some places. The Massachusetts Bay Colony’s system of courts and judges
flourished at times without lawyers at all – no doubt viewed by the Puritan
elders (perhaps contrary to their sensibilities) as some evidence of heaven
on earth.
By the beginning of the eighteenth century, more lawyers were entering
professional life. Lawyers followed markets for their services; they were to
be found primarily in seaboard areas where the colonial populations tended
to cluster. Accurate figures for the number of lawyers in colonial America
have not been compiled, but estimates suggest about a half-dozen in the
Pennsylvania colony around 1700 rising to at least seventy-six admitted
between 1742 and 1776; about thirty to forty in Virginia in 1680, many
more a hundred years later, and prominent and prosperous as well; about
twenty in South Carolina (primarily Charleston) in 1761, thirty-four or so
in 1771, and fifty-eight in 1776. Figures vary for New York, from about 175
from 1709 to 1776, to about 400 for the longer period from 1664 to 1788
(about 50 in New York City alone from 1695 to 1769). In Massachusetts,
there were only fifteen trained lawyers in 1740 (one lawyer per slightly over
ten thousand people); in 1765, there were fifty lawyers for a population
of about 245,000; in 1775, a total of seventy-one trained lawyers. With
an estimated population of one-and-a-half million people in the British
colonies in 1754, the numbers of lawyers were trifling, if not insignificant.
The social power and influence of colonial lawyers far exceeded their numbers.
As the colonial economy expanded, trade increased, and populations
grew, the number of lawyers followed suit. Some prospered (though others
struggled financially). More important, as the Revolution approached, arguments
both for and against independence were forged by lawyers, drawing
on their education, training, and experience. Attorneys familiar with arcane
land transactions and property rights or routine debt collections came to
represent the public face of a political class seeking revolution and independence.
Some were cemented to Revolutionary elites through marriage
and kinship networks, but other than personal ties and a familiarity with
political and historical ideas related to law, it is unclear why law practice
should have become associated with the Revolution: revolution might just
as easily be construed as a threat to law. Aware, perhaps, of the anomaly,
lawyers recast the Revolution as a purely political act that changed the form
of government, but maintained and institutionalized reverence for law. The
outcome was somewhat paradoxical. On one hand, it became accepted in
the new United States that the sanctity of law lay at the very core of civic
virtue; on the other, that the actual business of representing clients involved
in legal disputes was potentially corrupting. In public roles, lawyers might
be admired. As attorneys in private practice, they were condemned all too
often.
II. IDEOLOGY AND THE PROFESSION
In the aftermath of the Revolution the legal profession appeared in disarray.
Tory lawyers – by some estimates, 25 percent of all lawyers – fled. The
remainder struggled to adapt to a new legal environment, untethered from
the English common law and its authority. But the profession’s disarray has
been exaggerated. Though there is no question that there were Tory defections
(particularly in Philadelphia, Boston, and New York), their numbers
were rather fewer than reported, particularly in some of the new states.
As for the remainder, they quickly burnished their images in the glow of
republican ideals while grasping new market opportunities.
Lawyers’ Republicanism
To understand the social function of the nineteenth-century American bar,
it is necessary to crack the code of republicanism. Republican ideals focused
on the identification and pursuit of a public good or interest that, in theory,
was located in a shared belief in civic virtue. Private interest was to be
subordinated, and responsibility for administering the public welfare delegated
to a natural elite that would place the commonwealth’s interest above
all else. Republican governors would derive their authority from general
recognition of their character, merit, and demonstrated ability, not from
their inherited role or hierarchical position in society.
The republican ideal presented both opportunity and challenge for the
legal profession. The American version of revolution was primarily driven
by ideas. One might consider lawyers an unlikely repository of revolutionary
fervor, but patriot practitioners certainly had preached ideas – notably
separation from the crown – and were responsible, therefore, for developing
a replacement. The public good was thus deposited substantially into the
hands of lawyers; their responsibility was to frame forms of government
that would guarantee the civic virtue on which the future of the Republic
depended.
For lawyers turned politicians/statesmen, the keys were twofold: constitutions
and the common law, both envisaged as foundations for institutions
that would restrain or limit the power of the state and ensure liberty. Rules
were the purview of lawyers. Pay attention to the carefully crafted and
drafted rules distilled from the voices of experience drawn from the ages.
You will get social order and control, and avoid the threat of licentious
freedom. Or so the lawyers believed.
But the lawyers were reluctant to leave anything to chance. Here opportunity
met its challenge. The lawyers who drafted the Republic’s constitution
were afraid that the document itself, placing sovereignty in the people and
deriving its authority from the consent of the governed, might in fact be
more democratic than republican. Lacking faith in the capacity of the people
to abide by the limits of the Constitution and behave within its restraints,
the founders hence sought to create additional means to mediate between
the Constitution and its core values and popular rule; to protect the people
from their own excesses. Fifty years after the Revolution, lawyers were still
delivering anxious jeremiads that reflected on their fears for the republican
legacy with which they had been entrusted. In 1827, Lemuel Shaw, then
practicing law in Boston a few years before being appointed Chief Justice
of the Massachusetts Supreme Judicial Court, enjoined his colleagues of the
Suffolk County bar to “[guard] with equal vigilance against the violence
and encroachments of a wild and licentious democracy, by a well balanced
constitution.” Well balanced meant “a constitution as at once restrains the
violent and irregular action of mere popular will, and calls to the aid, and
secures in the service of government, the enlightened wisdom, the pure
morals, the cultivated reason, and matured experience of its ablest and best
members” – people like themselves.3 It was not enough to write the documents
and then get out of the way. Lawyers should be the checks and
balances too.
The danger was that the public would accuse lawyers of being undemocratic
for intervening in the political process, for trusting neither the constitutional
institutions they had created nor the citizens they had placed
in positions of responsibility to undertake their own wise self-government.
Ironically, however, the public documents of revolution were rule-bound.
Lawyers were positioned to interpret and apply them in two distinct capacities,
first as participants in the public process of creating rules of self-government
(laws), and second as interpreters and practitioners of law – as
3 Lemuel Shaw, An Address Delivered before the Suffolk Bar, May 1827, extracted in
American Jurist and Law Magazine 7 (1832), 56, 61–62.
providers, that is, of services to fellow citizens who were, in their own
interests, navigating the system the lawyers had themselves developed.
Here we meet the second hallmark of the post-Revolutionary profession:
its new, enhanced role in the process of dispute resolution. As the meaning
of republican virtue changed and became increasingly contested, what
emerged was a new kind of constitutional faith that interests and factions
would ultimately balance each other out and that no one interest would
ultimately dominate the polity. Given that a lawyer’s job was to represent
interests, the new republicanism dovetailed neatly with a professional
norm that insisted on pursuing the best interests of clients in an adversarial
environment. If the Constitution and the common law created a framework
within which private interest had to be recognized, who better than lawyers
to mediate between these interests by getting everyone to play by the rules,
by laws, and by procedures so that social order and not chaos would ensue?
The problem, of course, was that lawyers could be accused of fomenting
private disputes for their own personal gain and of tearing at the fiber of
society, rather than preserving it. The lawyers’ response was that they were
only representing the interests that the country’s constitutions recognized
and that they would be shirking their republican responsibilities if they did
not participate in the system of resolving disputes that helped preserve the
rule of law. There was public virtue in representing the interests of others.
But lawyers still wanted to be the “best men,” the dedicated, dispassionate
elite that would guide the Republic. Lawyers by training and education,
familiar with classical antiquity and its lessons, would form a learned profession,
a natural calling, that would replace the ministry as society’s preferred
leaders. Particularly well situated by preparation to participate in a government
of laws, attorneys could as a profession shepherd post-Revolutionary
America through daily life, or through the most trying times, just as they
had through the Revolution itself.
To accomplish all these tasks, lawyers had to maintain some control over
who was admitted to the practice of law. From an early date they emphasized
moral character as a determining factor in bar admission almost as
much as acquired knowledge. Lawyers acted as gatekeepers to the profession:
only those judged safe in the wake of the Revolution were deemed worthy
of admission and its consequent public and social responsibilities. A half-century
after the Revolution, Tocqueville captured part of this idea when he
referred to lawyers as an “American aristocracy.” Tocqueville’s observation
had many meanings, but as part of his characterization of this aristocracy
he noted “[t]hat the lawyers, as a body, form the most powerful, if not the
only, counterpoise to the democratic element.”4 Elites, independent and
4 Alexis de Tocqueville, Democracy in America, vol. 1 (Phillips Bradley ed., 1945), 278.
not dependent members of society, could be trusted to identify the true
public interest in a society reduced to competing, potentially ungovernable,
and exuberant private interests. Or at least, so the rhetoric of the bar
proclaimed. The risk was that elites could be corrupted by their own private
interests (always just around the corner in a market society) or that
the bar could be viewed through its admission control mechanisms as a
monopoly restricting opportunity or, in a related sense, could be accused of
a lack of commitment to democracy and, even worse, of resisting change or
the will of the people by asserting a preference for order. Republicanism,
then, appeared to grant post-Revolutionary lawyers three major vocational
opportunities – mediating between government and the sovereignty of the
people by fostering the public good, providing the services of dispute resolution
in a society of competing interests, and maintaining a disinterested
bar trained to exercise enlightened leadership. All, however, would turn out
to be unresolvable tensions in the life of the bar between the Revolution
and the Civil War. The difficulties they posed were played out over and over
again – in legal education; in bar admission, composition, and structure;
in the organization of practice; in law reform; in ethics; and elsewhere. The
bar never could quite escape the ambiguity of its role in American society.
Anti-Lawyer Critiques in the Republic: Defining the Public Good and the Nature of Community Under Law
Not everyone thought American lawyers were living up to the republican
creed. There has long been an anti-lawyer tradition in America, although it
is not exactly clear whether the Revolution exacerbated or eased it. But some
post-Revolutionary critics, for political and other reasons, clearly believed
that lawyers were far from paragons of civic virtue, and their attacks tried
systematically to stymie the attempts of attorneys to align themselves with
the constituent elements of republicanism. There was a certain irony to
this criticism, for it showed that lawyers’ dual capacities rendered them
vulnerable as well as powerful. Through their active participation in the
founding of the nation, lawyers had worked hard to institutionalize the
insights of republican theory as well as to situate themselves as public
representatives of it. As private lawyers, however, they could be found
wanting in a wide variety of interrelated ways that served to undermine
their carefully constructed public role.
First, there was the perpetual, vexing problem of the complexity of law.
Law in a republic ought to be accessible to all, not the special province
of experts. The more technical and complex the law – with only lawyers
qualified to administer, superintend, or interpret it – the more costly and
the less accessible it became. The call came to simplify the words and cut
the costs. One radical program, suggested in Massachusetts by Benjamin
Austin in 1786 (and republished in 1819), was simply to abolish the “order”
of lawyers altogether. Similarly, the citizens of Braintree asked for laws that
would “crush or at least put a proper check or restraint on that order of
Gentlemen denominated Lawyers” whose conduct “appears to us to tend
rather to the destruction than the preservation of this Commonwealth.”5
If the state found it impractical to control lawyers, then perhaps communities
could reduce reliance on the artifice of law as practiced by lawyers.
Lawyers’ “science,” some critics charged, cut law off from its natural roots
in justice. In the immediate post-Revolutionary generation, they proposed
ways of restoring the quasi-utopian, pristine quality of law. Massachusetts,
Pennsylvania, and Maryland all considered legislative proposals to embrace
arbitration procedures that would wrest control of the administration of
justice from lawyers and simplify the legal process. Arbitration was also
occasionally linked to court reform. As one Maryland observer noted, “The
great mass of the people have found business to proceed much faster by
mixing a little common sense with legal knowledge . . . . I know many private
gentlemen, who possess more accurate legal erudition than the majority of
attorneys, although, perhaps, not so well acquainted with trick and finesse.”6
Second, as practicing attorneys, lawyers appeared merely self-interested,
rather than interested in the public good. As the citizens of Braintree hinted,
self-interest threatened the fabric of the community by pitting citizens
against each other. At the very least, lawyers exacerbated conflicts by representing
opposed parties. Worse, as Jesse Higgins observed in 1805 in
Sampson Against the Philistines, lawyers might actually foment conflict for
their own purposes, rather than prevent or resolve disputes. In 1830, in
Vice Unmasked, P. W. Grayson stated the problem concisely: “Gain I assert
is their animating principle, as it is, in truth, more or less of all men. . . . A
tremulous anxiety for the means of daily subsistence, precludes all leisure to
contemplate the loveliness of justice, and properly to understand her principles.”7
Rather than belonging to a learned profession or a higher calling,
Grayson suggested, lawyers were now embedded like everyone else in the
marketplace. Self-interest in an increasingly atomized society was the norm,
and lawyers were no exception; in fact they seemed particularly susceptible
to falling victim to corruption and luxury.
Third was the problem of independence in a republic that rejected forms
of dependence and subordination. Lawyers were in an ambiguous position
5 Petition of the Braintree Town Meeting, Sept., 1786.
6 Baltimore American, Nov. 29, 1805 (italics in original).
7 P. W. Grayson, “Vice Unmasked, An Essay: Being A Consideration of the Influence of
Law upon the Moral Essence of Man, with other reflections” (New York, 1830).
or, perhaps, a double bind. On the one hand, lawyers represented others,
clients, so the claim could be made that they were dependent on others for
their business or that they were not independent producers, free of others,
and self-sustaining. On the other hand, one of the aspects of republican
lawyering could be construed as reserving to the attorney the right to make
independent moral judgments about the virtue of the claims of clients on
whom he depended for a livelihood. Would clients tolerate the substitution
of the will of their attorney for their own will when clients thought that
they were purchasing expertise and knowledge in the marketplace? Was
the independence so prized by republican theorists fated to be eternally
compromised by the social function of lawyering? And what about the perceptions
of clients? In a society that valued independence, would clients
resent being in the thrall of their lawyer, who possessed a grip on language
and technicality? In a society that talked openly about the promise of equality,
clients might chafe if they were placed under the protection of another
person, dependent on his expertise.
It was equality, finally, that caused lawyers their most pressing ideological
problem. Republicanism required selfless, educated, virtuous elites to
lead and govern. Lawyers thought they were well suited to the task. They
had forged connections or networks with other elites through marriage or
kinship and also through business and economic transactions, which nevertheless
contributed to the image of attorneys as dependent. Moreover,
obsessive and risky land speculation led some lawyers into financial distress.
Yet in a society that also valued the equality, individuality, and independence
of its citizens, pretensions to leadership suggested pretensions to
aristocracy and hierarchy. Lawyers had not been elected in their professional
lives, though the charge was that they acted as if they were. (In Jacksonian
America, this insight helped fuel the move to elect judges.) In their public
lives they did often serve in elected political office, shaping public policy
in part through their legal insights, not associating overwhelmingly with
one political party or ideology.
Inevitably the bar’s claims to elite status became caught up in the maelstrom
of Jacksonian democracy. In 1832, Frederick Robinson jeered at
lawyers’ pretensions: “And although you have left no means unattempted
to give you the appearance of Officers, you are still nothing more than followers
of a trade or calling like other men, to get a living, and your trade like
other employments, ought to be left open to competition.”8 Though his
words were anti-monopolistic in tone, with implications for the educational
8 Frederick Robinson, “Letter to the Hon. Rufus Choate Containing a Brief Exposure of
Law Craft, and Some of the Encroachments of the Bar Upon the Rights and Liberties of
the People” (1832).
and admissions process, the heart of the matter was equality of opportunity.
Should the profession be less of a closed fraternity with the de facto right to
exclude entry, particularly if the bar was associated with economic power?
The common law was not supposed to be mysterious, but available to all.
The Constitution was supposed to apply to all. So the legal profession should
be open to all – though whether increasing the number of lawyers was a
good idea or a bad idea seemed not to concern Jacksonian theorists, any
more than the question whether simplifying the law through codification
would cause lawyers to behave any differently once they were trained in the
mysteries of the craft.
Though criticism of lawyers was widespread, it was not crippling. In
some places, indeed, it seemed rather muted. Lawyers did not appear to be
viewed as a particularly potent or threatening social or political force in
Southern society. Their reputation, perhaps more myth than reality, placed
them in a rather more genteel classification: well educated, well read, tied
more closely to the planter elites and their culture, more interspersed in
society, and often practicing law only when it suited them. Prosperous
Southern lawyers often invested in land, plantations, and slaves, seamlessly
blending with their culture rather than standing apart from it. Perhaps
there was a lesson in their experience for other lawyers. Their major moral
challenge was their involvement in a slave society, but most seemed to
concern themselves simply with administering the system internally, coping
with its contradictions and inconsistencies, rather than spending much
time, at least at first, defending the system from external ideological attacks.
They just acted like lawyers.
So both lawyers embracing republicanism and republican critics of
lawyers helped shape the contested images that would follow the profession
in various forms and elaborations throughout the century. Was it declining
or was it rising? Was it a learned profession or a business? Was it selfless or
self-interested? Was it public spirited or private oriented? Was it political
or apolitical? Was it independent or dependent?
III. THE EDUCATION OF LAWYERS: THE SEARCH FOR LAW AS A SCIENCE IN A REPUBLIC
Apprenticeship
For most of the eighteenth and nineteenth centuries, the overwhelming
majority of American lawyers were trained by other lawyers through what
was known as the apprenticeship method, a method apparently conceived
of as if its purpose was to train fledgling artisans in the mysteries of a craft or
guild. Special knowledge and expertise were to be imparted by those solely
in control of that knowledge to those wishing to enter a “profession” that
was responsible for defining itself. Admission to the educational process
was tantamount to admission to the profession, because the standards for
bar admission were primarily established by the bar with occasional supervision
by the courts. Those standards tended to prescribe a period of time
“reading law” in an attorney’s office, followed by only the most rudimentary
examination by a judge. Whether one obtained a solid legal education was
mostly fortuitous. There was not much method to the process, scientific or
otherwise.
By necessity, therefore, almost all legal education was local. Potential
students – often by dint of personal association or friendship, community,
and family – enlisted in the offices of attorneys in their towns or metropolitan
areas and agreed to pay a tuition of $100 or $200 (waived on occasion)
to receive an “education.” Though the education was decentralized, it was
remarkably uniform. From the late eighteenth through the first quarter of
the nineteenth century, law students began by reading primarily what their
mentors had read before them. The process often started by reading general
historical and jurisprudential works focusing on the feudal origins of
law, or the law of nations or natural law. From the general, the educational
program moved to the particular. The great advance in legal education at
this time was provided by Blackstone’s Commentaries, absorbed by generations
of grateful law students. Arranged by systematic legal categories, the
Commentaries provided complex yet concise insights and an overview into
foundational legal principles. Blackstone also took the pressure off lawyers
actually to teach, allowing them to carry on business (also using the cheap
labor supplied by students), which the students might also observe.
After reading, students were expected to organize their own knowledge.
They did this by compiling their own commonplace books, which distilled
their readings into accessible outlines. Whether learning lessons from
Blackstone (or St. George Tucker’s later American version of Blackstone,
or Kent’s own Commentaries) or copying the writs, declarations, or answers
of the attorneys in whose offices they read, the students, often unsupervised,
in theory assiduously mastered the accrued lessons of the past filtered
through the remarkably similar experiences of their teachers in the present.
As a result, a certain regard for tradition, continuity, and timelessness was
transmitted. Over time, the educational process was enhanced as judicial
decisions became more available through case reports and legal treatises on
more specialized subjects were published. Even then, a student’s exposure
to these materials was often at the mercy of the library of the law-office
attorney.
A legal education could be haphazard, and as many students complained,
it was almost always drudgery. At the dedication of the Dane Law School
(Harvard) in 1832, Josiah Quincy described the current form of legal education
in need of reform. “What copying of contracts! What filling of writs!
What preparing of pleas! How could the mind concentrate on principles.”
Books, said Quincy, “were recommended as they were asked for, without
any inquiry concerning the knowledge attained from the books previously
recommended and read. Regular instruction there was none; examination as
to progress in acquaintance with the law – none; occasional lectures – none;
oversight as to general attention and conduct – none. The student was left
to find his way by the light of his own mind.” The result was boredom, inattention,
ignorance. “How could the great principles of the law . . . be made
to take an early root . . . by reading necessarily desultory . . . and mental exercises
. . . conducted, without excitement and without encouragement, with
just so much vagrant attention as a young man could persuade himself to
give. . . .” 9
Reading law, therefore, was thought of as a practical education, technical
learning by osmosis, but an education where acquiring the principles of
the mysterious science was left to chance. There was much unhappiness
with the methodology, but precious little change or thought about change.
A prospective student understood that the critical step in the process was
finding an office in which to read because, by rule or custom in most places,
after three years or so of tedious endurance, bar admission would result. The
bar decided who was worthy enough to enter the profession. The members of
the bar were primarily a homogeneous group, and they generally rewarded
those who were striving and seeking opportunity. To read law, one did not
have to come from a wealthy family (merchants or planters), and though
wealth helped a young man get accepted into a law office and pay his tuition,
plenty of farmers’ or ministers’ sons found their way there. Also, having
some form of undergraduate college education clearly helped – indeed,
over time in some jurisdictions, the bar rules would require some formal
education. But the search for organizing principles and alternative methods
was only beginning.
University Law Professors
University law lectureships and professorships never really flourished in
immediate post-Revolutionary America. Seeking to emulate Blackstone’s
success as Vinerian Professor of Law at Oxford, a small number of universities
created professorships or chairs. The experiment began, thanks to
Thomas Jefferson, with George Wythe’s 1779 appointment at William and
9 Josiah Quincy, “An Address Delivered at the Dedication of the Dane Law School in
Harvard University, October 23, 1832.”
Mary as professor of “Law and Police.” (Wythe would be followed in the
position by St. George Tucker.) Wythe’s professorship, mostly because of his
gifts and intellect, was to be the most successful of these attempts at legal
education, but other examples abound in the 1790s – from the important
law lectures of James Wilson at the University of Pennsylvania and James
Kent at Columbia (though after the initial ceremonial lectures, interest and
students seemed to wane), to David Hoffman’s ambitious undertaking at
the University of Maryland in 1814. Along the way Harvard, Virginia, and
Yale began offering undergraduate law courses that over time evolved into
university law schools.
In addition to signifying discontent with the apprenticeship custom, all
these fledgling programs had one purpose in common. The lectureships
stemmed from a conviction that law was to be a learned profession and that
law, if not yet a science, was certainly one of the liberal arts essential to
teaching and learning about the nature, place, and role of civic virtue in a
newly minted republican society. If this society was to be self-governing,
it needed to educate an elite that would understand the lessons of the past
and devise institutions, legal or otherwise, to prevent the mistakes of the
past from recurring. So, although there was some discussion of practical
legal detail, the emphasis was on organizing knowledge about the law to
establish an impact on the shape of society. Society would not be safe without
republican lawyers.
Proprietary Law Schools
Proprietary law schools arose in the United States to fill a perceived vacuum.
No one was teaching principles, and the grasp of the practical was assumed to
flow seamlessly from observation and repetition. Lawyers also for the most
part could superintend only a handful of students, and if no lawyer was
available, a prospective student might have to travel or be inconvenienced.
Monopolizing students might be frowned on, and so in some senses, it
might be more efficient to form a school to attract a variety of students
from a variety of places, particularly if the school could guarantee that they
would be getting plenty of attention, organization, and books they might
not find elsewhere.
Such was Tapping Reeve’s insight and gamble when he founded the
Litchfield Law School in a little corner of Connecticut in 1784. Reeve,
eventually in partnership with James Gould, and finally Gould by himself,
trained about one thousand lawyers (many of whom served in important
positions in politics and law) before their doors closed in 1833, perhaps as a
result of the competition emerging from university law schools, particularly
Harvard.
Reeve and Gould offered rigor, supervision, and lectures. Over the course
of fourteen months, students heard ninety-minute daily lectures organized
around legal principles (not just the mindless rote of rules), recorded their
lessons in notebooks, took weekly tests, and participated in forensic exercises.
The measure of Litchfield’s success is that though students were drawn
primarily from the New England and mid-Atlantic states, the school’s reputation
was such that despite its relative isolation and Federalist proclivities
about 30 percent of its enrollees were from the South, including John C.
Calhoun.
Litchfield’s reputation inspired other attempts by lawyers and judges to
earn a living teaching law or to supplement their income by aggregating
student apprentices. None of these efforts achieved the same level of broad
acceptance and intellectual impact as Litchfield. But they appeared and then
disappeared with various degrees of seriousness in localities in Virginia,
North Carolina, New York, Massachusetts, and elsewhere. Their lack of
infrastructure and financing, coupled with the slow reexamination of the
ideas justifying the forms of legal education, eventually led some to believe
that the place for the education of lawyers belonged in a university that
could justify professional, as well as undergraduate, training.
University Law Schools
Joseph Story had such a vision. In 1829, beginning with the remnants of
the endowed law professorship established at Harvard College more than
a decade earlier, Story sought to transform the nature of legal education.
A simple law professorship would no longer do; Litchfield showed that.
Apprenticeship left too much to the risks of mentor inertia and student
indifference. What was needed was systematic endeavor demonstrating that
law was a science and belonged in a university. The question was, what kind
of science. Story preferred to see law as a set of ideals, stressing universal
principles of natural justice, spanning the ages and ever appropriate. Law
was a moral science designed to guide human behavior. It was important,
he thought, in a republic to develop a cadre of specially trained guardians,
“public sentinel[s],”10 to protect, as Lemuel Shaw had put it, against the
excesses of democracy. Ever mindful of the spread of Jacksonian democracy,
Story wanted to guarantee that lawyers would retain strong moral character.
If lawyers could not control admission to the profession, they could at
least control the content of a lawyer’s education. Republican virtue must
be perpetuated, sound principles enunciated clearly, governing standards
declared. Training lawyers also meant sending them out into the world.
10 Joseph Story, “Discourse Pronounced Upon the Inauguration of the Author as Dane
Professor of Law in Harvard University (Aug. 25, 1829).”
Timothy Walker, who founded the Cincinnati Law School in 1835 in the
image of Story and Harvard, was one. If they came to Harvard to learn,
Story wanted them to populate the nation as missionaries.
Story’s reach was national. Systemization of thought for him meant forming
and shaping the general legal categories with which to organize a legal
literature designed to tell lawyers how to believe or act. Story contributed
greatly to his cause by writing innumerable legal treatises, ranging from
Commentaries on the Constitution to various technical legal subjects. For Story,
theory was practical. Harvard Law School’s fortunes rose during Story’s
tenure and entered a generation of decline upon his death in 1845.
Not all who sought refuge in a university legal education shared Story’s
vision. Different ideas about the nature of legal education and legal science
flowed from the rationality preached by the philosophers of the Scottish
Enlightenment. Tied to what became known as Protestant Baconianism,
which was rooted in natural theology and eventually the natural sciences, the
recommended method was one of taxonomical classification that organized
knowledge and the acquisition of knowledge from the bottom up around
readily recognizable first principles, instead of from the top down, as Story
advocated. The Baconian system of legal thought – espoused by lawyers and
law professors like David Hoffman, David Dudley Field, George Sharswood,
and, most ominously for Story, his colleague at Harvard, Simon Greenleaf –
was supposed to be verifiable and vaguely empirical. This method, because
it was more scientific, arguably had the virtue of being able to adapt better to
social change. Story did not much like change, particularly political change.
The loosely knit Protestant Baconians wanted to adapt law to American
experiences (an idea that Story in theory was not opposed to) and to release
it from its perceived dependence on pre-Revolutionary British common
law. Law needed to be explained as a science, not simply as a faith. Seeking
to train lawyers based on these principles, the Baconians saw lawyers more
as specialists or experts, technocrats providing a service in the marketplace,
though they retained concerns about the moral responsibility of lawyers.
Story apparently was afraid that unless his method was used, the republic,
founded under the stewardship of lawyers, would fade away, and that lawyers
would no longer be part of a learned profession and noble calling. And
indeed, the face of the profession was gradually changing, just as Story
feared.
IV. THE GROWTH OF THE PROFESSION
Over the course of the nineteenth century, lawyers, in conjunction with
courts, gradually lost whatever control they had over admission standards
and practices. In 1800, fourteen of the nineteen states had requirements of
between four and seven years of bar preparation. By 1840, only eleven of the
thirty states insisted on prescribed periods of study. In jurisdictions like
Massachusetts and New York, before the liberalization of rules governing
admission, it might take a prospective lawyer up to a decade (including a
fixed period of college education) to qualify for admission. By mid-century
that had changed drastically. Good moral character with a shortened period
of study or an examination became the standard in Massachusetts in 1836.
New Hampshire in 1842 and Maine in 1843 required only evidence of
good moral character, without any prescribed period of study. By 1860,
just nine of the thirty-nine states insisted on any period of study. University
legal education, which promised to help filter entry to the profession, was
still slow to gather momentum, with about fifteen university law schools
operating in 1850. These changes fed concerns about the composition of
the bar that reignited disputes within the profession and the public over
the proper place of lawyers in American society.
The bar was forced to open up under pressure from forces loosely associated
with Jacksonian democracy that produced leveling arguments coupling
equality of opportunity with suspicions about elites. The relaxation
of admission requirements has often been bemoaned in the literature of the
profession as marking a period of great decline. But it is difficult to determine the
baseline against which to measure the fall from grace, or to assess precisely
how many lawyers were entering practice, or who they were. The traditional
view was that the bar was a meritocracy (thereby ensuring its status as an
honestly earned craft or guild or elite). In 1841, St. George Tucker observed
that “the profession of the law is the most successful path, not only to affluence
and comfort, but to all the distinguished and elevated stations in a
free government.”11 On the other hand, lawyers from John Adams onward
had expressed concerns that increasing numbers of lawyers meant more
unscrupulous, untrained pettifoggers clogging the courts, stealing clients,
and leading the public to believe all attorneys were mendacious predators;
and that, even worse, the practice of law had become a mere business.
There are very few reliable statistics on the number of lawyers in the
United States between 1790 and 1850; most of the evidence is fragmentary,
scattered, and anecdotal. Before 1850, there are limited data on the number
of lawyers in some locations at some specific times. The federal Census of
1850, however, broke down occupations by location or state. It recorded
just under 24,000 lawyers nationwide, almost half of them in only five
states: Massachusetts, New York (with nearly 18 percent of the total alone),
Ohio, Pennsylvania, and Virginia. And not surprisingly, by mid-century
more lawyers were pushing west into Indiana, Illinois, and Wisconsin.
11 [Henry St. George Tucker], Introductory Lecture Delivered by the Professor of Law in the
University of Virginia . . . 8 (1841).
As to the number of lawyers as a proportion of the general population,
“[b]etween 1850 and 1870, the ratio was fairly steady: 1.03 lawyers to every
1,000 population at the beginning and 1.07 at the end.”12 If one compares
these data with numbers on the eve of the Revolution, it is clear that by
1850 many more men were entering the legal profession, and the relative
number of lawyers in proportion to the general population had increased
significantly. Indeed, lawyers in some places complained that the profession
had become overcrowded and was degenerating into a mere business, while
anti-lawyer critiques decried the “swarms” of lawyers. But in places like
New York, the increased number of lawyers might have been a consequence
of the accelerating pace of market expansion and trade, as well as the growing
complexity of legal practice. And in any event, the impact of lawyers on
public policy and political activity may have been disproportionate to their
absolute or relative numbers. So, was the ubiquitous lament about the
overcrowding and decline of the legal profession in part a complaint about
who the new lawyers were?
Only a few brief demographic snapshots analyze data about lawyers’
social class and status over the nineteenth century. The two most extensive
studies are of Massachusetts and Philadelphia. For Massachusetts, Gerald
Gawalt found that, of 2,618 trained lawyers practicing in Massachusetts and
Maine between 1760 and 1840, 71 percent held college degrees. Admittedly,
Massachusetts was hardly the frontier, but in a period when a college
education was the exception and not the rule, these data seem scant evidence
of the decline of the profession, at least in that state. Gawalt also
found that over time most college-educated lawyers in Massachusetts and
Maine came from professional families and often were the sons of judges
and lawyers. It seems fairly clear that, at least for this period, Massachusetts
lawyers retained the gloss of an educated elite, not necessarily upper class,
but solidly grounded in the community. A narrower sample of lower federal
court judges from 1829 to 1861 also indicates the judges were generally
from the educated middle class. Western or frontier lawyers, drawn from a
different cohort, seem to have been from more humble origins.
In Philadelphia, the story was a little different. From 1800 to 1805,
68 percent of Philadelphia lawyers were college graduates, and 72 percent
came from elite families. By 1860, the number of college graduates in the
profession had fallen to 48 percent. The pool of prospective lawyers, meanwhile,
had expanded. Upper-class representation declined from 72 percent
to 44 percent. Where middle-class families were only 16 percent of the
12 Terence C. Halliday, “Six Score Years and Ten: Demographic Transitions in the American
Legal Profession, 1850–1960,” Law & Society Review 20 (1986), 53, 57. Incidentally, the
ratio “rose steeply to 1.5 in 1900, but then contracted to 1.16 in 1920.” Id.
sample in 1800–1805, now they were nearly half. Twenty-seven percent
came from the artisanal and lower middle class. The lower-class group
remained steady over time at 12 percent.
The appearance of a more heterogeneous profession where once there had
been a more homogeneous legal community might explain some of the bar’s
rhetorical crankiness or anxiety about its status. However, agitation about
lost status and lost community did not necessarily translate into reduced
authority. The middle class was not preaching revolution, just access. This
also meant that, as more individuals were engaged in an expanding economy,
new markets for legal services would be created. One of the paths to
enhanced market participation was to represent those touched by the same
invisible hand. Most young lawyers sought entry to the profession to build
their own lives, not threaten others.
In any case, there were clear limits to the bar’s diversity. At this time, for
obvious reasons, the profession remained overwhelmingly white, male,
and Protestant. A handful of African Americans became lawyers before the
Civil War. Only about six were admitted, beginning with Macon Bolling
Allen in Maine in 1844. Allen left Maine within a year, apparently clientless,
and was admitted to the bar in Massachusetts in 1845. Robert Morris, Sr.,
followed suit in Massachusetts in 1847, where he established a thriving
practice, best remembered for his failed quest to desegregate the Boston
public schools in 1848–1849 in Roberts v. City of Boston.
Women fared worse. There is some evidence that women on rare occasions
appeared in court on their own or others’ behalf, but no women were
admitted to the practice of law before the Civil War. Belle Mansfield was
admitted to practice in Iowa in 1869, and shortly thereafter, Myra Bradwell,
fresh from having founded the Chicago Legal News, passed her bar exam, only
to be denied admission, thereby starting the path to the constitutional decision
barring her entry to the profession. At this stage, the weight of gender
stereotypes was apparently too much to overcome.
By 1860, the bar was growing, with only a few cracks in its facade of social
class. It was now a symbol of aspiration, and if indeed it was a higher calling,
why would the bar complain about all those aspiring to enter? Anxious
about losing status and professional control, the bar continued to fret. For
all its concerns, its hegemony over law never really seemed threatened. However,
immigrants, non-Protestants, racial minorities, women, and poorly
educated supplicants were looming just over the horizon.
V. THE ORGANIZATION OF PRACTICE
Wherever a lawyer during this period might have been located – New
England, Mid-Atlantic, Midwest, South, West, or the so-called frontier,
Southwest, or eventually the Far West, urban or rural – the chances were
overwhelming that he was a solo practitioner. And it was just as likely
that he was a generalist, prepared to take any business that walked in the
door. “In this country,” one lawyer commented in 1819, “there is little
or no division of labour in the profession. All are attornies, conveyancers,
proctors, barristers and counselers. . . . It is this habit of practical labour,
this general knowledge of business, which connects the professional man in
this country with all classes of the community, and gives him an influence,
which pervades all.”13 The realities of practice thus also determined the
place of lawyers in American society.
A lawyer had to have some facility in pleading and litigation (though
just exactly how much litigation actually went to trial is unclear), and the
dimensions of a lawyer’s expertise might be tested by where he practiced. For
example, if a lawyer practiced on the frontier or the old Northwest, or parts
of the South, or interior New England from 1790 to about 1820, unless
he was anchored in a small metropolitan area, he probably rode circuit;
that is, took his business on the road following the terms of the courts as
they circulated throughout the jurisdiction. Thus, the typical lawyer faced
a number of challenges. First, he probably did not know most of his clients
until he met them. Second, he had to be a quick study, or at least capable
of reducing the great mass of business into a routine processing mode
(often just filing actions to preserve claims, particularly debt collections, or
appearing to defend them). Third, he had to be nimble on his feet. He had to
go to trial with little or no preparation, so some forensic ability might help,
including an aptitude for shaping or developing a narrative – telling a good
story. Rhetoric might or might not get in the way, although there is some
evidence that the trial calendar was treated in some locations (rural as well
as urban) as local theater or entertainment. Fourth, a lawyer’s reputation
was treated as a kind of roving commission: the success of his business
depended on his perceived performance. Last, he had to be willing to travel
with and develop a tolerance for a group of fellow lawyers and judges. In
the absence of bar associations in most places, lawyers boarded and bonded
with one another in all kinds of settings. There is a fair amount of bar
and other literature heralding the brotherhood of lawyers – looking out for
each other’s business, for example. There are also accounts of boisterous and
occasionally violent confrontations between lawyers in the South and West,
which sometimes are cited as evidence of their community.
As courts became more centralized, one shift in the method of practice
over the century was the reduction of circuit riding, sometimes over the
complaints of those who found it difficult geographically to gain access
to courts. Though transportation networks were expanding, judges and
13Warren Dutton, “An Address Delivered to the Members of the Bar of Suffolk . . . 1819,”
6–7.
lawyers tended to withdraw from traveling, or at least circuit riding was
no longer a central identifying feature for some of the bar. Over time in
some places in Tennessee, Ohio, and Michigan, lawyers went to clients
or physically searched for clients less often; rather the clients came to the
lawyers. The market for services had changed.
Attorneys had another reason for choosing solo practice: there was not
enough business to support more extensive office practices. Most lawyers
made a decent enough living. Some struggled. A few became very rich. By
1830 Lemuel Shaw was earning between $15,000 and $20,000 annually, a
great deal of money in those days. Alexander Hamilton, Daniel Webster,
and others also made large sums of money. In New York and in some of
the eastern seaboard cities from North to South, lawyers in practices tied to
economic expansion and organization prospered by investing or by serving
as corporate officers, bank directors, or trustees. Nonetheless, in 1856 John
Livingston reported in his national survey of the bar that a lawyer’s income
averaged about $1,500 per year, less than most appellate judges’ salaries at
the time.
This general “sufficiency” does not mean the bar was not stratified. It
was, not formally as in England, but by income. Age was one factor. In
some places attorneys when first admitted were limited to practice only
in lower courts for a probationary period. Young lawyers tended to move
west to seek opportunity and avoid competition. The income hierarchy was
differentiated further over time in some locales, like cities, based on what
courts a lawyer practiced in – courts of criminal jurisdiction, for instance, as
opposed to appellate practice. The primary marker of stratification, however,
was the lawyer’s clients. For a profession that valued its independence, it
was remarkable to see a de facto classification of lawyers emerge based on
whom they represented.
Closely examined, a simple debt transaction reveals the initial layers of
the profession. On one side would stand the debtor, typically starved for
cash. His lawyer would spend his time ascertaining the circumstances of
the transaction, gathering the documents (probably limited and primitive),
responding to pleadings (usually mechanical and rote), but often seeking to
postpone the payment or judgment for as long as possible so his vulnerable
client could remain afloat. The lawyer had few economic resources or legal
strategies available, and he often negotiated or corresponded with the creditor’s
attorney from a position of weakness. His fee was likely to be small and
difficult to collect. He scrambled for business and was fortunate if he could
bargain with his opponents to renegotiate the terms of the transaction or
arrange a settlement.
The creditor’s lawyer, by contrast, operated in a much more stable environment.
Legally protected in most circumstances, the creditor asserted his
rights in the transaction from a position of strength. His lawyer behaved
accordingly. He also evaluated the factual setting, counseled his client,
negotiated on his behalf, and prepared the pleadings. But the underlying
economic circumstances of the creditor were likely, though not always, to
be better than the debtor’s. Securing a debt for a relatively wealthy client
was very different from scrambling to avoid paying a debt for a client
with more limited resources. The creditor’s lawyer, further, might have
been specifically retained with fees to pursue the debt – mercantile firms
or banks paid handsomely for lawyers’ services, particularly in urban settings,
and particularly as, over time, the transactions both in the original
counseling and drafting phases became more complex and sophisticated.
Thus, although lawyers might share a professional identity of sorts, on any
given day, they might be doing very different things with very different
consequences growing out of the same subject matter. The different forms
of legal practice became segmented over time through repetition. They also
became stratified as what the lawyer did increasingly reflected the wealth
of the clients he represented.
Over the course of the century, the wealth of the client was more likely to
be corporate, not individual. Here lay major engines of stratification – retention
and repetition. An individual landowner might need an attorney occasionally
for buying and selling land or arranging leases or easements. But
what if a newly chartered railroad sought to take or cross his land? Suddenly
the quiet enjoyment of his property became a major problem. Meanwhile
his attorney – attuned to bread-and-butter miscellaneous disputes or minor
property matters – might find himself confronting a new phenomenon, the
retained railroad lawyer, professionally sophisticated and with substantial
client resources at his disposal. The railroad attorney might have helped
draft the legislative charter creating the enterprise (and lobbied for it politically
with fellow lawyers), arranged and secured financing for the project
(and drafted those documents as well), fended off the claims of competing
roads or corporations (with their own retained attorneys),
overseen the eminent domain or taking proceedings before nascent
administrative bodies or the courts, negotiated complex deals, and generally
dealt with a host of railroad-related matters. Hired railroad attorneys
were very polished repeat players in the expansion of economic development
projects. Landowners and their generalist lawyers were not. The enterprise
or corporate lawyers tended to separate themselves from other strata of the
profession through their specialization and economic success and, therefore,
exercised more social and political power.
The emergence of a segmented and stratified profession was reinforced
by social kinship and family networks. Bar elites cemented their social and
political status and power by alliances with entrepreneurs: lawyers’ families
were often connected by marriage to fledgling industrial capitalists in New
England, or to the owners of large manorial land holdings or mercantile
interests in New York, or to banking or insurance interests in Philadelphia,
or to planter elites in Virginia and South Carolina. Lawyers representing
other constituencies tended to identify with them. Though its republican
rhetoric asserted that the bar should have been monolithic, in practice it
was not, giving rise to concerns about the profession’s integrity.
Identification of lawyers with or by their clients had a ripple effect on
contested views about ethical norms. If a lawyer had close social ties with his
clients, zealous advocacy on their behalf could be assumed – it would seem
to follow naturally from the perception that the moral universe or behavior
of client and lawyer tracked each other. Hence there would be little need
to question the acts of representation undertaken, thereby enhancing the
lawyer’s professional discretion. A lawyer who did not have the luxury of
representing clients with whom he was economically or socially associated
could not risk developing a reputation for less than complete devotion
lest he endanger his future prospects. A lawyer who had to devote himself
to a client in order to maintain that relationship or to forge new ones lacked,
in other words, such discretion. Yet ultimately, whether a lawyer was comfortable
representing interests he found congenial or was economically dependent
on his client and therefore zealous, the organization of practice tended to
inhibit the ethical standards of disinterested republicanism, or at least to dull
the lawyer's appreciation of the marketplace tensions that reduced those standards'
relevance.
During the century before the Civil War, social changes slowly occurred
in the nature of practice. Partnerships emerged, though they were still the
exception and not the rule, and may not have been very stable. Typically a
partnership was composed of two attorneys who, because of the increased
press of business, divided their responsibilities between litigation and
office practices; the so-called office lawyers dealt with a growing diversification
of practice, no longer just pleading, trial preparation, and jury work.
Drafting instruments, planning transactions, and advising clients as to what
was possible and what was not became the province of the office lawyer, who
rarely entered a courtroom. Sometimes partnerships were formed between
older and younger attorneys – the younger at first managing the office
and preparing documents – somewhat akin to the apprenticeship relationship.
The move toward partnerships tended to signal a recognition of the
increased pace and complexity of practice.
Combining forces paved the way for another shift in practice, a subtle
move toward specialization. There had always been pockets of specialization.
The Supreme Court bar, composed of lawyers like Pinkney, Webster,
and Wirt, was known for its oratorical skills in appellate advocacy. Trial
lawyers like Rufus Choate or Lincoln were renowned for their forensic and
rhetorical skills. But now specialties along the lines of specific areas of law
began to emerge; they were technical, complex, and narrow. For example,
bankruptcy law experts, still mostly solo, developed around the short-lived
federal Bankruptcy Acts. Lawyers who might once have been generalists
now devoted more of their time to one subject, where their talents and
expertise could be honed more and more by repetition and familiarity.
There were maritime lawyers, insurance lawyers, railroad lawyers, patent
lawyers, finance lawyers, bank lawyers, factor and agent lawyers, and creditor
lawyers – all primarily devoted to stoking the engine of economic
development, and many focused on moving money as well as goods and
services. In a number of ways, specialization fed the segmentation of the
profession. As we have seen, the economic status of the client helped define
the social and professional status of the attorney.
Increasingly, lawyers tended to cluster in cities. Eventually, particularly
after the Civil War, the cities would become home to larger law offices
and law firms as demand for complex work across a variety of legal services
exceeded the capacities of individual attorneys. Law practice was slowly
forced to adapt to meet the multiple needs of clients in an interdependent
world. Representing complex organizations in the various facets of their own
corporate lives or in legal relationships with other complex organizations
required more than one or two lawyers. The division of labor between
litigation and office work was no longer sufficient: office work in particular
could involve a whole new wave of planning and drafting demands, and
integration with the world of commerce and enterprise.
Lawyers were skilled, if not always at shaping markets, at least in adapting
to them. The organization and structure of practice moved fitfully toward
life in the post–Civil War economy: more urban, less rural; more industrial,
less agricultural; more expansive and interconnected, less local and isolated.
Solo practitioners remained the backbone of the profession in numerous
small communities, but the idea of the law firm was slowly taking shape.
VI. LAW AND LAWYERS
On one matter, most lawyers of any intellectual stripe between the Revolution
and the Civil War could agree: law either was a science or should
be a science. But exactly what the meaning of science was or what consequences
flowed from law being a science was deeply contested. The critical
question was the relationship of law as a science to civic virtue. The republican
lawyers and their ideological descendants, the Federalist-Whig elites,
strove mightily to capture the high road of the rhetoric of law as a science
and, therefore, to seize and define the terms of the debate.
The Science of Law and the Literature of Law
For most republican lawyers, establishing legal science became a crucial
organizing idea in the republican program, whether in legal education or
political engagement. It was, they thought, the special responsibility and
province of educated lawyers to ensure that private and public decisions
were grounded in or sanctioned by the solid principles of law verifiable as
a science. Precisely what this meant was a little unclear, but certain basic
principles seemed generally accepted. First, law was a product of reason
rather than passion, and therefore restrained the base or corrupt instincts of
man. Second, law could be derived from principles that could be deduced in
a systematic and orderly fashion from the mother lode of the common law,
which was in turn derived from reported appellate cases. Third, law meant
stability, order, certainty, and predictability as, over time, it developed
culturally sanctioned norms or rules that tended to resist change, but were
capable of slowly adapting to measured progress that would serve the greater
public good. Others might have a different definition of the science of law.
Jacksonians found the science of law to be a political science, grounded in
positive law, the will of the people. Protestant Baconians found the science
of law in natural theology filtered through the Scottish Enlightenment,
preferring the methods of inductive natural science to deduction. But the
republican vision of law dominated the debate, and every competing theory
began by positing an alternative to it. Once generally embraced, how did
the idea of legal science contribute to the formation of the literature of the
law? The impact can be measured in three developments in the literature:
law reports, legal treatises and commentaries, and legal periodicals.
The proliferation of American law reports was both a response to the
demand from the profession for certifiably “decided” law and a result of
its need for a reflective distillation of the rapidly increasing numbers of
judicial decisions. The first reporters in the late eighteenth century were
entrepreneurial actors meeting a perceived market; by the early nineteenth
century the states and the federal government had begun to commission
official law reports. Judicial reports satisfied the profession’s demand for
indigenous American law to reduce reliance on English precedents and to
cope with the vast expansion in market activity that was a hallmark of the
Early Republic.
In 1807, at the outset of the growth of law reports, a young lawyer named
Daniel Webster, reviewing a volume of reports for a literary journal, made
explicit the connection between case reporting and legal science:
Adjudged cases, well reported, are so many land-marks, to guide erratick opinion. In
America the popular sentiment has, at times, been hostile to the practice of deciding
cases on precedent, because the people, and lawyers too, have misunderstood their
use. Precedents are not statutes. They settle cases, which statutes do not reach. By
reference to books, an inquirer collects the opinions and arguments of many great
and learned men, on any particular topick. By the aid of these, he discovers principles
and relations, inferences and consequences, which no man could instantaneously
perceive. He has, at once, a full view of his subject, and arrives without difficulty,
to the same conclusion, to which, probably, his own mind would in time have
conducted him by a slow and painful process of ratiocination.14
In the canon of republican legal science, the identification of precedents from
which followed order and stability was necessary to forestall incursions of
“popular sentiment.”
The second development in the literature of law was the appearance
of commentaries and treatises, some as American versions of English editions,
but increasingly over time, purely American volumes on various
specific legal subjects. Blackstone had provided the model for the organization
of legal knowledge for Americans, and he was emulated first in St.
George Tucker’s version of Blackstone in 1803, which sought to provide an
American legal and political adaptation, and then by James Kent, whose
four-volume Commentaries were published between 1826 and 1830. But the
general classification of principles for study and application, though invaluable,
needed supplementation as law practice became more varied and, in
some manner, more technical. Lawyers wrote treatises covering in depth a
range of subjects: water rights, corporations, insurance, evidence, contracts,
damages, and international law. Most prominent among the treatise writers
was Joseph Story, who wrote on the Constitution, equity, bailments, agency,
partnership, promissory notes, bills of exchange, and conflict of laws. Each
work in its own way conveyed Story’s view of legal science, mining the
common law and wider sources – if necessary the civil law or the law of
nations – to derive legal principles from the historical foundations of law.
In a sense, Story preempted the field of treatise writing as well as providing
an American model. And he presided over a rejuvenation of legal writing,
though it might be a conceit to call it a “literature.” Between 1760 and
1840, almost 500 legal monographs (approximately 800 editions) were
published in the United States, only about 90 of them (125 editions) in
the period up to 1790. (The figure does not include case reports, codes,
statutes, digests, legal periodicals, or most miscellaneous pamphlets like
bar orations or discourses.) Lawyers were reaching out for guidance, and
Story entered the field to ensure that the guidance conformed to his view
of legal science.
14 Daniel Webster [Book Review of 1 William Johnson, New York Supreme Court Reports],
The Monthly Anthology 4 (1807), 206.
The third forum for writing about law was the legal periodical. Between
1790 and 1830 a total of twelve legal periodicals were published. In 1810,
only one existed; in 1820 again only one; in 1830, five. In other words,
early in the century very few legal periodicals generated enough interest or
subscribers to survive. Between 1840 and 1870, in contrast, thirty-seven
were formed, and more of them survived at least for the short term. They
were an eclectic mix; most were utilitarian, printing early notices of decided
cases, or book reviews of new treatises, or surveys of new statutes. But some,
like American Jurist and Law Magazine, published in Boston between 1829
and 1843, the Monthly Law Reporter, also published in Boston, from 1838
to 1866, and the Western Law Journal, published in Cincinnati from 1843
to 1853, had higher aspirations, publishing essays on subjects internal to
the bar and on topics of general public concern to lawyers as well. The
founding editor of the Monthly Law Reporter, Peleg Chandler, divulged to
Joseph Story, his mentor at Harvard Law School, his reasons for beginning
the journal: “A great deal is said in particular cases, even in arguments to the
court, about what the law ought to be or might well be, but precious little of
what it is.” What was needed, Chandler insisted, was “to hold up before the
profession and the public the decisions fresh from the court – to place before
them the law as it comes from the dispensers of it – from those who are too
far removed from the public to be easily affected by the changing fashions
of the day. . . . ” By so doing, his magazine would illustrate why “[n]oisy
radicals are not men who have read intimately the reports and become
acquainted with the intricate machinery, of which, if a part be disarranged,
the whole may suffer. . . . ”15 Appealing directly to Story’s understanding of
legal science, Chandler sounded very much like Daniel Webster a generation
before, applauding the arrival of law reports. He assumed that finding and
stating “what it is” was a scientific undertaking.
As Chandler more than hinted, engaging in this pursuit of legal science
had political consequences. Lawyers in a republic had a responsibility to
be engaged in civic discourse, reasoning and arguing for the most effective
legal rules in the public interest. Lawyers from the time of the Constitutional
Convention in Philadelphia onward had gravitated toward the public,
political arena, whether in legislatures, or state constitutional conventions,
or executive offices. In Massachusetts from 1760 to 1810, just over 44 percent
of all lawyers were elected to some public office; from 1810 to 1840,
about a third of all Massachusetts lawyers were elected to public positions.
(There is some evidence that lawyers served extensively in public positions
throughout the nation.) The essays Chandler published thus investigated
the social, economic, and political implications of the scientific principles
15 Peleg W. Chandler to Joseph Story, December 1, 1838.
of law they presented. To fulfill its mandate for civic virtue, a governing
elite needed information and a forum to work out its arguments.
The legal science expounded in and by law reports, treatises, and periodicals
also served an instrumental purpose, reinforcing the notion that
only lawyers, scientifically and technically trained, could be trusted with
law. Ironically, the anti-lawyer complaints that law was inaccessible and too
complex might be true after all: only lawyers had sufficient command of
arcane procedures and pleading, complex doctrine, and strange language.
Through its literature, the bar justified its role to itself and the public by
separating itself off – a special professional group, different from others
in society. Law was the domain of lawyers. Their expertise, they argued,
qualified them to administer the legal system and to resist the inroads of
any non-scientific thought as they defined it.
The Common Lawyer and Codification
No technical issue of law reform so agitated the elite and academic lawyers
in the nineteenth century as codification. At its core, the project of codification
undermined the legal profession. By questioning the legitimacy of
the common law and offering an alternative vision of law in a democratic
society, codifiers challenged the central role lawyers played as guardians of
the repository of law. As a result, there was much heated rhetoric on the
subject. Whether the threat of codification was ever palpable is an interesting
question, but at the very least the idea of codification posed a political challenge
to lawyers’ control over the content of law.
The codifying impulse has a long history in America, beginning at least
with the Puritans. Arguably the state and federal constitutions are examples
of the art. So it is a little difficult to understand why the common lawyers
were so upset at the appearance of arguments on the subject. Codification
was never an organized movement. In fact, there were at least three distinct
strands to the call for legal codes: a middle-class complaint about the
common law, a social activist complaint, and a purely lawyerly complaint
(with overtones of social activism). All criticisms focused on the perceived
failings of the common law to provide responsive legal solutions to current
social problems. Codifiers argued that the common law was bogged down
by inaccessible technicalities derived from outdated British, not American,
experiences and that lawyers manipulated the common law for their own
self-interest, not the public’s interest. In other words, law and lawyers were
failing to deliver on promised republican virtue, and therefore, the making
and administration of law should be returned to its true source in a
democracy, the people, by having elected representatives in the legislature
(who, ironically, might be lawyers) draft laws truly reflecting the will of the
people. In the face of these charges, the common lawyers sought in effect
to co-opt the arguments by transforming the debate into an internal legal
discussion, rather than an ideological conflict.
The middle-class strand of codification drew its inspiration from prevailing
anti-lawyer sentiment. The concerns expressed in the 1780s in Benjamin
Austin’s pamphlet, seeking abolition of the “order” of lawyers, slowly
led to reconsideration of the nature of the law being practiced. In 1805,
Jesse Higgins questioned the adequacy of the common law in a pamphlet
entitled “Sampson against the Philistines; or, the Reformation of Lawsuits;
and Justice Made Cheap, Speedy and Brought Home to Everyman’s Door:
Agreeably to the Principles of the Ancient Trial by Jury, before the Same
Was Innovated by Judges and Lawyers.” Higgins did not call for codification.
Rather, he thought lawyers made lawsuits expensive and time-consuming
and so suggested a system of arbitration to restore “cheap, speedy”
justice, devoid of complexity. All that lawyers did, according to Higgins,
was capitalize on people’s distress and pull communities apart, rather than
bind them together as republicanism required: “[T]he whole body of common
law, the whole body of pleading, rules of evidence, &c. have no legislative
vote to establish or even to define them. They depend wholly and entirely
for their authority on notes taken by lawyers and clerks, about this very
time, and hence the judges become the legislators.” In addition, “all those
laws which relate to property, . . . which are just and ought to be valid, are
in every age and every country, the simplest rules, and fittest to the plainest
capacities; . . . that any and every ignorant man . . . can decide any question
agreeable to law, although he never heard a law read, or read one during his
life.”16
Higgins’ middle-class lament was a central component of codification:
Legislate, simplify the rules, state them clearly, make life easier, and reduce
our dependence, financial and otherwise, on lawyers. Restore law to its roots
in justice and reduce the power of lawyers.
The ideological origin of the common law was a distinct issue that
attracted the attention of codifiers who had pursued an agenda of social
activism, sometimes perceived as radical change. The social activists drew
their criticisms from their causes: labor, antislavery, and religious tolerance.
William Sampson, an Irish émigré attorney in New York, provides
an example. His defense of New York City journeymen cordwainers in
1809 anticipated his more thorough-going call for codification in 1823
in his “Anniversary Discourse . . . on the Nature of the Common Law.”
Sampson attacked the nature of the cordwainers’ indictment for conspiracy
at common law for seeking to exercise their power as a nascent labor union.
16 [Jesse Higgins], Sampson Against the Philistines . . . 16, 27 (1805).
Sampson’s criticism of the common law was organized into four separate
categories. He asserted that in America, at least formally under law, all men
are or should be equal: “[T]he constitution of this state is founded on the
equal rights of men, and whatever is an attack upon those rights, is contrary
to the constitution. Whether it is or is not an attack upon the rights
of man, is, therefore, more fitting to be inquired into, than whether or not
it is conformable to the usages of Picts, Romans, Britons, Danes, Jutes,
Angles, Saxons, Normans, or other barbarians, who lived in the night of
human intelligence.” Second, in England statutes were vehicles of inequality.
“[T]he English code and constitution are built upon the inequality of
condition in the inhabitants. . . . There are many laws in England which can
only be executed upon those not favoured by fortune with certain privileges;
some operating entirely against the poor.”17 Third, in America, statutes
created equality; the common law was the source of inequality. Indictments
at common law in the United States, therefore, were suspect because
they were at variance with America’s enlightened constitutional tradition.
Finally, Sampson suggested that statutes were to be trusted because they had
involved a process of filtration through the will of the people who were ever
vigilant about equality. Codification, he added in 1823, would guarantee
that “[o]ur jurisprudence then will be no longer intricate and thorny.”18
The attacks that defenders of the common law found most difficult to
deflect came from lawyers, many of them Jacksonian Democrats, who challenged
the basic underlying political legitimacy of an uncodified law in a
democracy. Robert Rantoul, tied to social reform movements and risking
ostracism in Brahmin Boston, threw down the gauntlet in 1836. Judge-made
common law, according to Rantoul, was simply judicial legislation.
Judges had arbitrary power because the common law provided no certain
and predictable rules. Law should be “a positive and unbending text,” not
maneuvered by lawyers in front of judges. “Why,” asked Rantoul, “is an
ex post facto law, passed by the legislature, unjust, unconstitutional, and
void, while judge-made law, which, from its nature, must always be ex post
facto, is not only to be obeyed, but applauded? Is it because judge-made law
is essentially aristocratical?” This was a charge that republican lawyers like
Joseph Story might, strangely enough, have found apt or congenial. An aristocracy,
Rantoul suggested, that is indebted to the feudal barbarity of the dark ages
for its power is inimical to the social needs and purpose of a modern nation.
17 [Argument of William Sampson], “Trial of the Journeymen Cordwainers of the City of
New York.”
18William Sampson, “An Anniversary Discourse, Delivered Before the Historical Society of
New York, on Saturday, December 6, 1823: Showing the Origin, Progress, Antiquities,
Curiosities, and the Nature of the Common Law.”
“Judge-made law is special legislation,” and, according to Rantoul, “[a]ll
American law must be statute law.”19
If Rantoul supplied the ideological framework, it fell to David Dudley
Field to shore up the theory and carry out the project and practice of codification.
And he did so with relentless zeal, though only modest success,
proposing code after code for New York and elsewhere. Field sought to
demonstrate that codes rather than the common law were workable, expedient,
and responsive, not inflexible and inexpedient. Codes devoted to specific
legal subjects like civil procedure or criminal law would be comprehensive
and transparent. Everyone would know what the law was; nothing would
be mysterious. The advantage would be “the whole law brought together,
so that it can be seen at one view; the text spread before the eyes of all our
citizens; old abuses removed, excrescences cut away, new life infused.” The
“CODE AMERICA,” as he put it, would contain “the wisest rules of past
ages, and the most matured reflections of our own, which, instinct with our
free spirit of our institutions, should become the guide and example for all
nations.” And for lawyers, “the great task is committed of reforming and
establishing the law.”20
Most of the academic lawyers who actually noticed the push against the
common law were horrified and set about their own “task” of capturing the
move for codification and reshaping it to their own ends. They were led by
Joseph Story, Associate Justice of the U.S. Supreme Court and Dane Professor
of Law at Harvard. In 1836, Story chaired a commission appointed
by Governor Edward Everett of Massachusetts to determine the “practicality
of reducing to a written and systematic Code the common law of
Massachusetts, or any part thereof.” Story set out to fend off codification by
in effect rehabilitating the common law. In the process, he ended up either
making concessions or engaging in contradictions, depending on how one
assesses his arguments. Codes, Story argued, were cumbersome and inflexible.
They could not by their very nature adjust quickly enough through
the legislative process to changed social circumstances. “[I]t is not possible
to establish in any written Code all the positive laws and applications of
laws, which are necessary and proper to regulate the concerns and business
of any civilized nation, much less of a free nation, possessing an extensive
commerce. . . . ”21 But a limited form of codification could take place,
one familiar and useful to lawyers and judges, a kind of digesting system
19 Robert Rantoul, “Oration at Scituate, Delivered on the Fourth of July, 1836.”
20 David Dudley Field, “Reform in the Legal Profession and the Laws, Address to the
Graduating Class of the Albany Law School, March 23, 1855.”
21 “Report of the Commissioners appointed to consider and report upon the practicality
and expediency of reducing to a written and systematic code the Common Law of Massachusetts
. . . ,” reprinted in American Jurist and Law Magazine 17 (1837), 17, 30, 27.
consistent with Story’s view of legal science, ordering categories and principles
culled from cases and judicial decisions; in other words, the common
law. Indeed, Story was already engaged in a version of this process through
his prodigious treatise-writing efforts.
To reject codification, however, Story had to concede implicitly that
Rantoul and others had a point. Once defended by him as stable, certain,
predictable, universal, and the voice of experience, the common law was now
described as flexible, changing, unfixed, and capable of growth. Ironically,
uncertainty was now the common law’s strength compared with positive
law, which could not adjust as quickly to social change: “[T]he common law
of Massachusetts is not capable of being reduced to a written and systematic
Code; and . . . any attempt at so comprehensive an enterprise would be either
positively mischievous, or inefficacious, or futile. . . . ” Instead, he argued,
“the common law should be left to its prospective operations in future (as it
has been in the past) to be improved, and expanded, and modified, to meet
the exigencies of society” by the application of its principles to new cases
only rarely supplemented by legislation.22
Here then was the spectacle of common lawyers like Story defending
the common law as flexible and capable of growth. Its flexibility was its
strength. Once having brandished the common law as an unassailable citadel
of stability and certainty, fixed in its derivation and application, the common
lawyers now transformed it into a progressive science. To ward off the view
that laws should exist in positive codes, Story was willing to risk admitting
that judges make law. He did so because in his mind the greater danger to
continuity, order, and stability was the old fear of democratic excess – the
fear that the legislature, expressing the will of the people and taking the
promise of equality too seriously, might readily undermine the carefully
honed certainty and predictability of property rights. What Story was really
afraid of was not that positive codes might fail to adjust quickly enough to
changing circumstances, but that legislatures drafting codes would actually
seek to change circumstances. Story was not opposed to the common law
adapting to change grounded in recognized principles; he was opposed to
changes in law he saw as derived from purely political motives.
The codifiers responded that if judges actually made law – if law was
merely a matter of will – then let it be roped in, rendered consistent, and
made by the legislature. For all of the debate among lawyers in elite circles,
codification never obtained sufficient traction among lawyers who were
focused on the more mundane issues of everyday practice. But the debates
did reveal what the academic lawyers thought about what lawyers should
be doing and the virtue of the law they were practicing.
22 Id. at 31.
VII. THE REGULATION OF THE PROFESSION: ETHICAL
STANDARDS, MORAL CHARACTER, CIVIC VIRTUE,
AND THE ADVERSARY SYSTEM
In the face of widespread public criticism of the profession, lawyers faced
a dilemma: how to regulate the conduct and behavior of their profession
without at the same time conceding that their critics had a point. The problem
was compounded by the fact that during the first half of the nineteenth
century there was virtually no formal regulation of the conduct and behavior of attorneys. To the extent there was any supervision, it appeared to be
self-regulation, but not self-regulation in a modern sense governed by codes
of professional responsibility with rules or principles explicitly delineated.
Rather, regulation seemed to be left to the individual moral compass of each
attorney, perhaps reinforced by the norms of a professional culture. As long
as the attorneys controlled the education and admission process, they could
be vigilant about the moral character of aspirants to the bar, filtering by
social class or critical observation the potential rogue attorney. Occasionally
the handful of functioning local bar associations might enforce discipline
or recommend action by a court. But courts had few guidelines as to appropriate
conduct. When confronted with charges of unethical behavior, they
had to rely on vague standards drawn from a lawyer’s oath or duties as an
officer of the court.
As the nineteenth century progressed, the ultimate question became
what the social function of the profession was and what ethical guidelines
would follow from it. Was it a profession whose legitimacy was grounded
in its service to the public, with ethical rules to fit accordingly, or was the
profession’s primary responsibility to its clients, with rules adapted to the
evolving practice of law in a market economy? The real task of the defenders
of the role of the profession was to convince the critics, both internal and
public, that law as a higher calling always had the interests of the community
in mind and that the rhetorical posture of those participating in the debates
over ethics was to forge standards that would foster, if not cement, the
importance of providing legal services in a government of laws, and not
men. The problem was that many more men were now practicing law, and
it was probably going to be impossible to account for them or to testify as
to their suitability. That anxiety helped feed discussion of what it meant to
be an ethical lawyer.
Two figures predominate in America’s antebellum discourse on the ethical
conduct of lawyers: David Hoffman and George Sharswood. They embraced
slightly different positions. Hoffman, a member of the elite Baltimore bar
and a Federalist in the throes of anxiety for the lost republic, attempted
to recast the profession in a fading republican vision in fifty “Resolutions
in Regard to Professional Deportment,” a kind of incipient code of professional
responsibility appended to the second edition of his A Course of Legal
Study, published in 1836. According to Hoffman, lawyers should be guided
by their moral sentiments and judgments. They should exercise a critical
analysis about the justness of their client’s claims and refuse to participate
in pursuing unfair or wrong causes, to engage in questionable tactics to
vindicate the interests of clients, or to seek unfair advantage – in other
words, lawyers should always behave as virtuous citizens. Hoffman stood in
contrast to the notion asserted by Lord Brougham in England in the early
nineteenth century that the lawyer’s role was to pursue his client’s interest
zealously. In resolution after resolution, Hoffman meticulously laid out how
lawyers confronted with difficult situations in practice should exercise their
critical, moral judgment: “My client’s conscience, and my own, are distinct
entities: and though my vocation may sometimes justify my maintaining
as facts, or principles, in doubtful cases, what may be neither one nor the
other, I shall ever claim the privilege of solely judging to what extent to
go.”23 As a trained elite, lawyers should reserve the right to express their
independent moral judgment, not just their professional judgment derived
from their special knowledge or skill. For Hoffman, professional judgment
and moral judgment went hand in hand.
Hoffman’s was a nostalgia for a lost age. Suspicious of open bar admission
and unsupervised legal education (with law schools slow to develop), he
believed that moral codes were necessary perhaps because the elites could
no longer rely on lawyers to attend to the public good. By proposing ethical
rules, Hoffman seemed to be conceding that private interests were now
dominant and that what were really required were special standards for a
world of zealous advocacy. If the bar could no longer control admission by
ties of class and status, at least it could try to influence the character of
those admitted by providing them with the ethical rules, guidelines, or
prescriptions that formerly they might have been assumed to possess as
second nature by dint of social upbringing. Lawyers now needed the rules
spelled out explicitly, since the hustle and bustle of the marketplace had
become the norm. Who did the lawyer owe his primary obligation to: the
public or the client? Under republican theory, as one of Hoffman’s allies
remarked, the lawyer “feels that his first duties are to the community in
which he lives”24 and not necessarily to his client.
23 David Hoffman, A Course of Legal Study (2nd ed., 1836), 755.
24 Simon Greenleaf, “A Discourse Pronounced at the Inauguration of the Author as Royall
Professor of Law in Harvard University (1834).”
Others were becoming less sanguine and more realistic about a lawyer’s
obligations. One was George Sharswood, a law professor at mid-century at
the University of Pennsylvania, destined toward the end of the century to be
Chief Justice of the Pennsylvania Supreme Court. In 1854, Sharswood published
A Compendium of Lectures on the Aims and Duties of the Profession of Law
(published in later editions as An Essay on Professional Ethics). Sharswood
moved beyond Hoffman’s moral imperatives. Though he was troubled by
the idea of abandoning reliance on moral principles, Sharswood carefully
tried to construct an ethical world that reflected law practice and yet, at the
same time, constrained some of the perceived excesses of zealous advocacy.
Perhaps shadowing debates in the legal periodicals of the time and justifying
the value of a client-centered practice, Sharswood saw the contemporary
ethical universe in shades of gray. A client should expect devotion from his
attorney and an attorney must do everything he can for his client, within the
law. As to distinguishing morality from law, Sharswood appeared reluctant
to insist on rigid, moral stances. Lawyers might on occasion, depending
on the situation, reserve the right to reject a client, but once a cause was
accepted, zealous representation would follow.
Sharswood and others were in some senses on the horns of a dilemma,
in part precipitated by the diverging demands of the republican tradition.
Lawyers could be perceived as bastions of republican virtue by remaining
independent of clients’ interests and above the fray, though this was
increasingly difficult in an expanding and interconnected market society,
or they could embrace their clients’ causes as their own and assert independence
from others on behalf of their clients. Therefore, a lawyer could
either evaluate from the outset whether justice was attainable in his client’s
cause or accept his clients more or less as he found them, and pursue justice
as the client saw it, without assessing the consequences for the general
community.25
Lawyers at mid-century were increasingly sensitive to charges that they
were simply mercenaries. Over time, in professional journals and on other
occasions, they took great pains to explain why zealous advocacy served
everyone’s interest, including the community. They were not entirely successful
in convincing a skeptical public. They had better luck convincing
themselves, but in doing so they ran the risk of conceding publicly either
that the bar had a public relations problem, or that some of the charges
were true, or that the profession, as perceived by elites, was in a period of
decline. The risk, of course, was that if the bar recognized the legitimacy of
25A version of this point is made in Norman W. Spaulding, “The Myth of Civic
Republicanism: Interrogating the Ideology of Antebellum Legal Ethics,” Fordham Law
Review 71 (2003), 1397, 1434.
Cambridge Histories Online © Cambridge University Press, 2008
The Legal Profession 103
the complaints, the next logical step would be calls for regulation, because
self-regulation would be interpreted as unavailing or self-serving.
The trick for lawyers who were called on to justify the evolution of the
professional norm of zealous advocacy was how to fit this norm within the
warm rhetorical embrace of fading republicanism. For a profession and a
public accustomed to hearing (if not as often believing) lawyers’ attempts
to justify the bar by invoking republican ideas about virtue and the public
good, defending lawyers’ own private interests was no mean task. In a
democratic society concerned in theory with equality, convincing the public
of the legitimacy of a self-described learned and educated elite took
some doing. When it came to defending the ethical standards associated
with zealous advocacy, the bar had only a few intellectual choices. It could
admit that zealous advocacy was for private interest or gain. Or it could
try to convince the public that zealous advocacy was yet another selfless act
by lawyers serving their communities; that what lawyers were doing was
consistent with republican virtue because lawyers were not acting in their
own behalf, but selflessly for others; that the nature of legal representation
had changed as society changed; and that lawyers were still meeting the
needs of a public they had always served. Much of the anti-lawyer sentiment
sought to strip away the veil of the public-spirited rationale of lawyers. The
bar, attuned to the critique, tried to secure its place in society by reassuring
its members that it was doing society’s work and carving out ethical
prescriptions to meet its needs.
CONCLUSION
In 1870, the nature and face of the profession were about to change. The
end of the Civil War set in motion forces already gathering in antebellum
America. The population was expanding, and the inexorable shift from
rural to urban had begun. Immigrants and the children of immigrants
added diversity to what once was a relatively homogeneous population. Former
slaves, now free, had to cope with the ambiguous promise of freedom.
Economic growth fueled by expanding railroads, developing interstate markets,
and large industrial corporate organizations with proliferating labor
requirements occurred in new and increasingly complex fashion.
The bar and the practice of law adjusted as well. The organization of
practice slowly shifted. Though solo practitioners remained the backbone
of the profession, and apprenticeship the main means of legal education,
groups of lawyers with specializations began in increasing numbers, particularly
in cities, to organize themselves into partnerships and then firms. As
usual, the bar’s elite remained concerned about who was admitted to practice.
Bar associations, long dormant, were revived to maintain standards for
entry and behavior. Lawyers also continued to participate in political life,
safeguarding the Constitution and social order and never entirely losing
sight of republican virtues.
The bar refocused and redoubled its efforts to cope with the demands
that shifting demographics placed on admission and professional education,
with alterations in forms and organization of practice, and with the reconfiguration
and restatement of ethical norms. The pressure for change was in
part resisted by recurring to the lessons of the past, a reliance on redesigned
and redefined commitments to public citizenship as the true calling of the
profession. Over the century from the Revolution to the Civil War, the profession
changed subtly to avoid or rise above criticism, adopted educational
practices to control access to the profession and professional knowledge,
expanded the number of lawyers and variety of practices to create and serve
markets for legal services, reshaped ethical and moral standards to fit the
demands of modern markets, and confronted the nature of law itself to
ensure that the state served society.
The bar’s invocation of change, particularly its rhetoric, was not without
its ironies, not the least of which was that, contrary to elite fears, the growth
and expansion of the profession would lead to enhanced power and status
in society. Opportunity and equality in the long run helped maintain the
status of the bar as more people became lawyers, and the goals and norms
associated with the hallmarks of professionalism and expertise reinforced
rather than undermined social stability. When the ideas that animated
professional legal identity came under pressure, lawyers sought to capture
the shifting ideology, recast it in the bar’s own image, and shape the ideology
to serve the profession’s own purposes. As a result, as America emerged from
its shattering, destructive Civil War, attorneys, unlike almost any other
professional group, were positioned to lead the country’s reconstruction
and beyond. Lawyers had survived and prospered, and they were prepared
once more to direct their energy toward their understanding of what was
necessary for the public good, even as what exactly the public good was
would increasingly become contested.
Of the many figures born before the Civil War who sought immediately
thereafter to escape the profession’s earlier limitations, three in particular,
in very different ways, foreshadowed the future. John Mercer Langston, one
of the few practicing African American lawyers before the war, participated
in Reconstruction America in the training of African American lawyers at
the newly founded Howard University Law School in Washington, DC,
heralding the embrace of newly found citizenship for some or, for others,
the fulfillment of the meaning of citizenship. Myra Bradwell, pursuing a
lifelong professional interest in law in Chicago, fought for admission to the
bar, only to be rejected in her quest for formal professional identity by a
U.S. Supreme Court that could not allow her constitutional claim to escape
their narrow views of a woman’s proper role. And Christopher Columbus
Langdell fled a Wall Street practice, beckoned by President Eliot of Harvard
to reconstitute law as a science and reframe American legal education in the
shape of the modern Harvard Law School. Langdell sought to professionalize
the study of law and remove it from the dead hand of law office ritual and
part-time university lecturers – all to prepare lawyers to meet the challenges
of a new economic order increasingly remote from its roots. The question
for the profession as it embarked on its new journey was whether it would
inadvertently rediscover its past, or reject its past, or simply be condemned
in new forms to repeat it.
4
the courts, 1790–1920
kermit l. hall
I. INTRODUCTION: COURTS AND DISTRIBUTIVE JUSTICE
IN THE NINETEENTH CENTURY
With independence, Americans achieved one of the crucial goals of the
Revolution: direction over their economic future. The process of economic
transformation and the social and political changes that accompanied
it quickened over the next century. Alexis de Tocqueville in the 1830s
observed that the quest for “profit” had become “the characteristic that most
distinguished the American people from all others.” Signs of economic
transformation dotted the landscape. By 1920, trains knitted the continent
together; steamships plied the interior lakes and rivers and extended
into international commerce; airplanes extended warfare to the skies; the
telegraph and the radio provided unprecedented levels of communication;
smoke belched from scores of new factories; cities such as Chicago and San
Francisco thrived; and a great torrent of immigrants swept over the nation’s
borders. The personal, informal, and local dealings that typified the colonial
economy yielded in the nineteenth century to an impersonal national and
international market economy. Increased trading among private individuals
for profit was one of the central developments of the period from the
nation’s beginning through the Progressive Era.
Social and political changes accompanied the nation’s accelerating economy.
At the middle of the nineteenth century slavery posed a massive contradiction
to the underlying proposition that all men were created equal.
Perhaps even more importantly, as the nation spread across the continent,
slavery raised serious political questions about how free and slave labor
could coexist. After the Civil War the nation had to wrestle with the fate of
4 million persons of African descent previously held in bondage. The war
was also a struggle over the relationship of the states to the nation, the powers
of the national government, and more generally the power that government
at all levels should wield in dealing with issues of health, safety, morals,
and welfare, the so-called police powers.
The exploding market economy had other consequences. The opportunity
for economic gain lured millions of persons of foreign birth and often
non-Protestant religions to America’s shores. This unprecedented influx of
human beings provided badly needed labor but it also undermined the traditional
hegemony of white, Protestant America. Native Americans were
driven increasingly from their original lands and eventually placed on reservations.
Women and even children entered the labor market, the population
shifted from rural to urban, and corporations arose as the primary means of
conducting business.
The American political system had seen as much change as the economy
and society. Political parties, disdained by the Founding Fathers,
quickly emerged as a necessary means of providing unity to separated and
divided governments constitutionally mandated in both the states and the
nation. Parties then evolved into mass movements that broadened the base
of politics, albeit without including women and African Americans. The
parties themselves ultimately became a source of concern, and by 1900
a new reformist movement, the Progressives, emerged with the promise
of corruption-free government founded on a scientific, non-partisan, and
rational approach to governance. They challenged the prevailing political
paradigm and, among other goals, urged that politics and law, courts and
politicians, be divorced from one another.
Progressive criticism of the role played by courts and judges was as
widespread as progressive criticism of the state of American politics. The
concern was appropriate. Throughout the preceding decades both had
helped to reshape the distribution of wealth that flowed from agreements
reached among private individuals. But it would be a mistake to conclude
that the results were expressly the work of judges in particular or lawmakers
in general. Courts reacted to the actions taken by merchants and bankers,
lenders and borrowers, farmers and planters, and business people and laborers
as much as they drove them. Over the course of the nineteenth century, the
adjustment of existing legal rules to new economic realities thus became
one of the chief contributions of courts, state and federal.
That said, legislators, state and national, did intervene in the economy
with varying degrees of success. Hence, a constant interplay between judges
and legislators over economic rights characterized the era. When legislators,
for example, attempted to regulate the impact of economic change,
courts sometimes struck their actions down as a violation of individual
and corporate rights. Throughout the era courts tried to answer the critical
question of how to allocate through law the costs, benefits, rewards,
and risks associated with an increasingly acquisitive commercial market
economy.
This meant, almost inevitably, that the question of distributive justice
became one of the courts’ most pressing concerns. In turn, a focus on distributive
justice meant that the courts found themselves operating in a
sometimes awkward embrace between law and politics. Tocqueville is once
again helpful. He observed that in America eventually every political issue
became a legal cause and the courts the forum for its resolution. The famed
French visitor went on to explain that “the Americans have given their
courts immense political power.” Tocqueville’s words offer an enduring
insight into the interaction among politics, law, and courts, the rich brew
from which distributive justice flows. Scholars and public commentators
may debate the desirability of dispassionate and apolitical justice, but the
historical reality of the courts in action, at all levels and in all places, underscores
that they have generally been unable to escape shaping public policy,
even when that might be their desire. Because, from the earliest days of the
Republic, the courts have been embedded in and formed by politics, they
have always been the subject of intense debate. Never was this truer than
during the nineteenth century. The scope and substance of their dockets,
how courts should be structured, staffed, and administered – every aspect
of what they did was scrutinized intensively.
The courts addressed issues of distributive justice through a unique
scheme of judicial federalism that matured during these years. America
at its inception had two distinct systems of courts, one federal and the other
state. Traditionally, the federal system generally and the Supreme Court of
the United States in particular have commanded the lion’s share of attention.
This emphasis on the justices and their work calibrates the entire American
court system by the actions of nine justices and gives exceptional weight
to the federal courts. The perspective is not necessarily unreasonable; any
account of courts in American history must pay serious attention to the
Supreme Court and the lower federal courts. Indeed, the trend over the
course of the century unmistakably recommends that attention. As America
expanded geographically and burgeoned economically, so the stature of
the federal courts grew with it. Especially in the wake of the Civil War
and Reconstruction, a continental empire required a federal court system
capable of bringing stability, certainty, and a national rule of law. Even so,
during the nineteenth century the great body of day-to-day justice took
place in the state trial and appellate courts, not the federal courts. Nor
did growing federal judicial power necessarily come at the expense of state
courts, which saw their importance and prestige increase too, as that of state
legislatures decreased. When Americans became wary of their legislatures,
it was to state appellate courts that they turned.
In short, as Tocqueville noted, Americans showed a tendency to place
unprecedented faith in courts, whether state or federal. The story of the
courts during these years is thus one of accelerating responsibility, of growing
involvement in issues of distributive justice, and of increased importance
in hearing, if not always settling, some of the century’s thorniest political
issues. It is also, on balance, one of an unwillingness to embrace equally
those who did not already have power within the political system.
II. STATE COURTS AND JUDICIAL FEDERALISM
Americans tend to view the court system from the top down, although ironically
they tend to live in it from the bottom up. From the nation’s founding,
the courts have been strongly local institutions. As the great legal
historian James Willard Hurst explained, the colonial courts of general
jurisdiction (civil and criminal) were laid out almost on a neighborhood
basis: the geographic scope of a court was determined by the distance that
a person could ride a horse in one day, which frequently coincided with
the boundaries of a county. The first state constitutions followed this same
pattern. One of the unique features of these courts was the overall independence
they exercised over case flow, finances, and court administration.
This emphasis on localism continued in most states well into the twentieth
century and produced an often luxuriant crop of frequently parochial courts.
As the political scientist Harry Stumpf points out, by 1920 the Chicago
metropolitan area had more than 500 different courts.
Participants in the emerging commercial market economy, however,
increasingly demanded that hierarchy, specialization, and professionalism
be imposed on the courts. During the nineteenth century the courts gradually
devolved from their initial three-tiered ordering (a variety of courts of
limited jurisdiction at the bottom, state trial courts of general jurisdiction
in the middle, and an appellate court at the top) into what was typically a
five-layered system.
The bottom layer comprised justice of the peace or magistrate courts,
the latter to be found largely in rural areas. The second layer grew out of
the inadequacies of the first as, at the end of the nineteenth century, a few
states began to establish municipal courts of limited jurisdiction, accompanied
by specialized courts such as those devoted to juveniles. At the next,
third, level one finds trial courts of general jurisdiction, which handled
both civil and criminal matters. The fourth layer again emerged in the late
nineteenth and early twentieth centuries, when many states created intermediate
courts of appeals primarily in response to population growth and
attendant rising rates of litigation and greater demands on the courts. Given
the rapid expansion of judicial business, intermediate appellate courts were
designed to filter cases on appeal and so reduce the workload of the fifth and
final tier, the highest appellate courts, which were usually called supreme
courts.
State Courts of Limited Jurisdiction
The bulk of the legal business in the United States was handled by the
first two tiers of state courts, those of limited and specialized jurisdiction.
These courts had the most direct impact on the day-to-day lives of citizens,
whether rich or poor, native or foreign born. Taken together, these courts
heard about 80 percent of all legal disputes and in almost all instances their
decisions were final.
The courts of limited jurisdiction had a broad range of responsibilities
and modest resources with which to exercise them. In criminal matters
they dealt with minor offenses, such as petty larceny and burglary, and had
the power to impose only limited punishments – fines, usually by 1920 no
more than $1,000, and jail terms, usually not longer than 12 months. These
offenses constituted the great majority of all criminal matters, which meant
that most criminal justice was meted out by underfunded and understaffed
courts in often hurried and uneven ways. Nor did courts at this level keep
any comprehensive record of their proceedings. Many kept no record at all.
The lack of records meant appeals were difficult and infrequent.
Until the first third of the twentieth century the judges of these courts had
either little or no training in the law. Initially appointed from local elites,
by the third decade of the nineteenth century the great majority of judges
at the lowest levels were elected, most on partisan ballots, and held their
offices for limited terms. When Tocqueville visited the United States, the
practice of electing inferior court judges was sufficiently widespread that it
drew his attention and wrath. Like more recent critics of judicial elections,
Tocqueville concluded that election coupled with limited terms reduced the
independence of judges and left them vulnerable to the prevailing political
winds.
The judicial role itself was not well defined. In rural areas and small
towns, judges often held other positions, serving, for example, as ex officio
coroners. Numerous studies have revealed that judges of courts of limited
jurisdiction tended to show a strong presumption about the guilt of those
who appeared before them and, as a result, focused their attention not on
questions of guilt or innocence but rather on the sentence to be imposed.
They were usually compensated by fees rather than salary, which meant that
their incomes varied according to the proportions in which those brought
before them were adjudged guilty.
State Courts of General Jurisdiction
The trial courts of general jurisdiction formed the next layer. When Americans
think of courts, it is these, which hear and decide civil and criminal
matters at trial, that they generally have in mind. While similar in character
they often varied in actual operation. For example, in many states, these
courts heard appeals from lower courts of limited jurisdiction in addition to
functioning as courts of original jurisdiction. In some states, the jurisdiction
of these courts was divided into two divisions, one civil and the other
criminal. Courts of general jurisdiction had an array of names, which could
imply that similar courts enjoyed very different jurisdictional capacities: In
California, for example, courts at this level were known as superior courts, in
other states they were circuit or district courts, and in New York they were
called supreme courts. (In that state, the highest appellate court became
the Court of Appeals.) The judges of these courts of general jurisdiction
invariably had formal legal training, were better paid than their counterparts
on courts of limited jurisdiction, and enjoyed better facilities. After
mid-century they too were almost always elected to office, for limited terms
of service. Courts of general jurisdiction were also courts of record, which
meant that taking appeals from them was far easier than with courts of
limited jurisdiction.
Trial courts of general jurisdiction were the principal places in the legal
system where grievances of the most serious kind were converted into formal
legal disputes. Most of their business was civil rather than criminal – some
60 percent of the trials held in the United States during the nineteenth
century involved civil, not criminal matters. Reliant in most instances on
juries to render verdicts, the trial courts performed the vital function of
taking complex grievances and addressing them through an adversarial
process. This forced aggrieved parties to frame their disputes in formal,
legal ways. For example, a person injured in a railroad accident would make
a claim based on the emerging law of torts, a business person attempting
to collect money would turn to the law of contract, and so forth. The legal
framing of these disputes was important because the time and cost associated
with doing so more often than not prompted a settlement without resort
to a formal trial. As is true today, the pattern was for parties to settle their
differences before having a jury do it for them. And, just as today, litigants
with greater resources had a better chance of prevailing when they did go
to trial.
These phenomena were not confined to civil litigation. Out-of-court
settlements occurred in criminal trial courts where they were known as plea
bargains. There too, defendants with money to buy the best legal counsel
were at a major advantage. Most perpetrators of crimes in the nineteenth
century were never caught, let alone brought to court. Those most likely to
be caught and charged were persons committing the most serious crimes
(rape, murder, theft, burglary); murder cases showed the highest clearance rate.
Property crimes were far less likely to be cleared. Overall, less than 2 percent
of all reported crimes resulted in final settlement by trial and verdict.
Instead, plea bargains, supervised and approved by trial court judges, were
struck.
The courts of general jurisdiction bore the brunt of a surging population,
an accelerating economy, and the inevitable recourse to law that
accompanied both. The composition of their dockets mirrored the social
and economic circumstances of industrialization. By 1890, civil trial courts
in Boston, for example, had more than 20,000 plaintiffs a year. The courts
were asked to address issues involving business relationships, real estate
transactions, financial arrangements, and injuries associated with the growing
complexity of urban life. The courts became safety valves of sorts,
mediating conflicts among strangers stemming from business transactions
or transportation accidents. The vast majority of these cases were cut-and-dried.
Debt collection was the main theme: grocers, clothing stores, and
doctors asked the courts to make their debtors pay. In 1873, Ohio’s courts
of general jurisdiction handed down more than 15,000 civil judgments
worth more than $8.5 million. In December 1903, there were more than
5,100 cases on the dockets of Kansas City’s courts, about 60 percent of them
liability claims against companies.
As the civil business of the courts increased, the inability of the era’s generally
decentralized and unprofessional court system to deal with the results
became ever more evident. In 1885, a special committee of the American
Bar Association found that under then-existing conditions, processing a
lawsuit all the way to decision took from one and a half to six years. In
1876, New Hampshire’s county circuit courts had 4,400 cases continued
on their dockets; 6,000 new cases were added the following year. Crowded
dockets and delays were the norm. The rising professional bar demanded
more courts and more judges. In the Progressive era, in some instances, the
bar would have its demands answered.
State Appellate Courts
Business grew at the top of the hierarchy no less than everywhere else in the
judicial system. By 1900 the work of the nation’s appellate courts amounted
to about 25,000 cases annually. These cases sustained more than 400 different
series of case reports. New York’s famous Court of Appeals, perhaps the
most revered high court in the late nineteenth century, handed down almost
600 decisions a year. Between 1890 and 1920, the Illinois Supreme Court
produced between 700 and 900 decisions annually. The California Supreme
Court in 1860 published about 150 opinions. By 1890 that number had
tripled. By 1920, however, organizational changes instituted by Progressive
reformers had cut the court’s output by more than half. One of the most
important innovations adopted was establishment of an intermediate court
of appeals designed specifically to relieve the workload of the high court.
Other states soon joined California in this reform effort.
Intermediate courts of appeal had not existed through most of the nineteenth
century. By the beginning of the twentieth century, however, they
had emerged as an increasingly popular solution to the problem of rapidly
expanding appellate dockets. By 1911, thirteen states had created intermediate
appellate courts. A century later, forty-two states had done so.
The reform clearly reduced the flow of cases going to the highest appellate
courts. More important, by granting the judges of the highest appellate
courts discretion over which appeals they heard, these reforms allowed state
high courts to set their own agendas.
The diffuse nature of the American appellate courts reflected historical
practices and traditions of the bar that varied from state to state, as
well as differing assumptions among constitution writers about how best
to fit courts to social needs. The confusing nomenclature of these courts
makes the point. For example, the highest court of last resort in Maine
and Massachusetts was called the Supreme Judicial Court; in Maryland and
New York it was known as the Court of Appeals; in Ohio it was called
the Supreme Court. In most states the intermediate appellate courts were
separate entities, but in a few states, such as Texas beginning in 1891, these
courts were formed into separate divisions for criminal and civil appeals.
Appellate courts had to contend with state legislatures jealous to preserve
their own prerogatives from trespass by other branches of government. This
meant, among other things, that initially in the nineteenth century they
put judges on short leashes and limited judicial authority. Thus, in 1809 the
Ohio Senate tried Judges George Tod and Calvin Pease for subverting the
state constitution by undertaking as judges to pass on the constitutionality
of an act of the legislature. Both trials ended with ‘guilty’ votes of a majority
of the senators – one short of the two-thirds required for conviction.
Early in the Republic, many state legislatures continued the colonial
practice of themselves acting as appellate tribunals, setting aside judicial
decisions on their own authority. The willingness of the legislatures to do
so suggests their inheritance from the pre–Revolutionary era of a certain
distrust of courts, which were seen as arbitrary and coercive. The same
distrust is evident in most state constitutions, which designed courts with
blended common law and equity jurisdiction because of lingering fears
about the discretionary powers of equity courts. Despite these difficult
beginnings, between 1790 and 1920 state appellate courts acquired an
increasingly greater level of authority and control over their dockets, a
pattern that paralleled developments in the federal courts.
Notwithstanding their diversity, the state courts of last resort shared
several similarities. On each court, appeals were heard by a relatively small
number of judges (from three to nine) serving fixed terms (on average
about seven years; a very few state judges, like their federal counterparts,
enjoyed tenure during good behavior). State appellate judges were invariably
active politically before their judicial service; after mid-century they
reached their posts most frequently through popular, partisan elections.
Appellate judges had formal legal training, typically during the nineteenth
century by reading in the office of a lawyer or with a judge; by 1920 about 65
percent of appeals court judges had either attended or graduated from law
schools. Increasingly, judges joining the courts came from less privileged
backgrounds with fewer connections through birth and marriage to other
lawmakers. Finally, every state court of last resort enjoyed final authority
to determine the meaning of the state’s constitution.
The highest state courts were kept generally busy throughout the century.
Their sustained engagement in the legal affairs of the state meant that they
were deeply implicated in shaping and maintaining the social order. In
the pre–Civil War South, for example, these courts regularly heard cases
involving slavery, ranging from the power of masters to discipline their
slaves to the legitimacy of contracts made for the sale and transport of
human chattel. Most slave justice occurred beyond the reach of the rule
of law. From time to time, however, slaves and their masters came into
the courtroom, even into the highest courts of appeal. Judge Joseph Henry
Lumpkin of the Georgia Supreme Court in 1852 acknowledged the paradox
of giving any expression to the idea of legal rights when it came to a slave.
Lumpkin appreciated the humanity of the slave, but he accepted at the same
time that the slave could never stand as an equal, either to his or her master
or to the state of Georgia. Under such circumstances the court might have
paternalistically protected the interests of the slave. For example, when
Lumpkin considered an appeal by a slave convicted of rape, he noted that “a
controversy between the State of Georgia and a slave is so unequal, as of itself
to divest the mind of all warmth and prejudice, and enable it to exercise its
judgment in the most temperate manner.” That said, Lumpkin sustained
the slave’s guilty verdict and subsequent hanging. Other Southern judges
took the slave’s humanity into account. In Ford v. Ford (1846), Nathan
Green of the Tennessee Supreme Court ordered a slave freed through a will
despite the contention of his deceased master’s family that a slave could not
possibly sue in a court.
After the war these same courts had to address issues of racial segregation.
In almost every instance they upheld the power of the state to discriminate.
Nor was court tolerance of discrimination a peculiarity of the South. Racial
groups outside the South won no more support from the highest appellate
courts. The California Supreme Court refused to block the state legislature
from imposing special liabilities on Chinese and Japanese immigrants,
including limiting their rights to hold and use real property. Women fared
little better. The Illinois Supreme Court, for example, in 1872 denied Myra
Bradwell, who founded and published the Chicago Legal News, admission to
the bar because she was a woman.
In every state economic change imposed heavy demands on the highest
appellate courts of the states. From 1870 to 1900 more than one-third
of the cases decided in these courts dealt with business matters, such as
contract, debt, corporations, and partnerships. Another 21 percent involved
real property. Thereafter, litigation patterns began to shift gradually away
from business and property disputes and toward torts, criminal, and public
law matters. By 1920, litigants were coming to realize that alternative ways
of handling disputes, such as arbitration, were preferable to the courts, where
outcomes were expensive, technical, and above all slow to eventuate.
We have seen that during the first half of the nineteenth century, state
appellate courts found themselves confronted by legislatures anxious to
constrain the encroachment of judicial authority on their own prerogatives.
By the middle of the century, however, the authority of legislatures was
coming under general attack, the outcome of growing public concern over
corruption and the fiscal problems that legislative corruption imposed on
the citizenry. The result was a tendency among constitutional reformers
to add to the authority of state courts of last resort by providing for the
popular election of their judges to limited terms of office. In 1832, Mississippi
became the first state to make provision for election of state appellate
judges, followed quickly by New York, Ohio, and several other states. Of
twenty-one constitutional conventions held between 1842 and 1860, nineteen
approved constitutions that allowed the people to elect their judges,
often on partisan ballots. Only in Massachusetts and New Hampshire did
delegates repudiate the concept, and in both instances voters rejected the
delegates’ work. On the eve of the Civil War, twenty-one of the thirty states
had adopted popular election. While this reform is usually interpreted as an
attempt to limit judicial authority, it was intended to do just the opposite.
With the wind of popular election at their back, state appellate court judges
began passing on the constitutionality of legislation at an unprecedented
rate.
Before the Civil War, review of state statutes by state courts was “a
rare, extraordinary event.” Before 1861, for example, the Virginia Court of
Appeals, the state’s highest appellate court, had decided only thirty-five
cases in which the constitutionality of a law was in question. Of these,
the judges overturned the legislature on only four occasions. The Supreme
Judicial Court of Massachusetts, one of the two or three most prestigious
appellate courts in the nation before the Civil War (and one that to this
day has appointed rather than elected judges), had by 1860 considered
the constitutionality of sixty-two laws. It struck down only ten. Over the
following sixty years, however, judicial review became an important practice
in state courts of last resort and, if still controversial, an accepted
feature of public life. The Virginia Court of Appeals, for example, found
against one in every three of the statutes that came before it during the
last third of the nineteenth century. Ohio’s Supreme Court held 15 state
laws unconstitutional in the 1880s, 42 in the 1890s, and more than 100
in the first decade of the twentieth century. The Minnesota Supreme Court
in the period between 1885 and 1899 struck down approximately seventy
statutes; the Utah Supreme Court between 1893 and 1896 threw out eleven
of the twenty-two statutes brought before it.
Judicial review went hand in hand with new legal doctrines designed to
address the consequences of industrialization. One of the most important
was the doctrine of “substantive due process,” by which courts held it
appropriate to judge the constitutionality of legislative action not simply
according to procedural criteria of fairness but by examination of substantive
outcomes. The American Law Review summed the matter up nicely at the
end of the nineteenth century: “it has come to be the fashion . . . for courts
to overturn acts of the State legislatures upon mere economical theories
and upon mere casuistical grounds.” The New York Court of Appeals set
the doctrinal stage in the 1856 case of Wynehamer v. People, when it invoked
substantive due process to strike down a law designed to regulate the liquor
business. Thereafter the doctrine grew luxuriantly. The Iowa Supreme Court
in 1900 nullified a statute that permitted the use of oil for lighting purposes
only in lamps made by a particular manufacturer, but not in other lamps. The
judges reasoned that any manufacturer capable of producing the required
oil should be able to sell it to whomever they pleased.
By the early twentieth century, state courts were regularly striking down
statutes based on their reading of state constitutions. Because state constitutions
had become both longer and more code-like in character over
the course of the nineteenth century, the courts of last resort found more
and more grounds on which to act. Between 1903 and 1908, for example,
state courts struck down more than 400 laws. Although the state appellate
judiciaries generally held office for limited terms, judges claimed that
election provided them sufficient popular support to legitimize their interventions.
The tendency toward increased judicial activism needs to be kept in perspective.
State appellate courts upheld the vast majority of economic regulatory
legislation, leaving legislatures to apply state police powers broadly. Legislation
that remained unquestioned included, for example, regulation of the
professions, development of a system of occupational licenses, and limitations
on the hours and conditions of labor. Still, appellate judges by 1920
had firmly established their right to decide conclusively what their state
constitutions meant.
State Courts and Reform
The claim of judicial review drew the attention of Progressive reformers.
State judges, they argued, exercised their powers of review indiscriminately;
they campaigned for office by promising that once on the bench they would
decide issues not on the merits but with particular, predetermined outcomes
in mind. The American Judicature Society took steps to promote adoption
of non-partisan judicial elections, as well as measures to force disclosure of
the sources of contributions to judicial election campaigns, and to encourage
greater judicial professionalization. The most important gains occurred in
heavily urban states, such as New York, where judicial corruption and
boss-driven politics were connected. The Society’s greatest success would
not come until the 1940s, however, when it pioneered the so-called Merit or
Missouri Plan of judicial selection to reduce partisanship and electioneering
in judicial selection.
The attack on accepted partisan forms of judicial election was one facet
of a broader effort to rein in the direct impact of politics on the courts
while elevating the professional administration of justice generally. Future
Harvard Law School dean Roscoe Pound initiated this movement in 1906
when he authored a wholesale indictment of the shortcomings of state
court systems. State courts, Pound charged, were rife with corruption and
influence-peddling. They were also by and large completely incoherent in
their approaches to law, notably at the lower levels of limited and general
jurisdiction. As illustration of the state courts’ shortcomings, Pound
brought up the example of New York judge Albert Cardozo, father of future
Supreme Court Justice Benjamin Cardozo, who some thirty years before had
been convicted and jailed for taking bribes. Pound’s report concluded that
each state’s courts should function as an integrated system in order to break
down what Pound viewed as a destructive pattern of local autonomy. That
meant, among other things, bringing greater administrative coherence to
their operation, so that courts located beside one another would in fact
know what the other was doing. The goal was to unify the court structure
by consolidating and simplifying its management, budgeting, financing,
and rule making. Pound’s unification movement was only beginning to
gather steam by 1920, and it has proceeded by fits and starts since then. For
all of these reform efforts, the state courts remained very much creatures of
the political cultures in which they operated.
Pound’s call for reform blended with growing demands after the Civil
War from the developing legal profession to improve the quality of state
courts. As lawyers organized themselves as a profession, they expected
judges to become more professional as well. First, new state bar associations,
then the American Bar Association, founded in 1878, and then the American
Judicature Society campaigned to improve municipal and metropolitan
courts and to promote specialization of courts. For example, the movement
to record proceedings in several major municipal court systems dates to
the early twentieth century. Several states, following the model of the first
juvenile court in Chicago in 1899, began to adopt statewide systems of
specialized courts that provided consistency and predictability in application
of the law. Growing concerns about the fate of juveniles were echoed
in increasing doubts about the viability of the family and the adequacy of
the existing court structure to deal with matters of adoption, divorce, and
child custody. In 1914 Cincinnati pioneered the development of courts with
jurisdiction over cases involving both children and families. Similar courts
appeared shortly thereafter in other selected cities, including Des Moines,
Iowa; St. Louis, Missouri; Omaha, Nebraska; Portland, Oregon; Gulfport,
Mississippi; and Baton Rouge, Louisiana.
The rise of a class of consumers generated a new stratum of small claims
courts, although they did not necessarily function to protect the buyer. The
first small claims court in the United States was established in 1913 in
Cleveland as the Conciliation Branch of the Municipal Court. The movement
subsequently spread across the nation. Ironically, what was viewed at
its inception as a reform designed to give the common person easy access to
justice and to unclog the existing courts to deal with more serious matters
often became instead a means for doctors, utility managers, and department
store heads to collect debts owed by persons usually of modest income.
State courts formed the core of the new American legal system, dispensing
justice over a broad area in ever greater volume. To all intents
and purposes, justice from 1790 to 1920 meant predominantly local justice
meted out through local judges embodying the power of the state. This very
localism was a source of considerable strength, but also, as Willard Hurst
has observed, increasingly of limitation. As the Republic matured, as affairs
of economy, society and state grew ever more complex and intertwined,
state courts became increasingly vulnerable to incursions from the federal
judiciary.
III. THE CONSTITUTION AND THE ESTABLISHMENT
OF THE FEDERAL COURTS
The steady expansion of judicial power in nineteenth-century state courts
was matched by similar developments in the federal judiciary. What
emerged by 1920 was a uniquely American scheme of courts, characterized
in particular by a substantially more powerful and influential federal
court system than had been in existence at the nation’s inception.
The federal Constitution crafted in 1787 was designed to bolster the
authority of the national government through the establishment of an independent
federal judiciary. While the debates in the Constitutional Convention
gave relatively little attention to the issue of courts, the document
that emerged sketched an entirely new court system, most fully realized
in Article III, but with implications for the federal courts’ structure and
function scattered also through Articles I, IV, and VI.
Article III established “constitutional courts” based on “the judicial
power of the United States,” vested in “one Supreme Court, and in such
inferior Courts as the Congress may from time to time ordain and establish.”
As in so many other instances, the framers drew on their state experience in
establishing the federal judiciary. Most of them embraced the idea that the
federal courts would curb popular excesses while preserving minority rights
of property holders. James Wilson was a notable exception; he believed that
the federal judiciary derived its authority as much from the people as did the
elected members of the executive and legislative branches. The second most
active voice in the Convention, Wilson insisted that the power of judges
derived not just from their knowledge of the law but also from the direct
grant of authority made by the people to them when the Constitution was
created.
The federal courts drew intense scrutiny in the ratification debates, and
they remained a source of controversy throughout the nineteenth century.
Supporters of the new federal judiciary downplayed their importance.
Alexander Hamilton insisted in Federalist 78, for example, that the courts
would be “the least dangerous branch” because they had access to neither
purse nor sword. According to Hamilton, the federal courts would exercise
judgment instead of will, and law instead of politics. These together – probity
and the rule of law – would become the bedrock of the federal courts’
authority. Behind Hamilton’s words lay a deeper understanding that the
success of the American economy depended on federal courts strong enough
to impose a national rule of law, one that would bring stability and order
to the new nation’s commercial and financial dealings.
Anti-Federalist opponents of the Constitution, on the other hand, viewed
the federal courts as a threat to the sovereign rights of the states and even to
the liberty of the American people. Robert Yates, of New York, insisted that
the Congress, being accountable to the people, should be the final interpreter
of the Constitution and that the role of the new federal courts should be
strictly limited. He and other opponents of the federal Constitution argued
that by making the courts and their judges “totally independent, both of
the people and the legislature . . . [we] are . . . placed in a situation altogether
unprecedented in a free country.”1
Article III secured two great structural principles: federalism and the separation
of powers. The Supreme Court became the nation’s highest appellate
court (it heard cases brought on appeal from other federal and state courts).
The lower federal courts were to operate as the trial courts of the federal
system, with special responsibilities initially in the areas of admiralty and
maritime law. The strong nationalists in the Philadelphia Convention had
wanted to specify the structure of the lower federal courts, since they feared
that without doing so the already established state courts would dominate
the interpretation of federal law. The strongest advocates of state power
in the Convention, such as Yates, proposed precisely the opposite – that the
task of interpreting the federal Constitution and conducting federal trials
should be assigned to these same state courts.
The two sides settled their differences over the federal courts by deferring
many issues to the first Congress and by leaving the key provisions of the
Constitution dealing with the courts vague. This approach stood in contrast
to state constitutional documents that typically spelled out in detail the
structure of state courts. Article III did mandate the Supreme Court, but
it left Congress to determine its size and the scope of its appellate jurisdiction.
The Constitution granted the Supreme Court only a limited original
jurisdiction in matters involving ambassadors, other public ministers and
consuls, and those in which a state was a party. The Constitution was also
silent on the question of the qualifications of the justices and the judges
of the lower courts. For example, there was no constitutional requirement
that a judge be an attorney, although throughout the history of the nation
only persons trained in the law have served on the federal bench.
Finally, the Constitution failed to specify one of the federal judiciary’s
most important powers: judicial review, the practice by which judges declare
unconstitutional acts of Congress and state legislatures. The framers certainly
anticipated that judicial review would be exercised; the only unknown
was its scope. Anti-Federalist Luther Martin, for example, observed during
the convention that “As to the constitutionality of laws, that point will
come before the Judges in their proper official character. In this character
they have a negative on the laws.” It did not follow, however, that they could
do what they wanted; delegates of every ideological stripe posited a sharp
distinction between constitutional interpretation necessary to the rule of
law and judicial lawmaking. “The judges,” concluded John Dickinson, a
Federalist, “must interpret the laws; they ought not to be legislators.”
1 Essays of Brutus, No. XI, reprinted in Herbert J. Storing, The Complete Anti-Federalist (1981), 2, § 2.9.135.
There was, however, a textual basis for the exercise of federal judicial
review, especially of state laws. Article VI made the Constitution the
“supreme Law of the Land,” and in Article III the courts were named as
interpreters of the law. The same conclusion can be reached by combining
Article I, section 10, which placed certain direct limitations on the state
legislatures, with the Supremacy Clause and Article VI. Simply put, judicial
review of state legislation was an absolute necessity under the framers’
compound system of federalism. Here too, nevertheless, the scope of the
power remained to be defined. “The Framers anticipated some sort of judicial
review,” the famed constitutional scholar Edward S. Corwin observed.
Of that, “there can be little question. But it is equally without question
that ideas generally current in 1787 were far from presaging the present
vast role of the Court.”
Article III also conferred jurisdiction (the authority by which a court can
hear a legal claim) in two categories. The first was based on subject and
extended to all cases in law and equity arising under the Constitution, laws,
and treaties of the United States, as well as cases of admiralty and maritime.
The second category depended on the nature of the parties in legal conflict.
This jurisdiction included controversies between citizens of different states,
between a state and citizens of another state, between states, and between
states and the nation.
Most of the delegates to the federal convention appreciated that the rule of
law in a republican government required an independent national judiciary
that would be only indirectly accountable. Thus, they granted the president
authority to appoint federal judges with the advice and consent of the Senate.
Once commissioned, these judges held office during good behavior, their
salaries could not be diminished while in office, and they were subject to
removal from office only “on Impeachment for, and Conviction of, Treason,
Bribery, or other high Crimes and Misdemeanors.”
More telling than the generalities of the Constitution itself, the single
most important moment in the development of the federal courts was the
Judiciary Act of 1789, a statute whose impact continues to this day. In
debating what to do with the federal courts, the first Congress echoed the
sentiments of the often conflicted delegates in Philadelphia. Critics of the
federal courts in the first Congress continued to insist that they were not
necessary, that their roles could be performed by state courts, and that
they were, in any case, a “burdensome and needless expense.” These debates
remind us of the inherent localism of the American court system. Opponents
claimed that federal judges would be remote and insensitive to state and
local issues and that those persons charged with crimes would be hauled from
their homes and tried in faraway places where they and their good characters
would not be known. Proponents of a strong national government, led by
Senator Oliver Ellsworth of Connecticut, prevailed, and in the Judiciary
Act of 1789 Congress exercised its powers to create lower federal courts,
just as the Federalists had desired. However, the Act lodged the new courts
squarely in the states, a decision meant to placate Anti-Federalists. This
politically acceptable compromise established a federal court organization
that remained in broad terms unchanged for more than a century.
The 1789 act divided the new nation into thirteen districts and made
the boundaries of the courts in these districts coterminous with those of the
states. (Massachusetts and Virginia received two each, Rhode Island and
North Carolina none because at the time they were still not members of
the Union.) The act also divided the country into three circuits, in each of
which a circuit court consisting of two justices of the Supreme Court and
one district judge in the circuit would sit twice a year. The circuit courts,
whose history was to be unsettled for more than a century, entertained
appeals from the district courts below and held jury trials involving the
most serious criminal and civil cases to which the federal government was a
party. The Supreme Court itself was composed of five associate justices and
a chief justice.
The act made Supreme Court justices into republican schoolmasters
whose presence in the circuits symbolized the authority of the remote
national government. Circuit riding, which persisted in various ways
throughout the nineteenth century, also exposed the justices, in their capacity
as trial judges, to local concerns. However, circuit riding was unpopular
with the justices, for it exacted a heavy physical and mental toll. Justice
William Paterson would complain bitterly that his travels through Vermont
were so arduous that “[I] nearly went out of my head.”
The 1789 act confirmed the power of Congress over the jurisdiction
of the lower courts, and indeed over their very existence. Their allotted
jurisdiction consisted of admiralty cases (given exclusively to the district
courts) and cases concerning diversity of citizenship, with a limited appellate
jurisdiction in the circuit courts over district court decisions. Federalists
did succeed in section 25 of the act in allowing federal courts to review state
court decisions involving federal laws and the Constitution, a provision that
stirred heated debate until the Civil War. The new structure was notable
because it actually withheld from the federal courts the potentially much
broader power to hear all cases arising under the Constitution. As a result,
for more than three-quarters of a century state courts played a distinctive
role in interpreting the nation’s ruling document and some of the laws
associated with it.
While the creation of a federal court structure below the level of the
Supreme Court had a strong nationalizing impact, the provisions of the
1789 act also recognized the strongly local quality of the courts. District
judges, for example, not only lived among the people they served, but
section 34 directed that on comparable points of law federal judges had
to regard holdings in state courts as the rule of decision in their courts.
Furthermore, district court judges were to be recruited from local political
and legal backgrounds, and these lineages made them susceptible to the immediate pressures of the friends and neighbors who appeared before them
and whose lives were often directly affected by their decisions. These federal
district courts and the judges that presided over them were a kind of hybrid
institution, organized by the federal Constitution but sensitive to state
interests. The upshot was that during the course of the nineteenth century
the federal courts only gradually pulled even with the state courts in prestige
and power.
IV. THE FEDERAL COURTS
As was true at the state level, the history of the federal courts from 1790
to 1920 shows consistent attempts to shape the courts’ structure and jurisdiction
in ways intended to produce a political and legal advantage for
the majority in control at any particular moment. Over time, the federal
courts grew more influential, more controversial, and, ironically, more
widely accepted than at the time of the nation’s founding.
The structure of the courts has generated political debate for more than
two centuries. Throughout, the forces of localism, political influence, and
administrative efficiency have tugged at one another. Circuit riding and
the larger issue of the organization of the federal courts offer appropriate
examples.
Circuit riding was at once an administrative weakness and a political benefit of the new federal court structure established by the 1789 Judiciary
Act. The first members of the Supreme Court were assigned not only to meet
in the nation’s capital (initially New York City) to hear and decide cases but
also to hold courts in designated circuits. The practice, however, imposed
often severe physical hardships on the justices, who faced the daunting task
of traveling over poor roads and hazardous rivers. In 1793 the Federalist
Congress bowed to pressure from the justices and made a minor change in the
system by providing that only one justice rather than three had to serve in a
circuit. More fundamental change took place in 1801, as the Federalist Party
was going out of office. Congress in the Judiciary Act of that year abolished
circuit riding altogether and created in its place an expanded circuit court
system to be staffed by its own appointed judges. The change had the
immediate political benefit of granting John Adams’ outgoing Federalist
administration the opportunity to appoint a host of politically loyal judges.
A year later, however, newly elected President Thomas Jefferson and the
Jeffersonian Republican majority in the Congress reintroduced a system of
six circuits, to each of which one Supreme Court justice and one district
court judge were assigned. The new federal circuit courts were abolished;
not until 1869 were separate circuit court judgeships reestablished. The
Jeffersonian Republicans were no fans of the federal courts in any case, and
they took some delight in imposing circuit court riding duties on Supreme
Court justices. The new circuits, which became essentially trial courts rather
than courts of appeal, proved as unwieldy for the justices as the earlier arrangement had been.
The justices found circuit riding increasingly oppressive, especially in the
newly expanding western regions of the country. By 1838, for example, the
number of federal circuits had risen to nine. In that year the justices reported
to Congress that they traveled an average of almost 3,000 miles a year,
an astonishing distance given conditions of travel. Justice John McKinley
traveled more than 10,000 miles in his circuit, composed of Alabama,
Louisiana, Mississippi, and Arkansas. He reported that he had been unable
to hold court in Little Rock because of a combination of flooding and bad
roads.
Until the Civil War, the organization of the federal courts changed little.
The war and the post-war period of Reconstruction, however, profoundly
accelerated the push toward a stronger national government and a more
powerful federal judiciary to uphold it. In 1875, the Republican-controlled
Congress adopted a new judiciary act that expanded the jurisdiction of the
federal courts far beyond the modest bounds established in 1789. Republicans
expected the act to permit newly freed slaves to circumvent the
prejudice of state courts, but in practice the law most benefited interstate
businesses. The most important change was a provision granting the federal
courts original jurisdiction based on the “arising under the Constitution”
provision of Article III, or under national treaties, provided the matter in
dispute exceeded $500. This meant that a litigant could initiate a case
in a circuit court based on the assertion of any federal right. As important,
a defendant who was brought into a state court could have the case
removed to the ostensibly more neutral national forum of a federal court.
Either party, then, could remove a case to federal court. In addition, any
and all diversity suits could be removed, even when one of the parties did
not live in the “forum” state (that is, they were not resident in the state
where the federal court proceeding was to be held). Most important, the act
permitted removal of all suits raising a question of federal law. Collectively,
these provisions effectively encouraged the removal of suits from state to
federal courts, from local to national forums of law.
The Judiciary Act of 1875 became a milestone in the history of the
lower federal courts’ relationship to the business community. The statute
responded to surging national commerce, in particular to railroad corporations
seeking relief from state courts in cases involving foreclosure, receivership,
taxation, and even injuries to person and property. Not only were traditional cases based on diversity jurisdiction now before the federal courts, but so too were all actions involving federal laws. The act meant that for the first time a
generalized federal question jurisdiction had been established – a jurisdiction
that, as Justice Felix Frankfurter once observed, has come to be the
indispensable function of the federal courts.
One of the consequences of this expanded jurisdiction was that the
caseloads of the federal courts soared. For example, in 1870 the Supreme
Court docket listed 670 cases. By 1880 the number had more than doubled.
In 1870 federal district and circuit court dockets listed some 29,000 cases.
By 1890 the number was more than 54,000. The lower federal courts grew
in prestige and importance, emerging as “forums of order” in which interstate
businesses could secure a hearing free from the local interests to which
state courts presumably paid greater attention. That process had begun in
1842 when Justice Joseph Story’s decision in Swift v. Tyson established a
federal common law of commerce. It gathered momentum after the Civil
War and continued unchecked into the New Deal of the 1930s.
A doubling of caseloads without an increase in the number of federal
judges prompted delays not only in hearing but even more important in
deciding cases before the federal courts. Although litigants were keen to
turn to the federal courts, especially in matters involving the regulation of
business by state and federal governments, they often encountered delays of
years in having suits resolved. Growing demand and the increasing importance
of the federal courts also meant rising costs. Between 1850 and 1875,
the expense of operating the federal courts rose six-fold, from $500,000 to
$3 million. By 1900 the figure had tripled, to $9 million. By 1920 it stood
at $18 million.
In 1891, at the behest of a combination of corporate entities and the newly
minted American Bar Association, Congress passed a further Judiciary Act
to address these organizational problems. The 1891 act established a new
and badly needed layer of federal courts just below the Supreme Court: the
U.S. Courts of Appeal. Two new judges were to be appointed in each of the
nine federal circuits that now stretched from coast to coast. The act also
provided that a Supreme Court justice might serve as a third judge in each
of the new courts, but did not make the justice’s participation compulsory:
If required, a district court judge could take the justice’s place. The act did
not do away with the existing circuit courts. Rather, the U.S. Courts of
Appeal were to review appeals from both federal district and circuit courts.
The lack of clarity in the relationship between the new courts of appeal and
the existing circuit courts meant a degree of jurisdictional confusion.
Most significantly, the 1891 act increased the Supreme Court justices’
control over their own docket. Congress provided that decisions in the new
circuit courts of appeal would be final, subject in most cases only to a writ
of certiorari issued by the Supreme Court. This new authority gave the
justices greater ability to order their agenda based on their assessment of
the significance of a particular constitutional controversy. The new Judiciary
Act had the added effect of underscoring for litigants the importance of the
lower federal courts, since from that point on their decisions were given an
increased finality.
Three additional steps taken in the first quarter of the twentieth century
completed the transformation of the federal courts. First came the Judiciary
Act of 1911, which finally abolished the federal circuit courts reconstituted
by the 1802 repeal of the previous year’s Judiciary Act. The 1911 act
transferred the circuit courts’ powers to the federal district courts. Second,
congressional legislation in 1922 authorized the Chief Justice to oversee
the federal courts generally and to provide for the assignment of district
court judges where they were needed outside their own district. The act
also created the Judicial Conference of the United States, composed initially
of senior federal judges and expanded subsequently to include all federal
judges. The mission of the conference was to provide regular surveys of
the business in the various federal courts with an eye to transferring judges
between districts and circuits as caseloads demanded.
The third and most far-reaching step was the Judiciary Act of 1925,
popularly known as the Judges’ Bill. The outcome in good part of tireless
lobbying by Chief Justice William Howard Taft, one of the leading figures
in court reform during the twentieth century, the 1925 Judiciary Act clarified
the jurisdiction of the federal courts and formalized their three-tier
structure: district trial courts, courts of appeal, and the Supreme Court.
The act established the federal district courts as the preeminent federal trial
courts equipped with extensive original jurisdiction. The courts of appeal
were identified as the final resting place in federal appellate jurisdiction,
for the measure further broadened the Supreme Court justices’ discretion
in exercising review of lower court decisions under the writ of certiorari,
which necessarily further narrowed access by litigants as a matter of right.
As in previous instances of federal judicial reform, the 1925 act responded
to corporations interested in a uniform administration of justice and to bar
groups bent on improving the efficiency of federal (but not state) courts.
One of the critical roles filled by the district courts was the supervision
of bankruptcy. Article I, section 8, of the Constitution authorized Congress
to establish “uniform Laws on the subject of Bankruptcies throughout the
United States.” In 1841 Congress enacted its first attempt at comprehensive
bankruptcy legislation, setting out voluntary procedures for individuals and
largely ending imprisonment except in cases of fraud. Opponents considered
the act too protective of debtors, and it was repealed the following year.
A similar act was passed in 1867 and remained in effect for the next two
decades before it too was repealed. Finally, in 1898, Congress agreed on a
comprehensive bankruptcy statute setting out a body of law that would last
for almost a century. The act designated the U.S. district courts to serve as
courts of bankruptcy. It also established the position of referee, appointed
by district judges, to oversee the administration of bankruptcy cases and to
exercise limited judicial responsibilities under the guidance of the district
court.
During the nineteenth century Congress also created other specialized
tribunals to deal with matters falling outside the jurisdictional specifications
of Article III. Among these tribunals, territorial courts were of particular
importance. Territorial courts were temporary federal tribunals established
by Congress to extend federal justice into areas that had not yet achieved
statehood but were possessions (territories) of the United States. Territorial
courts combined the roles of both district and circuit courts. Their judges,
for the most part, had limited terms of office and were appointed by the
president with the advice and consent of the Senate. Unlike Article III
judges, territorial court judges could be removed for misfeasance without
impeachment. In 1900 there were six territorial courts. These courts were
implicated in a wide range of non-commercial issues. For example, in 1874,
Congress passed the Poland Act in an effort to stem the practice of polygamy
in Utah by bringing the weight of the federal government to bear. That law
assigned jurisdiction of polygamy trials to federal territorial courts there
and further provided for polygamy convictions to be appealable to the U.S.
Supreme Court. In 1878 the Supreme Court of the United States, in Reynolds v. United States, sustained a Utah territorial court’s conviction of Brigham Young’s private secretary, George Reynolds, and upheld the constitutionality of the federal prohibition of polygamy.
In 1855 Congress created another special non-Article III court, the Court
of Claims. Like the judges of the federal courts of general jurisdiction – the
Article III courts – the three judges of the Court of Claims were nominated
by the president, confirmed by the Senate, and served with life tenure
during good behavior. The Court had jurisdiction to hear and determine
all monetary claims based on a congressional statute, an executive branch
regulation, or a contract with the U.S. government.
Prior to the court’s creation, claims against the government were submitted
through petitions to Congress itself. The 1855 act relieved Congress of
the workload, but preserved its traditional control over the expenditure of
all public monies by requiring the new court to report on its determination
of claims and prepare bills for payments to successful claimants. In 1863,
the Court of Claims gained authority to issue its own decisions rather than
report them to the legislature, but the revised statute still required that the
Treasury Department prepare an estimate of appropriations necessary to
meet determinations made by the court before any money was distributed.
In 1865, this resulted in a refusal on the part of the Supreme Court to
hear appeals from the Court of Claims because its decisions were subject to
review by an executive department. Within a year, Congress repealed the
provision for review by the Treasury and specifically provided for appeals
to the Supreme Court. Twenty years later (1887) Congress expanded the
jurisdiction of the Court of Claims by making it the principal forum for all
claims against the federal government. It is worth noting that until 1946
this court provided the only legal channel available for Native American
tribes contesting violations of treaties with the United States.
V. THE U.S. SUPREME COURT
Since the Founding Era, the U.S. Supreme Court has been the single institution
with national authority to develop a uniform national law. But although
it sat atop the federal judicial pyramid in the nineteenth century, it only
gradually earned the power to say conclusively what the Constitution meant.
In its earliest years, indeed, the Supreme Court enjoyed little of the stature
it would later accumulate. Among the first justices appointed by President
George Washington, one declined to serve in order to take a more prestigious
position as state supreme court judge; another, though accepting the
position, failed to appear for a single session of the Court. The first Chief
Justice, John Jay, pursued diplomatic interests as aggressively as he did his
duties on the bench. Eventually he resigned altogether to become governor
of New York.
Delegates to the Philadelphia convention had agreed on the necessity of
establishing a Supreme Court, but they had reached no consensus on its
duties. Led by James Wilson, they had debated at length the creation of
a Council of Revision, consisting of the president and a number of federal
judges (James Madison’s Virginia plan) or cabinet officers (Charles
Pinckney’s proposal) to review federal (and perhaps state) legislation before
it became law. That idea eventually gave way to the Supreme Court, the
full scope of whose powers the delegates never defined fully. The president
was given authority to appoint the justices, with the advice and consent
of the Senate, and the members of the Court were to serve during good
behavior, subject, like other Article III judges, to removal upon impeachment by a majority in the House of Representatives and conviction by a vote of
two-thirds of the members of the Senate. Of the 110 justices who have
served on the high court to date, only one, Samuel Chase in 1804, has ever
been impeached. Chase escaped conviction the following year.
Over the course of the nineteenth century the authorized size of the
Court varied from six to ten, changing – in response both to the expansion
of the federal circuit system and to political pressures – on no less than six
occasions before 1869, when the present number of nine was established.
Every justice appointed to the high court during these years (and indeed
through 1967) was a white male.
The Supreme Court’s original jurisdiction, as outlined in Article III, was
modest. It was further limited early in the Court’s career by the famous case
of Marbury v. Madison (1803), in the course of which the Court itself decided
that jurisdiction to issue writs of mandamus directed to other branches of
government, as provided in the 1789 Judiciary Act, was unconstitutional.
Cases heard under original jurisdiction, however, comprise only a tiny fraction
of the Court’s business, slightly more than 150 cases in the past two
centuries. That jurisdiction extended only to “all cases affecting ambassadors,
other public ministers and consuls, and those in which a state shall
be a party.” The Court, moreover, has never accepted that it lacks discretion to refuse such cases; instead, it has declined to hear cases within its original jurisdiction unless there is a compelling reason to do so. Through 1920, the
cases that it did accept involved disputes over state boundary lines and
water rights between two or more states.
By far the most important jurisdiction granted the Court was appellate.
During the nineteenth century the justices steadily expanded that jurisdiction
and by 1925, as we have seen, they had also gained significant control
over their docket. Part of their motivation in doing so reflected the growing
belief, as Tocqueville noted, that political matters were, for purposes
of political stability, better managed through legal and judicial processes
than by political branches alone. To an important extent, then, the power
of the Supreme Court developed because Congress was willing for the sake
of political expediency to leave difficult matters of public policy, such as
the question of whether slavery could exist in the territories, to be shaped
by the Court through law rather than politics. But the expansion of the
Court’s appellate jurisdiction was also prompted by Congress’s belief, usually
driven by demands from lawyers and the business community, that
it would contribute to enhanced efficiency in the Court’s operations and
enhanced uniformity in federal law across the circuits and throughout the
states.
Originally, cases were appealed most frequently to the Court based on a
claim that an error had been committed in a court below. The justices, under
this system, had little discretion over their docket. Thus, as the caseload
of the federal courts grew, so too did the numbers of appeals. During its
first decade, the Court heard fewer than 100 cases. By the mid-1880s the
high court had more than 25,000 cases docketed, and it decided as many
as 300 in a single year. Congress, however, has consistently given the high
court greater discretion over its docket, with clear results. As it became
more difficult for a case to reach the Supreme Court, the decisions of the
justices became correspondingly more important, with public attention
increasingly focused on them.
The history of the high court up to 1920 was the history of vital leadership.
The justices played a decisive although often controversial role in
public affairs, expanding their influence often while disavowing that they
either wanted or should have such influence. For example, in addressing a
directive from Congress to seat federal judges as pension claims commissioners,
Chief Justice John Jay stated in Hayburn’s Case (1792) that Congress could only assign judges to judicial and not administrative duties. The following year, Jay refused President George Washington’s request for an advisory interpretation of the 1778 Franco-American treaty. By limiting the
Court to actual cases and controversies, the early justices assured themselves
that when they spoke they did so in ways that would have direct
rather than imagined consequences, while also avoiding overt political and
policy involvements.
Chief Justice John Marshall (1801–35) built on this early foundation
by establishing the authority of the Court to interpret conclusively the
meaning of the Constitution. He did so by confirming the Court’s capacity
to exercise judicial review – first for federal legislation in Marbury v. Madison
(1803), in which the Court declared a portion of the Judiciary Act of 1789
unconstitutional; later for state legislation in such cases as McCulloch v.
Maryland (1819), in which the Court voided a Maryland law imposing a tax
on the Second Bank of the United States. The cost of this heightened judicial
authority over constitutional interpretation was inevitably the judiciary’s
greater involvement in the political system.
Marshall’s successors expanded the scope of judicial review and the prestige
of the Court at the same time that they refused to adjudicate so-called
political questions. In Luther v. Borden (1849), Chief Justice Roger B. Taney
held that the question of which of two competing governments in Rhode
Island was legitimate was entirely “political in nature.” Therefore, Taney
concluded, the political branches of the federal government, not the courts,
could best determine whether Rhode Island or any other state had met
the mandate of the Guarantee Clause of Article IV that each state have a
republican form of government. The judiciary, Taney observed, had no role
to play; its business was legal, not political.
Taney would himself succumb to the seductive influences of judicial
power and in so doing provide a stark reminder of the costs to the high court
of blurring the distinction between what was legal and what was political,
between interpreting the law and making the law. In Dred Scott v. Sandford
(1857), Taney spoke for a majority of the Court in attempting to settle
the politically explosive issue of slavery in the territories by declaring that
persons of African descent were not citizens of the United States and that
they had no rights that white men were bound to respect. For good measure
the Chief Justice made sure that incoming President James Buchanan, a
supporter of slavery in the territories, knew of the Court’s decision so that
he could include an oblique reference to it in his inaugural address. Taney’s
opinion stirred outrage among free-state Republicans on the eve of the
Civil War and sharply divided the public over how much power the justices
should exercise. Similar outcries came when, in Pollock v. Farmers Loan and
Trust Company (1895), a bare majority of the Court declared the federal
income tax unconstitutional, a position that was not reversed until the
ratification of the Sixteenth Amendment in 1913. A year later and with
only one dissenting voice, Plessy v. Ferguson (1896) sustained segregation
of the races based on the principle that separate but equal facilities met the
requirements of the Equal Protection Clause of the Fourteenth Amendment.
The high court generally supported the regulatory efforts of both state
and federal governments, but the justices learned that they too could employ
substantive due process to block legislative action when it seemed appropriate
to do so. In Lochner v. New York (1905), for example, a sharply divided
court struck down a New York state law that prohibited bakers from working more than ten hours a day or sixty hours a week. The majority said that laborers
should be free to strike whatever deal they could with an employer; Justice
Oliver Wendell Holmes, Jr., in dissent insisted that the majority was
merely reading an economic theory that favored business into the Constitution.
Three years later, in Muller v. Oregon, the same court entirely ignored
its Lochner precedent and decided to shine a paternal eye on women. A
unanimous court held that the state of Oregon had power to regulate the
conditions of labor of women because women were both emotionally and
physically inferior to men. Progressive reformers argued that the Court
needed to change and among the more aggressive suggestions was doing
away with tenure during good behavior.
By 1920, both by design and circumstance, the purportedly apolitical
Supreme Court had emerged as more than a court but less than a full-blown
political institution. It was, in that regard, a metaphor for the entire
American system of courts. What its history has repeatedly shown is a court
that paradoxically functions as part of the world of politics without being directly
in that world.
CONCLUSION: THE COURTS AND NINETEENTH-CENTURY CHANGE
Common law courts typically operate after the fact. They tend to respond
to rather than anticipate change. The American court system between 1790
and 1920 exuded just such qualities. Litigants had to bring cases; lawyers
representing them had to present arguments that squared precedent with
new circumstances. But if continuity was a major chord, change and adaptation
were certainly also present. Slavery, segregation, industrialization,
massive influxes of foreign-born migrants, and the development of new
technologies meant that courts could not always simply do as they had previously
done. Nor did judges simply mirror the economic and social changes
of the times through which they lived; they also attempted to shape the
effects of change in allocating the costs, risks, and benefits of economic
development while protecting individual property rights. In the process,
they acquired new authority. By 1920 the courts exercised judicial review
extensively, using that power to adjust the consequences of industrialization,
sometimes by setting aside legislation and at other times by allowing
it to stand. Even when they did not strike down a law, the simple fact that
they were capable of exercising such a power made their capacity to limit
legislative authority as important as the actual limits they imposed. The
courts became better articulated with social and economic outcomes, and
their judges more professional.
Courts’ efforts to respond to the new industrial order were mixed, ambivalent,
and even contradictory. They persisted to an extraordinary degree, even
in states with elected judiciaries, in the belief that traditional property
rights required continuing judicial protection. While judges were most
often deferential to legislatures, they nevertheless recognized that property
rights were sacrosanct. Breaking new legislative ground in matters of the
rights of workers, African Americans, immigrants, or women was hence
often beyond either their imaginative grasp or indeed their will to act.
As the 1920s opened, nevertheless, there was no doubt that, for all of the
diversity in the American system of judicial federalism, courts as a whole
had established a firmer place in the American system of governance than
they enjoyed at the nation’s beginning.
5
criminal justice in the united states, 1790–1920: a government of laws or men?
elizabeth dale
Histories of modern criminal justice are less studies of doctrine than they are
examinations of the state, since it is generally assumed that the institutions
of criminal justice – police, courts, and prisons – play an integral role
in the process by which modern states maintain the order that advanced
capitalist economies demand. But while most accounts of criminal justice
in the modern West trace the way a formal, rational system of criminal
justice based on the rule of law developed alongside a capitalist economy
and a national state, the history of criminal law in the United States follows
a different track. Although the long nineteenth century, stretching from
ratification of the Constitution at one end to the close of World War I at the
other, was marked by the emergence of an advanced, nationwide capitalist
economy, it saw the development neither of a national state nor a national
system of criminal justice.
Even as they position the United States outside the standard track of state
development, histories of criminal law in the United States still trace its
evolution along a parallel route, demonstrating that over the course of the
long nineteenth century the country developed a localized state. It differed
from the traditional state to the extent its scope was smaller, encompassing
only the institutions of city, county, and state governments, instead of
a national bureaucracy, and its operations were, as a result, on a smaller
scale. But many have argued that its smaller scale was its greatest strength.
Relative locality permitted a degree of popular participation unimaginable
in a state based on national bureaucracies; the nineteenth-century American
state encouraged popular sovereignty. The result, while not a traditional
state in the Weberian sense, shared with the Weberian states an emphasis on
law, criminal law in particular. Throughout the nineteenth-century United
States (the slaveholding South is invariably the exception that proves the
rule), the local state maintained order by channeling disputes into the state
court system, which ruled according to locally understood norms, defined
and applied by the people of the community. In addition to maintaining
the discipline the national economy required, these local criminal courts
offered opportunities for popular participation through service on juries, by
means of private prosecutions, and by electing court judges. The breadth of
participation was such that in much of the country (once again, the South
was the exception) even those excluded from voting or holding office by
reason of sex, race, or poverty could exercise some sovereignty through their
involvement in the local courts.
The resulting history has been one of American distinctiveness, a unique,
indigenous version of the rise of the state. But it has also been an extremely
court-centered view. If the point of reference widens beyond the formal
institutions of law, to consider what happened within the criminal justice
system as part of what happened in the society outside the courts, the picture
that emerges is no less distinctive, but considerably less uplifting. As we
will see, the wider frame of reference raises serious questions about whether
there ever was a state in the United States, even at the local level, during
the long nineteenth century. Local governments, North and South, never
developed the authority a state requires, with the result that they were
never able to exercise a monopoly on violence or implement the certainty
of a rule of law that state theory requires. Far from being instruments
of popular sovereignty, local courts were all too often nothing more than
tools of private justice, easily supplanted by extra-legal practices, while
substantive law was ignored and unenforceable. Theories of punishment
were undermined all too easily by private interests driven by a desire to
make a profit rather than by theories of penology.
I elaborate on these contentions in what follows and, in so doing, construct
an alternative history of criminal justice in the nineteenth-century
United States. First, I revisit the ambiguous role that national government
played in criminal law from the ratification of the Constitution to the Red
Scare that came at the end of World War I. Then I turn to criminal justice
at the local level. My exposition is arranged in the order of a criminal
case: policing is followed by prosecution, and we end with a section on
punishment. Each section canvasses local justice on a national scale, examining
points of similarity and difference between the practices in the North
and South; sections on formal institutions are balanced by considerations
of informal practices. The picture that ultimately emerges is of a criminal
justice system that rested on popular passions and pragmatic practices as
much as on legal doctrine – a government of men, not laws.
I. A MARKET REVOLUTION WITHOUT A NATION-STATE
Shortly after the War of 1812, the United States began to develop a national
market economy. By the 1840s, that economy was mature. While the various
parts of the country participated in it differently – some through manufacture,
some through the national and international sale of goods, others
through the interstate sale of slaves – all had felt its effects long before the
first shot was fired on Fort Sumter. So too, each experienced some impact
of the economy’s industrialization in the decades following the Civil War.
Yet even as the economy achieved national scale, no nation-state arose in
the United States.
The non-appearance of the nation-state was a consequence of repeated
choices, not constitutional imperative. While the American Revolution
may be read as resistance to efforts to bring the colonies into the variation
on the nation-state that England was developing, and the Articles of
Confederation as the codification of an extreme anti-state position, the subsequent ratification of the Constitution was a step back from that position. How large that step had been was hardly a matter of consensus,
as the endless antebellum debates over states’ rights demonstrated.
The impact of those debates was particularly felt in the area of criminal
law. Just before the start of the Market Revolution, in 1812, the Supreme
Court decided United States v. Hudson and Goodwin,1 which declared that
there could be no federal common law of crimes. The Court’s conclusion
that nothing in the Constitution permitted the federal courts to take on
a general criminal jurisdiction stood in marked contrast to the concurrent
development of a federal common law of commercial transactions, which
the Court formally recognized in Swift v. Tyson in 1842 and which remained
good law until 1938, when it decided Erie Railroad v. Tompkins.2 Yet Hudson
did not hold that the Constitution reserved the authority over criminal law
for the states. Instead of framing the problem in terms of federalism, the
Court’s decision turned on its conclusion that the federal courts had only
limited, rather than general jurisdiction, and could only act where Congress
expressly gave them power to do so. While that ruling left open the possibility
that Congress could pass an omnibus federal crime act, in the absence
of congressional action the federal courts were not empowered to handle
criminal cases.
Hudson actually resolved very little; its ambiguity was magnified by
Congressional inconsistency. As early as 1789, Congress gave all federal
courts the power to grant petitions of habeas corpus “for the purpose of an
inquiry into the cause of a commitment.” That act did not extend federal
habeas protection to state court actions, but made clear that the protections
existed for those held in federal custody. The next year, in the Federal
Crime Act, Congress officially created some federal crimes, involving acts
1 United States v. Hudson and Goodwin, 11 U.S. 32 (1812).
2 Swift v. Tyson, 41 U.S. 1 (1842); Erie Railroad v. Tompkins, 304 U.S. 64 (1938).
or offenses against the U.S. government. In 1793, it passed the first Fugitive
Slave Act, making it a federal crime to interfere with the capture of slaves;
at the end of the decade, Congress created more federal crimes with the
passage of the four Alien and Sedition Acts of 1798. Over the next sixty
years, Congress passed several other substantive criminal laws: in the 1840s
it prohibited postmasters from serving as agents of lotteries and banned
the importation of “indecent and obscene” prints and paintings; in 1860, it
passed a law intended to protect women who immigrated from seduction
on board ship. Other acts of Congress in the 1820s and 1830s outlawed
lotteries and dueling in the District of Columbia and criminalized the sale
of alcohol in “Indian Territory.” These laws represented only a part of the
morals legislation Congress was asked to pass in the decades before the
Civil War, but other efforts typically failed not as a matter of constitutional
principle, but because Southern Congressmen were increasingly hostile to
any sort of legislation that might provide a precedent for national regulation
of slavery.
Even as regional interests effectively blocked efforts to pass federal criminal
laws in the antebellum era, Congress expanded the federal role in criminal
justice indirectly. A law passed in the early 1830s extended the federal
habeas power, giving federal judges the authority to hear habeas corpus
petitions brought by individuals imprisoned by state or federal authorities
for “acts committed in pursuance of a law of the United States.” At the
end of the 1830s Congress expanded federal habeas power further, with
a law providing that federal judges could hear claims by state or federal
prisoners who were “subjects or citizens of a foreign state.” Throughout the
antebellum era, the federal government also created institutions of criminal
justice. In the Judiciary Act of 1789, the first Congress created the office of
U.S. Marshal and assigned one to each U.S. District Court. The marshals
had the power to arrest and detain, and each was empowered to employ
deputy marshals who could themselves deputize temporary deputies and
summon a posse comitatus. Marshals also had the power to ask the president
to call up the militia and order it to support the marshal, a power that one
marshal exercised just three years later, during the Whiskey Rebellion in
1792. Toward the end of the antebellum era, violent disputes between pro- and anti-slavery forces led to a further increase in marshals’ powers. In 1854, in the wake of the disturbances in the Kansas-Nebraska territories, a ruling by the Attorney General of the United States expanded those powers by establishing that marshals had the authority to deputize the
army as a posse.
Congress created other federal law enforcement agencies in the antebellum
era, most notably in 1836, when it gave the postal service the power
to hire inspectors to investigate postal crimes. Federal criminal jurisdiction
expanded further during the Civil War. Military tribunals, created initially to hear charges of treason and sabotage, tried civilians in a variety of ways for a variety of offenses. But as time passed the
jurisdiction of the courts expanded, and they ultimately heard cases involving
crimes that ran the gamut from fraud against the government to morals
offenses, such as selling liquor to Union soldiers. Federal law enforcement
power increased in other ways as well. In 1861, Allan Pinkerton’s detective
agency, which had previously engaged in investigation for local and regional
businesses (including the railroads), was hired to serve as the secret service
for the Northern army. Its writ ran wide. The Pinkertons investigated businesses
that defrauded the federal government, tracked and arrested those
suspected of spying for the Confederacy, and also tried to monitor enemy
troop strength. Two years later, in 1863, Congress established the Internal
Revenue Agency and gave it the power to investigate and enforce tax laws.
That same year, Congress authorized funds to pay for a private police force
under the control of the Secretary of the Interior. In 1865, this force was
made a permanent federal police agency – the Secret Service – under the
control of the Secretary of the Treasury. From 1860 to 1877 the federal
government had another “super” police force at its disposal in the shape of
the U.S. Army, which performed police functions in the states of the former
confederacy. In 1878, with the passage of the Posse Comitatus Act, Congress
formally took the power to enforce criminal laws from the armed forces.
But even after that act officially relinquished the power to
the states and their National Guard units, the army was used during labor
battles in the mining regions of Montana, and in 1894 in Chicago – over
the objections of the state governor – during the strike by the American
Railway Union against the Pullman Company.
The federal role in criminal justice expanded in other ways in the period
after the Civil War. In 1873 the Comstock Act authorized postal inspectors
to seize obscene materials (including information relating to contraception)
sent in the mail. In 1908, the Justice Department, acting initially without
Congressional approval, created an internal investigative unit, the Bureau
of Investigation, which also had the power to arrest. The Narcotics section
of the Internal Revenue Service was formed to enforce the federal drug regulations
just before World War I; during the war, and the subsequent Red
Scare of 1919–20, those agencies, along with the Secret Service, enforced
the sedition and draft laws and began the practice of collecting dossiers
on suspected subversives. In the period between the Civil War and World
War I, Congress passed a series of laws on criminal matters as well, deriving
its authority to do so from a variety of constitutional provisions. In the
Judiciary Act of 1867, it expanded the scope of the federal Habeas Corpus
Act, declaring that federal courts could issue the writ in “all cases where
any person may be restrained of his or her liberty in violation of the constitution,
or of any treaty or law of the United States.” Congress used
its powers under the Thirteenth and Fourteenth Amendments to pass the
Civil Rights Acts of 1866 and 1875, both of which included criminal sanctions.
In 1873, Congress relied on its constitutional authority to regulate
the mail when it passed the Comstock Act. Congress passed several pieces
of morals legislation, including the Lottery Act of 1895, which were based
on its constitutional authority to regulate commerce, as was the Sherman
Antitrust Act, passed in 1890, which established a range of criminal punishments
for monopolistic behavior. Twenty years later, in 1910, Congress
again relied on the Commerce Clause when it passed the Mann (White Slave)
Act, which made it a felony to transport a woman in interstate commerce
“for the purpose of prostitution or debauchery.” In contrast, the Espionage
Act of 1917 and the Sedition Act of 1918, omnibus laws criminalizing a range of subversive activities and spying, were based
on Congressional authority over the armed forces. The Volstead Act (1919),
which gave the federal government the power to enforce prohibition, was
passed pursuant to the Eighteenth Amendment.
In 1919, the Supreme Court affirmed convictions under the Espionage Act of 1917 and the Sedition Act of 1918 in Schenck v. United States and Abrams v. United States.3 But in
the period between the end of the Civil War and the end of World War
I, the Supreme Court’s rulings in the area of the federal role in criminal
law enforcement were marked by inconsistencies and confusion. The Court
upheld the Lottery Act in Champion v. Ames in 1903, and the Mann Act in
Hoke v. United States a decade later.4 In yet another decision on the Mann Act,
Caminetti v. United States, which was decided in 1917, the Court explicitly
confirmed that Congress had the power to regulate individual morality.5
Other Court rulings on federalism left the balance of state and federal
authority unclear. In the Civil Rights Cases (1883), the Court struck down
parts of the Civil Rights Act of 1875 on the ground that it infringed on the
police powers of the states.6 But in its decision on the Pullman strike, In re
Debs (1895), the Court upheld a federal court contempt proceeding arising
out of a federal injunction against the railroad boycott, and it justified the
result with reference to Congressional authority to regulate the mail and
interstate commerce.7 The expansive federal power the Court recognized in
Debs seemed at odds with the more limited view of federal commerce clause
3 Abrams v. United States, 250 U.S. 616 (1919); Schenck v. United States, 249 U.S. 47 (1919).
4 The Lottery Cases, 188 U.S. 321 (1903); Hoke v. United States, 227 U.S. 308 (1913).
5 Caminetti v. United States, 242 U.S. 470 (1917).
6 Civil Rights Cases, 109 U.S. 3 (1883). 7 In re Debs, 158 U.S. 564 (1895).
power it articulated that same year with respect to the Sherman Antitrust
Act, in United States v. E. C. Knight.8
Neither rhyme nor reason strung these rulings together, least of all police
power theory. In Adair v. United States (1908) the Court declared the Erdman
Act of 1898, which made it a federal offense for any employer in interstate
commerce to blacklist or fire employees who joined a union, an unconstitutional
infringement on state police powers.9 In that case, the Court once
again offered a narrow interpretation of Congressional authority to enact
criminal legislation based on the Commerce Clause. But in E. C. Knight
the Court declared the states’ police powers were “essentially exclusive,”
which suggested that the federal government had some jurisdiction in that
area. That same year, 1895, in In re Debs, the Court implicitly rejected the theory
of Hudson and Goodwin that the federal courts were courts of limited jurisdiction,
holding to the contrary that while the government of the United
States was a government of enumerated powers, it had full sovereignty
within those enumerated powers and could, therefore, use military force,
the equitable powers of the federal courts, or the process of criminal contempt
to protect its sovereignty. The Court’s insistence in Debs, that its
decision in no way replaced state court criminal jurisdiction, could not
outweigh the importance of its ruling, since the result was to give federal
courts the power to overrule the decisions of state authorities. Government
by injunction, which greatly expanded the powers of the federal courts,
continued until passage of the Norris-LaGuardia Act of 1932.
While many of its rulings in the area of criminal law were ambiguous
and contradictory, the Supreme Court consistently refused to consider the
possibility that the provisions of the Bill of Rights protected defendants
in state court proceedings. In Barron v. Baltimore (1833) the Court had
held that the Bill of Rights did not apply against the states, thus guaranteeing
that states could determine what procedural protections defendants
would be granted in criminal trials.10 Invited, fifty years later in Hurtado
v. California (1884), to reconsider that ruling in light of the intervening
ratification of the Fourteenth Amendment, the Supreme Court once again
denied that the Bill of Rights set any limits on state law enforcement officers
or state court criminal trials.11 The Court reiterated that point twenty
years later, in Twining v. New Jersey (1908), where it held that the right
against self-incrimination set out in the Fifth Amendment did not apply
8 United States v. E. C. Knight Co., 156 U.S. 1 (1895).
9 Adair v. United States, 208 U.S. 161 (1908).
10 Barron v. Baltimore, 32 U.S. 243 (1833).
11 Hurtado v. California, 110 U.S. 516 (1884).
in state court proceedings.12 Although it modified that position modestly
in the 1930s, it was not until the middle of the twentieth century that the
Court agreed to extend the protections of the Bill of Rights to state court
criminal proceedings.
The result, throughout the nineteenth century and well into the twentieth,
was a national government whose ambivalent exercise of power, whether positive (enacting and policing federal criminal laws) or negative (overseeing state court criminal processes), kept it from
achieving the authority needed to establish a modern state. In the antebellum
era, Tocqueville had suggested that the resulting localism created
a distinctive American state that was a particular strength; writing at the
end of the nineteenth century in his dissenting opinion in Hurtado, the first
Justice Harlan was not so sure. Objecting to the Supreme Court’s ruling
that the Fifth Amendment did not apply to state court trials, he outlined
both the benefit of the Fifth Amendment and the result of the failure to
apply it to state proceedings: in “the secrecy of investigations by grand
juries, the weak and the helpless – proscribed, perhaps, because of their
race, or pursued by an unreasoning public clamor – have found, and will
continue to find, security against official oppression, the cruelty of the mobs,
the machinations of falsehood, and the malevolence of private persons who
would use the machinery of the law to bring ruin upon their personal enemies.”
While Harlan’s faith in the protections provided by the jury system
was not entirely warranted, the history of the long nineteenth century bears
out his perception that the vacuum that existed at the national level gave
the United States a criminal justice system in which there was all too often
neither state nor law.
II. FIRST FAILURES OF THE LOCAL STATE: POLICING SOUTH AND NORTH
Policing predates both capitalist economies and the modern state; law
enforcement in a variety of forms existed in pre- and early modern Europe.
This notwithstanding, studies of the state frequently tie the development
of exclusive systems of police to the rise of the modern state. The history
of policing in the United States raises several questions about that association.
The sporadic efforts on the part of the national government to create
police forces never established a significant police presence, and while local
governments established a variety of policing agencies from 1780 to 1920,
their authority was frequently checked and challenged by popular justice
in a variety of forms.
12 Twining v. New Jersey, 211 U.S. 78 (1908).
During the antebellum era, ironically, the strongest police forces arose
in that part of the country most often considered anti-state. The English
colonists to North America had brought with them traditional forms of
policing – sheriff, constable, and night watch (a volunteer peacekeeping
company drawn from the citizenry) – when they crossed the Atlantic. Before
the American Revolution, those popularly based institutions provided the
extent of policing for most of the colonies; the exception was those colonies
in which the desire to control runaways and suppress slave insurrections
prompted the creation of additional forces. The colonial government of
South Carolina was one of the first to establish a special slave patrol, doing
so in 1693. Other slaveholding colonies followed suit over the next century.
Patrollers’ powers over blacks, free and enslaved, were considerable, but
not unlimited. In South Carolina, for example, patrols could go into the
dwellings of blacks (and white servants), seize contraband items, and arrest
slaves, free blacks, or white servants. But they could not go onto white-owned
property without the permission of the owner, and they could be,
and often were, thwarted in their efforts to enforce pass laws and other
restrictions on slaves by masters who refused to follow the laws. Notwithstanding
the patrols’ limitations, and perhaps because of them, toward the
end of the antebellum era some elite whites in South Carolina argued that
the jurisdiction of slave patrols should expand to include white poachers,
trespassers, and vagabonds as well.13
By that point, fear of slave insurrection had already led Charleston, South
Carolina, along with other Southern cities, to create armed, semi-military
police forces. Charleston’s police force, which had the power to arrest blacks
and whites, was established as early as 1783; New Orleans established its
own police department, modeled on Napoleon’s gendarmerie, in 1805. There
were some differences between these two models. Members of the New
Orleans force were uniformed and armed (at first with muskets, after 1809
with sabers) and served mostly at night, though some members were on
reserve during the day. After 1836 the police in New Orleans moved away
from that military model; its officers no longer wore uniforms or carried any
weapons other than staves. By contrast, South Carolina consistently relied on
the military model of policing. From 1806 on, Charleston had an appointed,
uniformed guard whose members were paid a salary and armed with muskets
and bayonets. Until 1821 members of this force patrolled the city streets in
platoons of twenty to thirty men; in the aftermath of the abortive Denmark
Vesey uprising, Charleston’s patrol stopped wearing uniforms. While some
accounts indicate Charleston’s police squads continued to patrol the streets
at night, at least some guardsmen began to work assigned beats. The powers
of Charleston’s police expanded throughout the antebellum period: a horse
guard was added in 1826 and a detective force in 1846. By 1856 the
department had established a picture gallery of known criminals, as well as
a classification system for recording arrests and convictions (to put this in
perspective, Boston created its detective force the same year as Charleston,
but New York had no detective squad until 1857 and did not organize a
rogue’s gallery until the end of the nineteenth century).
13 Minutes of the Beech Island (S.C.) Agricultural Club, 3 December 1859, pp. 130–131, South Caroliniana Library, University of South Carolina, Columbia, South Carolina.
With more than 100 men in the department at the start of the Civil
War, Charleston’s police force was by far the largest in South Carolina.
But by 1860 cities across the state, from Aiken to Yorkville, had active
police forces. South Carolina’s police, in turn, served as models for police
forces in the major cities in Georgia, Alabama, and Virginia. Unique among
antebellum Southern cities, New Orleans had several black officers on its
police force from 1806 until 1830, but then had no African Americans
on the force until 1867, when Reconstruction altered the balance of racial
power in the city. During Reconstruction several other Southern cities,
including Wilmington, North Carolina, modestly integrated their forces,
and others experienced significant integration. By 1876 half the officers on
Charleston’s force were black. Reconstruction’s end put a stop to that experiment,
along with so many others, though there were still African American
officers on the Tampa, Florida, police force in the 1880s; on the Wilmington,
North Carolina, force as late as 1898; and in the Tulsa, Oklahoma,
police department in 1917.
But the continued presence of black officers represented the remnants
of the earlier pattern, rather than an established hiring practice. After its
only black officer resigned, Tampa hired no black officers until 1922. Nor
were the numbers of black officers ever particularly significant on police
forces North or South, even when African Americans managed to obtain
positions. In 1906 the police force in Atlanta had black officers, but they
were confined to patrolling the black parts of the city; notwithstanding
its thriving African American population, Tulsa’s police force had just two
black officers in 1919. The situation was no better above the Mason-Dixon
line. Chicago hired its first African American police officer in 1873, but
forty years later, when blacks represented 6 percent of the city’s labor pool,
they made up only 2 percent of its police force. And women, of course, fared
far worse. North and South, city police departments had women serving as
jail matrons before the Civil War, but the first policewoman in the country
was not appointed until 1905.
While few in the South questioned the value of having squads of police
to control the slave population, many opposed the creation of police forces
in the North out of fear they posed too great a risk of increasing the size and
power of local governments. Police were a problem precisely because they
seemed a step toward the creation of a state. Philadelphia briefly established
a day watch in 1833, but had no permanent force until the 1840s; Boston
had established one only a few years earlier, in 1838. New York continued
to have elected constables, complemented by appointed day marshals and
a large force of night watchmen, throughout the 1830s. A commission
appointed by the mayor in 1836 recommended that New York create a
police force modeled on Sir Robert Peel’s reforms establishing the London
Metropolitan Police (1829), but its suggestion was ignored. There was
a second effort to establish a police force in New York in 1844, when the
state legislature recommended that the City create a “Day and Night Police”
modeled on London’s system and employing 800 men. The city government
refused to go that far, but the mayor did appoint a uniformed police force of
200 men. That force lasted only as long as the mayor’s term; when a new
Democratic administration took control of the city the next
year, it implemented the state legislature’s recommendation and created a
department of 800 men. In contrast to the semi-military organization of the
Southern police forces, officers in New York’s newly created department, like
their counterparts in Philadelphia and Boston, wore no uniforms and carried
no weapons, though in New York each was given a special badge. It was
only toward the end of the antebellum era that these Northern departments
began to embrace a more militaristic model. In New York, members of
the force were given uniforms in 1855 and officially allowed to carry guns
in 1857; Philadelphia’s officers had no uniforms until 1860, and Chicago’s
officers had to wait until 1863 for theirs. For the same reason, these cities
also resisted creating centralized commands for their departments before
1860.
Just as a desire to suppress slave uprisings drove Southern cities to establish
police departments, fear of riots and mobs finally led to their creation
in the North. Boston’s police department was created a few years after a
riot that destroyed a Catholic girls’ school; New York’s efforts to establish
a department began in earnest after three violent riots in 1843. Chicago
established a police force after the Lager Beer Riot in 1855. While the
creation of the police forces in the North had been limited by the fear that
they might become a standing army, once created the forces in New York,
Boston, Philadelphia, Chicago, and other major cities were untrained and
subject to few legal restrictions. As a result, their successes were predictably
limited, and their activities created disorder as often as they restrained it.
In theory officers had authority to arrest anyone, but police typically were
deployed against the lower classes and immigrant populations, their roles
limited to breaking up fights and suppressing violence (especially riots).
They were often unable to perform either role; throughout the antebellum
period, city governments North and South often had to call in the militia,
and several cities went further, forced to turn to private “volunteer militias”
to supplement their police forces. Even that was not always enough.
In antebellum Chicago and other cities property owners often hired private
detective agencies to locate stolen property, and businesses hired private
firms, such as the privately run Merchant Police, to patrol their premises.
Sometimes, popular frustration with the failings of the police went further,
prompting revolts against local government. In 1851, the Vigilance
Committee took over San Francisco’s government in response to its failures
to maintain order. A few years later, in 1858, a Vigilance Committee
protesting a similar problem in New Orleans seized control of both the
state arsenal in that city and police headquarters. Unable to subdue the
group, the mayor of the city declared its members a special police force.
Several violent altercations followed, causing the mayor to be impeached,
but the Committee disbanded when its party lost the next election. For
others, self-help was a more straightforward, personal matter. Throughout
the antebellum period men in New Orleans, New York, Philadelphia, and
Chicago, as well as other cities, carried weapons for their own protection.
Among elites, the weapon of choice was a sword cane until the advent of
the revolver offered a more attractive option; men in the working class
relied on knives and bare fists.
Efforts to strengthen the authority of the police and create a greater
distance between governed and government increased after the Civil War.
Local governments, particularly in the North, began to professionalize their
departments in response to complaints that officers took bribes, displayed
political or ethnic favoritism, and turned a blind eye to crime. Those complaints
led most Northern cities to complete the move toward the military
model of policing that had been favored in Southern cities before the Civil
War, reorganizing their police departments under a centralized chain of
command. Those developments did little to alter the basic perception that
the police were corrupt and incapable of preventing crime or apprehending
criminals, nor did they put an end to political influence on the police.
Although centralization was intended to remove the police from political
control, that aim was undermined by the politicization of appointments to
the central command. Other reform attempts, begun in New Orleans in
the 1850s, to make merit the keystone of hiring and promotion decisions
in police departments, were consistently blocked. It was not until the very
end of the nineteenth century that most cities made police work part of the
civil service and provided their officers with training. In 1888, Cincinnati
created a police academy; New York implemented some informal training
processes by the 1890s, but delayed creation of its own academy until 1909.
Chicago established its training academy a year later.
Under such circumstances, as one might expect, popular forces continued
to intersect with public policing, with frequently violent results. During
South Carolina’s Ellenton Riots in 1876, the local sheriff called in an all-white
posse to help capture blacks suspected of aiding a wanted rapist.
When the posse turned mob, it set off a weeklong race war. In 1888, a mob
in Forest, Illinois, helped capture a young black man suspected of killing a
white girl in Chicago and nearly lynched him in the process. Some suspects
were not so lucky. In 1880, a mob in Northampton County, Pennsylvania,
seized Edward Snyder, suspected of killing Jacob and Alice Geogle, and
lynched him notwithstanding the protests of the local law enforcement officers.
Police also were accused of doing nothing during moments of heightened
tension. During the race riots in Chicago in 1919 and Tulsa in 1921,
for example, the police were accused of standing by as white mobs attacked
blacks and damaged their property. During the labor strikes of the era, some
charged the police with attacking striking workers or with permitting them
to be attacked, while others accused the police of aiding and abetting
the strikers.
III. THE ONGOING ROLE OF EXTRA-LEGAL JUSTICE
As all this suggests, well into the twentieth century different communities
in the United States continued to use a variety of informal means to enforce
norms. Those extra-legal processes, in turn, sometimes reinforced, but as
often interfered with the formal processes of criminal justice, preventing
local governments and police forces from claiming exclusive control over
discipline or establishing a monopoly on violence.
Two forms of extra-legal justice, honor culture and lynch mobs, provide
the bookends for the period. At the start of the antebellum era, honor
culture’s emphasis on personal response to assaults on reputation sanctioned
the resort to violent means – duels, canings, or fights with fists and knives –
by those who wished to punish everything from adultery to slander. But
while reprisal was the preferred method of defending honor, violence, lethal
or otherwise, was not the only means available. Notwithstanding that some
studies assert that going to law was inconsistent with the defense of honor,
Benjamin Perry, a lawyer who practiced in antebellum South Carolina,
brought several lawsuits that he characterized as actions by young women
brought in defense of their honor. Honor culture impinged on formal law
in other ways as well. While some affairs of honor, including the duel in
which Perry shot and killed his opponent, never resulted in prosecution,
participants in other rencontres were arrested and tried. In many of these
instances, the code of honor trumped, or at the very least modulated, the rule
of law. In South Carolina in 1845, Charles Price shot Benjamin Jones because
Jones had called his (Price’s) daughter a liar. A grand jury promptly indicted
Price for murder, but at trial the petit jury as quickly rejected that charge,
determining that Price was guilty of nothing more than manslaughter. An
equally sympathetic judge then sentenced Price to just a year in jail.
Most histories associate honor with the South, but the culture of honor
extended above the Mason-Dixon Line. In the 1840s and 1850s, merchants
in St. Louis who had migrated to that city from New England held duels
on a sandbar in the Mississippi known as “Bloody Island.” In Philadelphia,
young men of substance crept away to Delaware to kill one another in
duels until well into the 1840s. Throughout the antebellum period, men
from the middling and lower classes in cities like Philadelphia, New York,
and Chicago defended their honor with knives and fists, and juries in the
North were as willing as those in the South to excuse killings committed
in the name of honor, either by acquitting outright or reducing the charges
against the defendants. Young men North and South continued to fight and
sometimes kill one another in the name of honor after the Civil War, and
juries still treated them leniently when they were brought to trial. In 1887,
a jury in Chicago acquitted Eugene Doherty, who was accused of killing
Nicholas Jones in a fight outside a bar. In the course of reaching its verdict,
the jury ignored the evidence that Doherty had been arrested at the scene
minutes after the shooting, revolver in hand.
Even so, the close of the Civil War marked the beginning of the end
of honor’s influence as a form of extra-legal justice. But as honor suffered
eclipse, other forms of extra-legal justice prevailed. From the evangelical
backcountry of the antebellum South, to the predominantly Catholic mill
towns of late nineteenth-century Pennsylvania, churches policed offenses
committed by their congregants, judging and punishing a variety of wrongs
including intemperance, adultery, and gambling. These punishments were
seldom violent; shaming and shunning were the favored methods of reprimanding
wrongdoers in most churches, although practice and participants
varied from congregation to congregation. In some, women could be judged
but were never permitted any sort of adjudicatory role; in others women
judged and could be judged. In another informal process of investigation,
adjudication, and punishment relating to morals offenses, women exercised
greater authority. Sometimes their investigations of wrongdoing involved
other women; other times women entered and enforced moral judgments
against men. In either case, shame and social ostracism were the preferred
means of punishing wrongdoers. These everyday courts of public opinion
crossed class and regional bounds, functioning in communities of working-class
women in antebellum New York and among elite white women in
antebellum South Carolina. Similar processes were at work on shop floors
among male laborers as well.
Men, aided by some women, practiced another form of community judgment
that had a far more violent element. In antebellum New York, several
of the riots that proved so difficult to control arose when mobs of working-class
men attempted to police their own communities by driving out brothels
and other establishments they considered immoral. Mob action was not
confined to the working class. The San Francisco vigilantes of the 1850s and
the New Orleans committee of roughly the same era were middle-class men
who claimed they were enforcing community norms when they took law
into their own hands. Once again, these informal practices continued well
after the Civil War. In Chicago in the 1870s a mob in one neighborhood
burned down a factory that they felt violated city laws and harmed their
community; in 1887 women from the town of Ellsworth, Illinois, raided a
local saloon. During the 1880s, mobs of men executed rough justice from
South Carolina and Tennessee in the South to Indiana and Wisconsin in
the North. Sometimes they formed to deal with a particular problem. In
1886, for example, a mob in Irving Park, a Chicago neighborhood, drove
a man suspected of taking indecent liberties with children out of the city.
Other times, they policed general problems; in the 1880s mobs formed and
beat men who whipped or abused their wives in both Indiana and South
Carolina.
Informal vigilante efforts had organized counterparts in the Law and
Order Leagues and other citizens associations that formed in the 1870s
and 1880s. In Chicago in the 1880s, members of the Citizens Association
monitored theaters for immoral shows and pursued liquor law violations.
Officially, members of the organization tried to work through formal channels,
relying on police officers to make arrests, but they were perfectly
willing to make citizen’s arrests when they felt law enforcement officers
were unwilling or unavailable. In 1901 in New York City, Judge William
Travers Jerome led members of the City Vigilance League on raids of brothels
and gambling dens, arguing that citizens had to enforce the laws because
the police had failed to act.
New York’s experience with vigilante justice suggests how often the
efforts of law-and-order groups targeted vulnerable groups. From 1870
through World War I, New York’s Anti-Saloon League shut down working-class
bars; in roughly that same period the Society for the Suppression of
Vice worked to suppress stage shows (and literature) its members deemed
obscene, while the Committee of Fourteen, another private anti-vice society,
focused on cabarets and saloons, venues particularly noted for racial mixing
or homosexual clientele.
Some law-and-order groups tried to advocate for the excluded; a Committee
of Public Safety, formed in New Orleans in 1881, monitored the
arrests made by the police department, complaining about police brutality,
particularly against blacks. Other times, minority groups took the law into
their own hands as a form of self-help. In the aftermath of Chicago’s race riot
of 1919, blacks claimed that they had acted extra-legally to protect their
lives and property because they could not trust the police to act. When the
dust settled, it was clear that, throughout the riot, Chicago’s police had been
deployed to protect white property and white lives; not until the National
Guard was brought in, at the tail end of the riot, had blacks received
any official protection. Perceived failures of law in late nineteenth-century
Chicago also led small manufacturing concerns and labor organizations to
establish their own informal rules, creating systems by which they policed
one another. Violations discovered by their informal courts were punished
through strikes or violence. Both the law-and-order leagues and their less
formal counterparts justified their actions on the ground that laws were
being ignored, which easily became the argument that the legal system was
itself unjust, or lawless.
That, of course, became the argument that Ben Tillman and other white
supremacists in the South used to justify the creation of lynch mobs. In part
because other forms of extra-legal justice conditioned both governed and
government to mob violence, from the 1880s to the 1930s little was done
to stop lynching. In that period, lynch mobs killed roughly 3,700 people,
male and female, 80 percent of them black. As was the case with other forms
of extra-legal justice, no region had a monopoly on this violence. While
most of the reported lynchings occurred in the South, in the last half of the
nineteenth century mobs killed men and women in a variety of Northern
states, among them Wisconsin, Pennsylvania, and Illinois.
IV. THE POPULAR ROLE IN FELONY COURTS AND EFFORTS
TO CHECK ITS INFLUENCE
In the first half of the nineteenth century, the forces of popular justice
spilled out of the streets and into the felony courts, brought in most often
by the juries that played roles at one stage of the proceedings or another.
Throughout the antebellum era, many counties North and South followed
English practice and relied on elected coroners to investigate unexpected
deaths, with juries composed of “bystanders” selected from the neighborhood
of the death. Toward the end of the century, these juries and the
coroners who called them came under attack for their lack of professionalism.
In 1877, Massachusetts replaced its coroners with medical examiners.
But while newspapers in other parts of the country denounced coroners
and their juries, pressing for their abolition throughout the 1880s, most
jurisdictions did not follow Massachusetts’ lead. New York had a coroner
until 1915, and some counties in Wisconsin continued to rely on coroners
until World War II.
Coroner’s juries represented the first point of popular involvement in the
legal system, and their role could be significant. They not only deliberated
over the causes of unexpected death but often offered a preliminary determination
of whether any crime had occurred. Coroner’s juries could, and
sometimes did, prompt a sheriff to initiate actions with a determination
that a suspicious death needed to be the subject of prosecution, just as they
could, and often did, forestall legal actions with a finding that nothing criminal
had occurred. On more than one occasion their determinations were
suspect; in 1907 a coroner’s jury in Philadelphia ruled that a man found
drowned in the Delaware River had committed suicide, notwithstanding
the fact that he had been dragged from the water with his hands bound
behind his back.
Because coroner’s juries had to be composed of people from the scene of the
crime, the juries were a popular institution, at least to the extent that they
involved all classes of white men. (Slaves, blacks, and women, along with
other marginalized groups, were rarely if ever members of coroner’s juries,
though they could provide testimony at an inquest.) In contrast, grand
juries usually were composed of a community’s elite. Notwithstanding
that demographic difference, members of the grand jury were as willing as
coroner’s jurors to apply their own standards in determining what crimes
should be prosecuted. Grand jury records from Philadelphia in 1839–59
show that the jury indicted in less than half the murder cases brought
before it. The rate of indictments was higher in antebellum South Carolina,
but even there grand juries entered indictments in only 63 percent of the
cases they heard. Their unreliable nature brought grand juries under attack
toward the end of the century; in the 1870s California began to substitute
informations for indictments. No grand jury was ever called in cases that
proceeded under an information. Instead there was a preliminary hearing
before a magistrate, who bound a defendant over for trial if he felt there
was evidence enough to proceed. This attack on jury power was relatively
successful; by the end of the nineteenth century the federal government and
many of the other states had borrowed the system from California and used
it to sidestep their grand juries.
Even as the use of informations checked one source of popular influence
on the prosecution of felony cases, the members of petit juries continued to
play an important role in criminal trials. Andrew Hamilton’s argument for
the acquittal of John Peter Zenger, which may have been jury nullification’s
most famous moment, occurred in the eighteenth century, but the history
of the practice extended into the twentieth. Such exercises of popular power
were not without challenge. Shortly after the American Revolution, many
state court systems tried to limit the jury’s power, declaring that jurors
were limited to finding facts while judges had the sole power to determine
the laws, but these declarations did not have much impact. Juries in the
antebellum South were notorious for deciding cases in accord with local
values rather than the rule of law, with the result that in states like South
Carolina conviction rates for many crimes, including murder, were less than
50 percent. But once again the phenomenon was not limited to the South.
In antebellum New York City, less than a third of all men (defendants in
murder cases were almost exclusively male) brought to trial for murder were
convicted. In Philadelphia, in 1839–46, the grand jury indicted sixty-eight
people for murder, but only 37 percent of those indicted were convicted
once they were brought to trial. Although the numbers for that city changed
after the Civil War – Philadelphia had a conviction rate for murder of
63 percent in the period 1895–1901 – the figures reflect the influence of
plea agreements, rather than a shift in juror practice. Of the people convicted
of murder in that city in 1895–1901, only thirty-four suffered that fate as a
result of a jury verdict, while fifty-eight pleaded guilty. And in other parts
of the country, conviction rates remained low after the Civil War. In late
nineteenth-century Chicago the conviction rate for people brought to trial
for murder was roughly 40 percent.
A number of reforms over the course of the nineteenth century sought to
deal with the petit jury’s power at trial; some were designed to expand that
power, others to restrict its exercise. One early change, which took effect in
the 1820s, increased the ability of jurors to convict by providing that jurors
need only find proof of guilt beyond a reasonable doubt. While
this set the burden of proof at a standard more stringent than
the one applied in civil cases, the standard was lower than the near-certainty
test that defense attorneys called for in the early national period. Another
significant shift in jurors’ powers came in the antebellum era, when many
states, including New York, Tennessee, and Illinois, passed laws that gave
juries the power to sentence as well as determine guilt.
Other, later reforms had an impact on the evidence that petit juries
could hear. Before the Civil War, state courts typically followed English
law, limiting defendants’ ability to testify. Many restricted the defendants’
right to testify under oath; some went further. As late as 1849, criminal
defendants in South Carolina could make the final argument to the jury
only if they presented no evidence on their own behalf. In 1867, Maine gave
criminal defendants the right to testify under oath, and this innovation was
quickly adopted in other states. Another change, made at roughly the same
time, imposed restrictions on judges’ ability to comment on the evidence.
A statute in Massachusetts barred judicial commentary in 1860; Mississippi
limited judges to stating the law even earlier, in 1857.
In Chicago, one consistent influence on the low conviction rate was an
Illinois statute that provided that jurors could substitute their own view
of the law for the instructions given to them by the judge. The practice
was so well established that jurors frequently received an instruction to this
effect, most famously at the trial after the Haymarket Bombing in 1887.
Jury nullification remained good law in Illinois even after the U.S. Supreme
Court denounced the practice in Sparf and Hansen v. United States (1895).14
In fact, the Illinois Supreme Court did not itself outlaw nullification until
1931.15 But while Illinois and Maryland (where a provision in the state
constitution permitted jurors to nullify16) were unusual in the degree to
which they formally recognized that juries had the right to nullify, legal
commentators from Arthur Train to Roscoe Pound complained that juries
exercised that power informally through World War I.
Yet the evidence of the increased rate of plea bargains in late nineteenth-century
Philadelphia reveals one force that checked the petit jury’s power
in felony courts. And that check on jury power was significant. In 1900
three out of four felony convictions in the New York county criminal courts
resulted from plea agreements. Within a few decades the numbers in other
cities were at least as dramatic. A study in 1928 determined that in 1920s
Chicago, 85 percent of all felony convictions resulted from a plea, as did
78 percent of felony convictions in Detroit, 76 percent in Denver, 90 percent
in Minneapolis, 81 percent in Los Angeles, 84 percent in St. Louis,
and 74 percent in Pittsburgh.17 That shift had taken most of the nineteenth
century to occur; Massachusetts courts had begun to take pleas in
cases of regulatory crime (liquor offenses, for example) in 1808, and in
1845 a committee appointed by the Massachusetts House of Representatives
endorsed plea agreements as a reasonable exercise of prosecutorial
discretion. But plea bargaining was not quickly extended to cases involving
other felonies. The first plea agreement in a case involving murder was
not entered until 1848, and throughout the 1850s only 17 percent of all
murder cases in Massachusetts were pleaded out. The trend changed in
the decades after the Civil War; at the end of the 1890s 61 percent of all
murder cases in Massachusetts were resolved with pleas. While the effect
of the turn to plea agreements was to limit the power of the criminal court
jury, the rise of plea bargaining was a result of indirect popular influence on
courts. In Massachusetts, which had an appointed judiciary throughout
the century, judges resisted plea bargaining until caseload pressure forced
them to accept the practice at the end of the century. In contrast, in states
where judges were elected, like Georgia (where judges controlled sentencing)
and Indiana (where jurors sentenced), plea bargaining took hold in
the antebellum era. In those states judges apparently used plea bargaining
to control their caseloads and demonstrate their competence to the
electorate.
14 156 U.S. 51 (1895).
15 Illinois v. Bruner, 343 Ill. 146 (1931).
16 Maryland Constitution, article 10, section 5.
17 Raymond Moley, “The Vanishing Jury,” Southern California Law Review 2 (1928), 97.
Other reforms of the century were intended to increase the authority of the
government in felony trials. To that end, by 1820 most states had created the
office of public prosecutor, and in the antebellum era many states tried to use
those prosecutors to consolidate their authority over criminal prosecutions
by eliminating the old practice of private prosecution of crimes. But those
efforts were not entirely successful. Governments did succeed in eliminating
prosecutions initiated and often presented by private people, rather than by
government lawyers, a practice that had allowed private people to use the
courts for personal revenge. But they were unable, or unwilling, to bring
to an end a second type of private prosecution, in which private attorneys
were hired to assist state-supported prosecutors in presenting the case; that
practice continued well into the twentieth century, subverting the claim
that criminal prosecutions were undertaken on behalf of the state rather
than for private revenge. The selective nature of this assault on private
prosecution had a decided class aspect. While the first approach opened the
courthouse door to the poor, letting them bring claims (even, of course,
frivolous ones) against others at minimal expense, the second gave special
advantages to the rich, who could hire the best lawyers to assist the state’s
attorneys.
The inequalities of criminal justice were more marked on the other side
of the case. Wealthy defendants throughout the century went to trial with
the best representation money could buy, but in most states criminal defendants
charged with felonies were sorely pressed to get representation at all.
As early as 1780, Massachusetts courts required that attorneys be appointed
for indigent defendants charged with capital crimes, and by the end of the
nineteenth century, defendants in New York and California had a right to
free counsel in all felony cases. Toward the end of the century, courts in
several jurisdictions, such as Chicago, asked attorneys to volunteer to represent
indigents in capital cases, but in the same period courts in Florida
refused to recognize that criminal defendants had a right to counsel. Concerted
efforts to provide attorneys for indigent defendants did not begin
until right before World War I. In 1914, Los Angeles became the first city
in the country to create an office of public defenders. New York created a
voluntary defenders organization three years later, but many jurisdictions
waited until the late 1920s and early 1930s to provide for defendants who
could not afford representation.
The rule of law often had little impact on felony trials, and appellate
courts did little to remedy that problem. By 1840 most states permitted
appeals from criminal convictions, although Louisiana did not do so until
1843. But while the right existed, the privilege was exercised rarely because
few defendants could afford it. In Wisconsin, the state Supreme Court heard
27,000 appeals in the period from 1839 to 1959, but of those only 1,400
were appeals from criminal cases, and in other states appeals remained a
relatively unimportant part of the criminal process through World War I.
More popular, in both senses of the term, was the pardon, but for most of the
period that was a decision left to the sole discretion of the elected governor,
which meant it was a process tempered by political reality far more than by
mercy or law.
V. GOVERNED WITHOUT GOVERNMENT: CRIMINAL
LAW IN THE PETTY COURTS
The nineteenth-century criminal justice system also included petty courts,
which heard the minor criminal cases, misdemeanors, and quasi-criminal
cases and offered a different perspective on the extent of the power of the
local state. In the colonial era these courts were often sites of neighborhood
justice, run by justices of the peace who often had no legal training or
experience and received no regular salary, instead collecting their pay in
fees. Through the first half of the nineteenth century, these petty courts
usually heard cases involving people from the surrounding communities,
and the justices of the peace often ruled based on their personal knowledge
of the parties before them, rather than any legal principle. In some petty
courts, in particular those in Philadelphia, informality was reinforced by the
standard practice of prosecution by private people. Without the requirement
of lawyers, even people from the poorest neighborhoods felt free to go to the
so-called alderman’s court to get justice, recourse, or revenge. But to view all
this as evidence that the petty courts were a mainstay of the localized state,
where the people expressed a sovereign will, is to confound process with
principle. By the middle of the nineteenth century, the Market Revolution
created impersonal worlds full of strangers in place of the communities that
had sustained these courts in the earlier period. Organized police forces put
additional pressure on the petty courts, as arrests swamped them with cases.
Under the pressure of increased use, judges subjected more defendants to
summary punishment and were unable either to channel or direct popular
notions of justice. Even as they failed to serve as instruments of the state, the
petty courts also ceased to provide much in the way of sovereign power to the
people who appeared before them. Contemporaries complained that those
who brought claims to these courts, or appeared before them, saw them
as nothing more than an arena for disputation, on a par with the dueling
ground, the barroom floor, or the street corner. In the antebellum era, the
petty courts neither offered the certainty of the rule of law nor preempted the
resort to alternative (and even more violent) means of settling differences.
The situation only got worse after the Civil War. By 1880, petty courts
had become assembly lines of punishment. Seventy percent of all the country’s
jailed inmates by 1910 were serving time for minor offenses, such
as drunkenness, vagrancy, or disorderly conduct, and most of them had
been sentenced by one of these petty courts. Process, from Pittsburgh to
California, became increasingly summary; few defendants received a hearing
that lasted more than a minute or two. Although the judges often had
a legal background, there were few, if any, lawyers in these courts, and less
law. Most defendants were sentenced to time served or fined a few dollars
(which often was more than they could afford and resulted in further jail
time as they worked off the fine), though justice frequently depended on
who the defendant was and where the crime occurred. In Chicago from
1890 to 1925 the vagrancy laws were used against tramps from out of
town. In Pittsburgh in that same period, young African American men
from the community were imprisoned under the tramp laws in numbers far
out of proportion to their numbers in the population, whereas whites were
underrepresented. In Buffalo in the early 1890s, vagrancy laws were used
to break strikes, which meant most of the men convicted under those laws
were white laborers.
Some efforts were made to correct the problems of overcrowded courts.
Faced with considerable hostility to its disorganized and lawless police
courts, in 1906 Chicago collapsed them all into a centralized municipal
court system. This new court heard petty crimes and handled preliminary
hearings, as had the police courts before it. The difference lay in the way
the new system handled those cases. Specialized courts were set up to hear
particular matters; Morals Court, for example, heard all cases involving
prostitution. Initially specialization reduced the number of cases before the
court, which permitted the judges to devote more time and expertise to
their cases. For a brief period after these reforms, the new courts were a
place where working-class and poor men and women brought private prosecutions.
But popular use of the new courts came with a cost. Staffed with
a phalanx of social workers and social scientists trained in a variety of
approaches (including, at least in the period around World War I, eugenics)
who supported judges with the power to sentence people to indefinite probation,
the municipal court system was no longer a place for parties to air
out neighborhood problems and then go home. Women who filed claims
against their husbands, parents who used the court to control their children,
and any other defendant brought before the court in some other way
found that it became a permanent part of their lives. Long after the initial
cases had come to an end, judges, probation officers, and the court’s support
staff continued to track the parties. Chicago’s Juvenile Court, created in
1899, had a similar impact on the lives of its charges and their families.
Like the Municipal Court, the Juvenile Court favored ad hoc, personalized
judgments; social science ideals, not law, influenced the court’s decisions.
For all that they permitted extended intrusions into the lives of the
people who appeared before them, the new municipal court systems were
never creatures of an omnipresent state. Government underfunding meant
that in its first decades, private individuals and institutions financed much
of the work of Chicago’s Juvenile Court and influenced its direction in
the process. The Chicago Municipal Court was also subject to a variety of
private influences, as reformers and social scientists played a role in shaping
its direction. Needless to say, reformers used the two courts as sites on
which to pitch competing ideas. The result was that the government spoke
not with a single voice, but with many voices. Just as overburdened
dockets limited the police courts as a source of state authority, so the competing
and conflicting theories drifting out of the Juvenile and Municipal Courts
weakened the state’s ability to use either of those courts as a source of
authority.
VI. SUBVERTING THE SUBSTANTIVE LAW
Problems with the court systems were made all the more stark by the
endless efforts, throughout the nineteenth century, to reform the substantive
criminal law. Inspired by a variety of influences from the Enlightenment
desire to make law more rational to a republican demand that law become
more accessible to the public, in the early national period many states, most
of them in the North, began to make crime a matter of statutory rather than
common law. Pennsylvania began an extended effort to reform the criminal
law in 1794, with the passage of a statute that split common law murder
into two separate offenses. As other states followed its lead, many, often
bowing to public pressure, added new crimes to their books, criminalizing
behavior that had been frowned on, but legal before. Pennsylvania, which
had passed its original blue laws in the colonial era only to see them fall into
disuse in the 1740s, passed a law in 1779 that outlawed work and certain
kinds of diversions on Sunday. Charleston, South Carolina, passed a Sunday
closing law in 1801; toward the end of the antebellum era California passed
two Sunday closing laws, one in 1855 that outlawed noisy amusements and
a second in 1858 that closed stores and prohibited the sale of goods.
As time went on, other types of morals legislation joined the Sunday
closing laws. In the 1830s, states as far apart as Maine and Michigan passed
statutes prohibiting adultery, fornication, incest, and sodomy. That same
decade Illinois passed a law prohibiting the sale of playing cards, dice, and
billiard balls (as well as obscene materials), and temperance laws swept
New England in the 1850s. Typically, these laws were intended to increase
state control of behavior and were prompted by fears that urbanization
was exposing people, particularly young men and women, to corrupting
influences. To that end, enforcement often targeted particular groups; in
St. Louis during the 1840s, brothels were winked at, while prostitutes who
rolled their tricks were charged. Notwithstanding selective enforcement,
and often in fact because of it, many of these laws were subject to challenge,
formal and informal, throughout the century. In 1833, a Jewish merchant
from Columbia, South Carolina, prosecuted under a city ordinance that
prohibited the sale of liquor or confections on Sunday, argued that the law
deprived him of the religious freedom he was guaranteed by the state constitution.
The trial court upheld the law on prudential grounds, concluding
that custom and practice in the state declared Sunday to be the Sabbath
and that the presence of large numbers of free blacks and slaves on leave in
the city on Sunday necessitated laws that restricted temptation. A decade
later, the Supreme Court of South Carolina heard a challenge to a similar
law, this one brought by a Jewish merchant in Charleston who argued that
his constitutional rights to religious freedom were violated by a Sunday
closing law. Once again the court rejected that argument, on the ground
that the state’s police power gave it the authority to pass any law to punish
behavior that shocked the conscience of the community. The court added
that in South Carolina, conscience was Christian.
While Sunday closing laws and other morals legislation were typically
passed as a result of pressure from groups interested in enforcing a morality
based on Christian (usually Protestant) precepts, most state courts upheld
Sunday closing laws on prudential, rather than religious, grounds. In 1848,
the Pennsylvania Sunday closing law was upheld against a challenge by a
Seventh Day Adventist. In its ruling the state supreme court noted that
Sunday had become a traditional day of rest and tranquility and concluded
that the law merely reflected that custom. A Missouri court upheld a Sunday
closing law in the 1840s on similar grounds, noting that convention had
declared that Sunday should be a day of peace and quiet. But while courts
upheld Sunday closing laws, in practice they were dead letters in most places
by mid-century. Attempts from 1859–67 to enforce a law in Philadelphia
that prohibited the operation of horse cars on Sunday were unsuccessful; by
1870 New York’s ban on public transportation on Sunday was a nullity; and
popular defiance of California’s Sunday closing laws led that state’s supreme
court to strike the law down in the early 1880s.
Efforts to use criminal law to control morality continued after the Civil
War. Throughout the 1870s many states passed laws regulating obscenity,
often modeling their laws on the federal Comstock Laws. Some states also
criminalized the use of drugs or passed temperance legislation. Often these
laws reflected considerable lobbying by reform groups, many of them dominated
by women: the dispensary law that the South Carolina legislature
passed in 1894 followed a decade and a half of efforts by the Women’s
Christian Temperance Union (WCTU) and other local women’s groups.
Attempts, only some of them successful, were made to regulate sexuality
as well. In the 1860s and early 1870s, lawmakers in New York considered
passing laws that would permit prostitution in the city but require all
prostitutes to be licensed and subject to medical examinations. That effort
failed, but St. Louis succeeded in passing a licensing law for prostitutes in
1870, although it was rescinded in 1874. In response to shifts in medical
knowledge, as well as pressure from doctors who sought to increase their
professional authority by restricting the powers of midwives, the period
after the Civil War was marked by a series of laws that made it a crime to
perform abortions.
In that same period, fear that the young women who flocked to the
nation’s cities were inadequately protected against sexual predators led many
states to pass statutory rape laws and raise the age of consent. The fate of
those laws in the last decades of the century offered another example of how
laws could be subverted, demonstrating the continued weakness of the local
state. From Vermont to California, the reformers who pressed for passage
of statutory rape laws hoped to protect young women from predatory older
men, and in a few states, such as Vermont, those aims informed prosecutions
until well into the twentieth century. But in California, the law was under
attack from the first. Initially, arresting officers, judges, and prosecutors
undermined the law, choosing to protect men who had sex with minors
by refusing to arrest, prosecute, or convict them. After judges more
sympathetic to the law’s aims were put on the bench, their efforts to enforce
the law to protect vulnerable young women were complicated, and not
infrequently thwarted, by parents who used the laws to try to regain control
over their teenaged daughters. What began as a paternalistic effort to protect
vulnerable young women by targeting a class that seemed to expose them
to especial harm was transformed into an instrument to control the young
women instead.
The problem of popular resistance was not confined to morals legislation.
The Illinois Civil Rights Act of 1885 was intended to provide a state
law remedy to blacks barred from places of public accommodation. The act
had civil and criminal aspects, but by 1920 the combination of businesses
that refused to comply with the law and failures of both public and private
prosecution rendered both parts of the law a dead letter. Juries undermined
other laws simply by refusing to enforce them. Just as they
nullified when they refused to treat honor killing as murder, so too they
nullified when they refused to enforce laws creating criminal defenses, such
as insanity. The nineteenth century had seen the rise of the insanity defense,
as most jurisdictions in the United States adopted the M’Naughton Rule.
Yet while that law was intended to reinforce the concept of mens rea and
provide greater protections for defendants, its guarantees were mostly honored
in the breach. Arthur Train, a prosecutor in New York City at the
turn of the century, reported that jurors systematically refused to follow
the insanity defense, even in cases where the defendant was clearly insane.
Rather than enter a finding of insanity, jurors preferred to sentence insane
defendants whose killings did not seem outrageous to a number of years
in prison and to send other, equally insane defendants, whose offenses
seemed shocking, to death.
pattern, as popular opinion condemned insanity defenses as legalisms
designed to subvert justice.
VII. PROFITABLE PUNISHMENT
The same reform movement at the end of the eighteenth century that
resulted in the codification of substantive criminal law prompted reforms
of punishment. Reformers argued that punishment was the key to criminal
justice and that sentencing was a vital part of punishment. Particular
emphasis was placed on making punishment fit the crime, with the result
that many states sharply reduced the number of crimes they considered
capital. In 1790, Pennsylvania passed a law declaring that several felonies,
among them robbery and burglary, would no longer be capital offenses. Four
years later, as part of its redefinition of murder, Pennsylvania declared that
only first-degree murder was a capital crime. Over the next several decades,
Virginia and most other states joined this process, significantly reducing
the number of offenses they punished by death. By 1850 South Carolina
had reduced the number of capital crimes it recognized to 22, down from
165 in 1813.
In 1779, Thomas Jefferson had argued that to deter crimes, punishments
had to be both proportionate to the offense and of determinate length.
Progressive reformers at the end of the nineteenth century took the opposite
approach, arguing that indefinite sentences were best suited to deterring
crime and reforming those convicted. A focus on the difference in those
arguments obscures the more important historical point – regardless of what
the laws on the books required, for most of the nineteenth century a variety
of practices made indeterminate sentencing the norm. In Massachusetts,
as we have seen, the first plea bargain, in which a defendant exchanged a
guilty plea for a set sentence that was less than the possible sentence, was
entered in 1808. A defendant charged with a violation of the state liquor
license law pled guilty to one of four counts, in exchange for having the other
three counts dropped. He paid a fine and suffered no other punishment.
As that original outcome suggests, those who entered into plea agreements
might receive sentences that had little to do with the statutory
punishment for their underlying crime. But even defendants who went to
trial, and were sentenced in accord with statutory schemes, often served
different periods of time. Pardons were used to reduce prison time and
could be issued at the behest of a prison administrator, who might wish to
reward good behavior or simply ease the pressures on an overcrowded jail.
A related practice, the reduction of sentences for “good time” (good behavior),
put the power to reduce sentences directly into the hands of prison
administrators, though usually with some limitations as to the amount of
time that a sentence could be reduced. A related variation on this process,
parole, was a European invention that was adopted in U.S. prisons after
the Civil War. It again gave prison authorities the power to release some
inmates early, though in contrast to pardoned prisoners, or those whose
sentences were reduced for good behavior, parole was a conditional release.
Each of these practices helped to make even the most specific sentence
indeterminate, as did probation, which permitted convicted defendants to
serve no sentence so long as they maintained good behavior. The practice
was formally recognized in Massachusetts in 1836, but had antecedents in
a variety of other practices; some, like the peace bond that dated back to the
seventeenth century, were formally recognized by the courts, while others,
like the practice of failing to hear charges against certain defendants so
long as they behaved, had merely been informal processes. Supervision was
another form of probation that was initially applied to juvenile offenders
and then slowly transferred over to use with some adult prisoners.
The practice of indefinite sentencing was reinforced by the most significant
reform of punishment in the nineteenth century, the creation of the
penitentiary. During the Revolutionary Era, most states confined convicted
prisoners in rickety local jails, from which there were many escapes,
though some states had prisons that were more like dungeons, where prisoners
were manacled to the wall or floor of a communal cell. In 1790, the year
that Connecticut converted an abandoned copper mine into a dungeon-like
prison, Philadelphia remodeled its Walnut Street jail and sparked a major
change in imprisonment in the United States.
The idea behind the new Walnut Street prison was twofold: prisoners
who previously had been assigned to do public works on the streets of
Philadelphia wearing uniforms and chains would henceforth be isolated
from the populace (whether to protect the public from being corrupted by
the prisoners or vice versa was subject to debate); in their isolation, prisoners
would be given time and solitude in which to contemplate their offenses
and repent. To those ends, inmates were isolated in individual cells and
required to keep silent when they had contact with other prisoners during
the day. Yet practice did not completely square with purpose. While prisoners
were removed from contact with the public on the streets, they were
not completely separated from the public gaze. For most of the antebellum
era, Pennsylvania prisons admitted visitors for a small fee, in exchange
for which they were allowed to watch the prisoners go about their daily
lives. Nor did separate cells always breed the desired penitence; in 1820 a
riot in the Walnut Street prison led to several deaths. That failure did not
undermine Pennsylvania’s enthusiasm for the general project. In the 1820s
the state opened two penitentiaries, one, in Pittsburgh, known as Western
State Penitentiary, and the other, in Philadelphia, known as Eastern State.
Western State was beset by administrative problems for several years, but
Eastern State quickly became a model for other states to follow. There, the
scheme initially set up at Walnut Street Prison was modified so that prisoners
no longer mingled with one another during the day. Instead, they
remained in isolation for 23 hours out of 24, working and living in separate
cells.
At roughly the same time that Pennsylvania was refining its penitentiary
model, New York was experimenting with its own. It opened Auburn
Prison in 1805 and for the next two decades experimented with living
arrangements in an effort to achieve the perfect system. During the 1820s,
prisoners at Auburn were also placed in isolation, but it was more extreme
than the Pennsylvania version since the prisoners at Auburn were not given
any work to occupy their time. In an effort to use loss of individual identity as
a further means of punishment, Auburn’s prisoners were assigned uniforms,
shaved, and given limited access to family, friends, or lawyers. They marched
to and from their cells in lockstep and always in ordered ranks, and they were
supposed to be silent at all times. The result was a disaster. After several
prisoners at Auburn committed suicide and several others attempted it, the
prison administration concluded that the system was unworkable. In 1829,
a modified system of punishment, which came to be known as the Auburn
Plan, was put into effect. Under this scheme, prisoners worked together
during the day (in contrast to the situation at Eastern State, where they
worked in isolation) and then were confined to individual cells at night.
This continued to be the general rule at Auburn until overcrowding in
the middle of the century forced the prison administration to abandon the
solitary cell.
Reformers in Pennsylvania and New York hoped that a regime of work,
along with regimented lives, would teach prisoners self-discipline and self-restraint.
But if reformers intended prison labor to be only one element of
a holistic effort to restore inmates to virtue and industry, in the hands of
prison administrators and state governments it became the driving force
behind the new prisons. After administrators at Auburn claimed that their
prisoners produced such a significant profit that the prison did not need to
seek appropriations from the legislature, profits became the explicit goal
of penitentiaries built in many states – Massachusetts, New Hampshire,
Ohio, Kentucky, Alabama, Tennessee, Illinois, Georgia, and Missouri. The
different states pursued profit in different ways and with different rates of
success. Between 1800 and 1830 the penitentiary administrators in Massachusetts
ran the prison industry, while in nearby New Hampshire the
state sold its inmates’ labor to private contractors, who employed inmates
in shoemaking, stone cutting, and blacksmith work. Inmates in the penitentiary
in Alabama also produced a range of goods, including clothing,
shoes, farm equipment, and furniture, but in contrast to New Hampshire,
their work was leased to a single individual who ran the prison as if it were
a small manufacturing concern. Until 1853, Missouri leased its inmates
out to private parties. When public anxiety about escaped prisoners finally
led administrators to abandon that practice, the state adopted a modified
version of the Massachusetts model, building factories within its various
prisons and having the inmates work in-house. In yet another variation on
this theme, from 1831 to 1867 Illinois leased both its prisoners and the
buildings they lived in to businesses.
The profits realized by the different states were as varied as their practices.
Penitentiaries in Kentucky and Alabama turned steady profits in the decades
before the Civil War, while the penitentiaries in Georgia usually did not.
Studies of the Alabama and Kentucky prisons argue that they profited by
dint of good management; others did not. The Massachusetts penitentiary
turned a profit by bribing inmates to work; the penitentiary in Kansas made
a profit, as did Michigan’s, by taking in prisoners from other systems for
a fee (Kansas took in prisoners from Oklahoma, Michigan took in federal
prisoners). The result, at least in Kansas, was a severely overcrowded prison.
Most prisons, in addition, relied on beatings and other forms of punishment
to make sure inmates did their assigned work.
Whether it was because of outrage over financial shenanigans or merely
the result of its famously contrarian mindset, South Carolina did not build
a penitentiary until 1866, preferring to rely on its county jails to hold
prisoners after they were convicted. Although North Carolina and Florida
joined South Carolina in resisting the trend, most other states built penitentiaries
before the Civil War and resumed the practice at war's end. Most
states continued to seek profits from their prisoners into the twentieth century.
Illinois maintained its modified convict leasing system until organized
labor forced through a law barring prison work in 1903, Kansas kept up its
struggle to make a profit by housing inmates until protests from Oklahoma
stopped its practices in 1909, Missouri ran its prison as a profit center until
1920, and New Hampshire did not abandon the practice of convict leasing
until 1932.
While the profit motive remained unchanged, methods did alter in some
states in the aftermath of the Civil War. These states, which were mostly
located in the South, began to lease prisoners out to private enterprises,
much as Missouri had done in the antebellum period. Florida, which had
tried and failed to make a profit on the penitentiary that it finally created
in 1866, began to lease out its prisoners to turpentine farmers, phosphate
mine owners, and railroad companies beginning in 1877. It continued the
practice through World War I. Tennessee and Alabama leased their prisoners
to coal mining concerns, and initially both states found the process quite
lucrative. By 1866, each state was bringing in $100,000 a year from the
prisoner leases, a sum that represented one-third of their respective budgets.
But as time went on, problems arose. Tennessee in particular had difficulties
when non-convict miners rioted and forced coal mining companies to release
their prisoners and close down their mines. Alabama’s experiment with
convict miners was slightly more successful, and the state used convicts,
particularly African Americans, in its mines for several years. But Alabama’s
system was subject to free labor protests as well and worked only so long
as the mining companies were willing to give the convict miners pay and
privileges. When that arrangement broke down, the convict miners refused
to produce and the enterprise became less profitable.
Other Southern states, beginning with Georgia in 1866, shifted away
from leasing out their inmates and instead put them on chain gangs to do
public work. The chain gang was not a Southern invention; from 1786 to
the opening of Walnut Street Prison in 1790, convicts in Philadelphia were
assigned to gangs that did public labor on the streets of the city wearing
a ball and chain. In the 1840s, San Francisco housed prisoners on a prison
ship, the Euphemia, at night and assigned them to do public works in chain
gangs during the day. Nor did the idea spring fully formed from the Georgia
soil at the end of the Civil War. Initially, Georgia assigned misdemeanor
arrestees to the chain gang and leased its felony convicts out to private
enterprise. But time convinced the government of the benefits of having
all its convicts work the chain gang to build public roadways, and in 1908
Georgia passed a law that prohibited convict leasing and put all its prisoners
(including women, who served as cooks) to work in gangs. Other states,
among them North Carolina and South Carolina, followed Georgia’s lead,
assigning some inmates to a variety of public works projects. The practice
continued well into the twentieth century.
The years after the Civil War saw another development in imprisonment,
as specialized prisons were gradually built to deal with specific populations.
Once again, this was not an entirely new idea. The first house of refuge,
a special institution for juvenile offenders, opened in New York in 1825,
and other cities including Boston quickly launched comparable initiatives.
Twenty years later, Boston offered a refinement on this principle when it
opened the first reform school for boys. The first reform school for girls, the
Massachusetts State Industrial School for Girls, did not open until 1856, and
it was not until after the Civil War that other states, among them Wisconsin,
Iowa, Michigan, and Kentucky, created similar institutions. They did not,
however, all follow the same model. When the Louisville, Kentucky, House
of Refuge opened in 1864, its inmates were boys and girls. In contrast,
when the Girls Reform School of Iowa opened for business in 1866, it was,
as its name implied, a single-sex institution. The Michigan Reform School
for Girls, which opened in 1884, not only had an inmate population that
was limited to young women but its entire staff was female as well.
While these institutions physically separated some young inmates from
adult convicts, far more young offenders were housed with the general
prison population. Even after the Civil War, offenders under 21 made up
a portion, sometimes a significant one, of the populations in penitentiaries
and county jails. In 1870, California state courts assigned boys aged only
12 to 15 to San Quentin and Folsom prisons. Of the 7,566 people assigned
to Cook County Jail (in Chicago) in 1882, 508 were under 16 (one was
no older than 8); 1,413 were under 21. Six years later, in 1888, Illinois
executed 17-year-old Zephyr Davis for murder. In the 1890s, a Savannah,
Georgia, newspaper reported that one-third of the people assigned to the
local penitentiary were younger than 20, and 80 of them were less than 15
years old. Nor were juvenile offenders exempt from the profit motive that
drove corrections. In Tennessee, juvenile offenders, who were not separated
from adult inmates until the twentieth century, were expected to earn their
keep by their labor, just as adult inmates were. The same was true for
juveniles in jurisdictions that did separate them from the general prison
population. Inmates in the New York House of Refuge were contracted
out to private businesses or expected to do contract labor within the House
itself. Inmates at the Michigan Reform School were also contracted out to
private businesses. The same held true at reformatories opened for women
offenders. The Detroit House of Corrections, a reformatory for women, ran
a successful chair manufacturing business in the early 1870s.
Reformers, particularly women, had lobbied states to create all-women
cell blocks and to hire women as matrons for female prisoners as early as the
1820s. Some states built special reformatories for women prisoners in the
middle of the century, but for much of the nineteenth century women were
assigned to the same penitentiaries as men. Four women were incarcerated
at Eastern State in 1831, all of them African American. Although there was
a special cell block for women in that prison, at least one of the women, Ann
Hinson, did not live in it, but rather occupied a cell in the most desirable
block among male prisoners. Hinson enjoyed a special status because she
served as the warden’s cook and perhaps his mistress, but her situation,
though extreme, was not uncommon. The Old Louisiana State Penitentiary,
which functioned from the 1830s to 1918, held male and female prisoners
(and a number of the prisoners’ children) throughout most of its history.
Illinois housed female inmates (less than 3 percent of its prison population)
in the penitentiary at Joliet until it finally opened a women’s prison in
1896. Few states took the situation of women inmates seriously in the late
nineteenth century. Missouri appropriated money for a women's prison in
1875, but neglected to build one until 1926. Idaho created a women’s ward
in its penitentiary in 1905, but did not build a women’s prison until 1974.
In contrast to those states that assigned women to penitentiaries along with
men, Massachusetts housed its women prisoners in the county jails until
it created the Reformatory Prison for Women in 1875. One reason for the
delays in creating separate women’s prisons was economic. The prisons and
prison industries relied on women to do their housekeeping.
The first completely separate prison for women (actually, a reformatory,
not a penitentiary) opened in Indiana in 1873. A few years later, in 1877, the
first reformatory for men 30 years and under opened in Elmira, New York. In
theory, it was intended to rehabilitate younger prisoners by educating them
and training them for useful work. To that end, its inmates were graded on
their conduct and placed in different classes based on their behavior, with
the idea of gradually conditioning them to return to the outside world.
In practice, however, things were much as they were in the penitentiaries.
Elmira’s first director, Zebulon Brockway, had previously been director at
the Detroit House of Corrections, where he had been noted for turning
a profit with the prison’s chair manufacturing business, and he brought
the profit motive with him. Elmira inmates worked the entire day in the
reformatory’s several factories and spent only an hour and a half in the
evening at lessons in the reformatory’s carefully designed classrooms.
Although the reformatories boasted a range of services for their inmates,
the greatest differences between the penitentiary and the reformatory were
more basic. One had to do with sentences. Inmates in reformatories typically
had indeterminate sentences so they could work themselves out of
incarceration. In practice, however, as Samuel Walker notes, their sentences
typically lasted longer than fixed terms would have. The other difference had to do with what brought
the inmates to the reformatories in the first place. While some were imprisoned
for committing crimes, many, especially women and children, were
imprisoned on much more amorphous grounds – having drunken parents
or being incorrigible.
Capital punishment was the exception to both the practice of indefinite
sentencing and the desire to turn punishment into profit. Aside from the
reduction in the number of capital offenses, capital punishment in the
United States changed very little from the ratification of the Constitution
to the end of World War I, although there were some efforts at reform in
both halves of the century. In 1846, Michigan abolished capital punishment,
and a handful of other states followed suit. Other states retained the death
penalty, but set limits on it in other ways. By 1850, many states had passed
laws or informally agreed to move executions to restricted venues, usually
inside prison walls, mostly in an effort to emphasize the somber nature of
the event and reduce the degree to which an execution was a public and
popular spectacle.
But for all the rules that provided that executions should occur within the
jail yard, rather than in front of an easily excited crowd, convicted murderers,
like the victims of lynch mobs, continued to be hanged before enthusiastic
mobs, whose members wangled tickets and passes to the event from sheriffs
and local politicians or simply slipped in past the guards watching the
gates. The pattern continued after the Civil War, as newspapers reported
the executions in grand detail for those who could not make it to the hanging
themselves. In Chicago, coverage of an execution typically began a day or so
before, with extended stories of the last days, and then the final hours, of the
convict. Those stories led up to accounts of the final scene, which reported
on the manner in which the condemned approached death (whether with
manly courage, cowardice, or dumb indifference), recounted the religious
devotions, if any, that preceded the hanging, and recorded any last words
that the defendant uttered before the drop. The hanging of particularly
infamous criminals, such as the Haymarket defendants, provided Chicago’s
papers with at least a week’s worth of stories, but even Frank Mulkowski,
dismissed by most papers as nothing more than a brutish Polish immigrant,
earned several days’ worth of coverage prior to his execution in 1886.
The biggest change in the death penalty occurred in 1890 when, after several
years of debate and considerable lobbying by the purveyors of electricity,
the first death by electrocution was attempted at Auburn Penitentiary in
New York. Described as quicker, surer, and less painful than death by hanging
– which, in the hands of an inept hangman, all too often involved slow
strangulation – the first electrocution was anything but. The condemned
prisoner, William Kemmler, did not die until the second attempt and had
to sit strapped to his chair convulsing uncontrollably for several minutes
after the first attempt while the generator was restarted. Fortunately for
those who favored the new approach, the next year New York successfully
executed four men at Sing Sing using the electric chair. Although that
execution quieted some who protested against the practice, opponents of
the death penalty had some brief successes in this period. In 1907, Kansas
abolished the death penalty, the first state to do so since before the Civil
War. Within the next ten years, six other states followed suit; the last,
Missouri, did so in 1917. But those successes were short lived. Two years
after it passed the law abolishing the death penalty, Missouri reversed itself,
reinstating the death penalty. By 1920, three of the other states that had
just abolished the death penalty had reinstated it as well.
CONCLUSION
Standard, court-centered accounts of criminal justice in the United States
over the long nineteenth century often have an unarticulated premise: that
the country moved away from a localized system of criminal justice to
embrace the European model of the nation-state, and in so doing abandoned
its commitment to popular sovereignty. While some studies note the gains
offered by this shift, particularly emphasizing the benefits of having the
protections of the Bill of Rights apply to state court proceedings, others
appear more concerned by the loss of an indigenous political tradition
and the decline of community power. Framed as a narrative of declension,
those histories gloss over the extent to which extra-legal violence, popular
pressure, and exploitation shaped criminal justice in America during the
long nineteenth century. They can do so only by ignoring the struggles that
pitted governed against government in state court criminal trials, and the
moments when different parts of the government battled one another. And
when they do so, they forget the extent to which legal decisions depended
more on who the parties were, or the passions of the moment, than on what
the law required.
Contemporaries had a sharper understanding of what was going wrong
and what needed to be done. The first Justice Harlan’s laments in Hurtado
were echoed by Roscoe Pound’s complaints about popular influence on
law.18 Nor were those objections the product of some sort of post–Civil
War decline. In the antebellum era, for every article that was published
praising the local courts when they rendered a verdict consistent with local
ideas of justice, rather than the rule of law,19 there was a second that deplored
the same verdict as a sign of the nation's retreat into a jurisprudence of
lawlessness.20
18 Roscoe Pound, "The Need of a Sociological Jurisprudence," Green Bag 19 (October 1907), 607.
19 Philadelphia Public Ledger 8 April 1843, 2 (verdict in Mercer trial).
20 Anon., "The Trial of Singleton Mercer for the Murder of Mahlon Hutchinson Heberton," New Englander 1 (July 1843), 442.
6
citizenship and immigration law, 1800–1924:
resolutions of membership and territory
kunal m. parker
The paradigmatic function of a national immigration regime is to defend
a territorial inside from a territorial outside. Access to and presence within
this territorial inside are determined on the basis of whether one is a “citizen”
or an “alien,” where both terms are understood in their formal legal
sense. All of the activities we associate with the contemporary U.S. immigration
regime – exclusion and deportation, entry checkpoints, border patrols,
detention centers, and the like – make sense in these terms.
Liberal American theorists have provided powerful moral justifications
for this defense of the territorial inside from the territorial outside on the
ground that it is only in this way that the coherence of a national community
on the inside can be preserved and fostered. In this rendering, the coherence
of the national community may not take the form of an oppressive Blut und
Boden nationalism. Rather, the territorial inside must be a homogeneous
space of rights enjoyed by all insiders. Although most of these insiders will
be citizens, resident immigrants will be treated fairly and given a reasonable
opportunity to become citizens. The very coherence of the territorial
inside as a homogeneous space of rights justifies immigration restriction.
Outsiders – who are imagined as citizens of other countries – have no
morally binding claim to be admitted to the inside.
This theoretical rendering of the activities of the national immigration
regime is the product of recent history. For the first century of the United
States’ existence as a nation (from the American Revolution until the 1870s),
a national immigration regime that regulated individuals’ access to, and
presence within, national territory on the basis of their national citizenship
simply did not exist. Even after such a regime came into existence in the
1870s, the idea of numerical restrictions on immigration emerged only
slowly and was not comprehensively established until the 1920s.
More important, both before and after the establishment of a national
immigration regime, there was simply no such thing as a territorial inside
that was a homogeneous space of rights enjoyed by all those who were
territorially present. Throughout American history, the territorial inside
has always been rife with internal foreigners or outsiders who have – in a
manner exactly analogous to the figure of the outsider of liberal immigration
theory – found themselves restricted in their ability to negotiate the
American national territory or otherwise inscribed with a lack of belonging.
Indeed, the activities of the national immigration regime themselves
appear inevitably to be accompanied by an often deliberate blurring of the
distinction between inside and outside, citizen and alien.
To recover this history, it is necessary first to invoke the now-vanished
world of contested non-national memberships and territorialities that prevailed
in the United States until the Civil War. Even as it confronted
mass immigration from places like Ireland and Germany, this was a world
characterized by multiple internal foreignnesses – principally those applicable
to native-born free blacks and paupers – that as such prevented the
emergence of a national immigration regime that could direct its gaze outward
on the external foreignness of aliens. Only after the Civil War, when
national citizenship had been formally extended to the entire native-born
population and national citizenship was tentatively linked to the right to
travel throughout national territory, could a national immigration regime
premised on the external defense of national territory emerge.
Although the core legal relationship between national citizenship and
national territory was established for the first time as a result of the Civil
War, the path to a national immigration regime of numerical restrictions
and “illegal aliens” was neither automatic nor predetermined. Between
1870 and 1924, confronted with a vastly expanded immigration stream
from Southern and Eastern Europe and Asia, the American immigration
regime shifted from a strategy that sought to sift out limited numbers of
undesirables from a basically desirable immigrant stream to a strategy based
on the presumption that no alien could enter, and remain within, national
territory unless explicitly permitted to do so. This shift took place in a set
of overlapping contexts familiar from the writings of American historians –
industrial capitalism, scientific racism, formal imperialism, expansion of
the national government, and the rise of the administrative state. Yet each
new restriction was beset with all manner of uncertainty. How precisely,
for example, was one to define “whiteness” for purposes of naturalization
law? How was one to determine country quotas for the new immigration
regime? How was one to set boundaries between the power of immigration
officials and the power of courts?
Notwithstanding the formal extension of national citizenship to the
entire native-born population in the aftermath of the Civil War, various
internal foreignnesses emerged as the national immigration regime sought
to exclude certain kinds of aliens as undesirable. For every undesirable
immigrant of a certain ethnic or national description, there corresponded
a domestic minority subjected to discrimination and surveillance. Groups
that had once found themselves on the inside as the result of a colonial or
imperial acquisition of territory were reclassified to the “outside” and fell
within the purview of the immigration regime. Conjoined to these new
species of internal foreignness must be the legally sanctioned, formal and
informal, public and private foreignness imposed on African Americans in
the form of segregation – a closing off of public and private spaces analogous
to the closing of the border to immigrants. Ironically, important parts of
the battle against racial segregation in the urban North would be fought
against European ethnic immigrants.
The object of historicizing aspects of the contemporary U.S. immigration
regime is to emphasize that there is nothing immanent in national
citizenship nor inevitable about its relationship to national territory that
points toward the kind of immigration regime that currently subsists in the
United States. It is also to show, through an examination of the long history
of American citizenship and immigration, that the distinction between
inside and outside, citizen and alien, is never clean.
I. EMERGING FROM THE EIGHTEENTH CENTURY (1780–1820)
It is essential to distinguish rigorously between the new category of U.S.
citizenship that emerged in the aftermath of the American Revolution
and the state-level legal regimes that governed the individual’s rights to
enter and remain within state territories. In the late eighteenth and early
nineteenth centuries, the legal relationship between national citizenship
and national territory did not undergird immigration restriction. Instead,
U.S. citizenship as a category slowly infiltrated the state-level regimes.
During the Confederation period, the individual states moved to define
their own citizenries and to establish naturalization policies. At the same
time, however, there was a sense that the American Revolution had created
a national politico-legal and territorial community that transcended state
boundaries. This is reflected in the “comity clause” of Article IV of the
Articles of Confederation, which reads in part as follows: “The better to
secure and perpetuate mutual friendship and intercourse among the people
of the different states in this union, the free inhabitants of each of these states
(paupers, vagabonds, and fugitives from justice excepted) shall be entitled
to all privileges and immunities of free citizens in the several states; and
the people of each state shall have free ingress and regress to and from any
other state.” The clause sought for the first time to create something like a
relationship between national membership and national territory through
the imposition of the duty of comity on the individual states. (Admittedly,
as James Madison pointed out at the time, the clause did so in a confused way
by asking states to accord the “privileges and immunities of free citizens” to
the “free inhabitants” of other states.1) However, what is especially revealing
about the clause are the classes of individuals it excludes from the benefits of
this obligation of comity; namely, “paupers, vagabonds and fugitives from
justice.”
With the formation of the United States at the end of the 1780s, the
category of U.S. citizenship emerged for the first time as the legal category
that would define membership in the new national political community.
An important feature was the idea of voluntary, as distinguished from perpetual,
allegiance. The English theory had been that subjects owed lifelong
allegiance to the monarch. Not surprisingly, the notion that allegiance could
be chosen – and hence cast off – was important in justifying the break from
Great Britain.
Paradoxically, notwithstanding the new emphasis on the voluntary nature
of allegiance, U.S. citizenship was extended among the native-born population
by fiat. However, the question of what segments of the native-born
population should count as U.S. citizens remained vague. As a sparsely
populated country in need of settlers, the United States retained the basic
jus soli or birthright citizenship orientation of English law. However, the
principle of jus soli probably worked best only for native-born whites. At its
moment of origin, the U.S. Constitution did not deal explicitly with the
question of whether or not those belonging to other groups – free blacks,
slaves and Native Americans – qualified as U.S. citizens by reason of birth
in U.S. territory.
The U.S. Constitution was more explicit about the induction of aliens
into the political community. Article I, Section 8 gave Congress the power
to promulgate “a uniform rule of naturalization.” In 1790, the first federal
naturalization act limited naturalization to a “free white person” who had
resided for two years in the United States, proved his “good character,”
and taken an oath “to support the constitution of the United States.”2 The
naturalization period was increased to five years by the Naturalization Act of
1795 and has remained at five years ever since, with only one brief aberration
in the late 1790s.3
The U.S. Constitution also revamped the comity clause of the Articles
of Confederation. Article IV, Section 2 provided that "the Citizens of each
State shall be entitled to all Privileges and Immunities of Citizens in the
several States." The embarrassing, but revealing, reference to "paupers,
vagabonds and fugitives from justice" in the "comity clause" of the Articles
of Confederation was removed.
1 James Madison, The Federalist, No. 42.
2 Act of March 26, 1790 (1 Stat. 103).
3 Act of January 29, 1795 (1 Stat. 414). The aberration was the short-lived Naturalization Act of June 18, 1798 (1 Stat. 566), which increased the naturalization period to fourteen years.
Despite the inauguration of the category of U.S. citizenship, however,
Congress did not acquire the explicit constitutional authority to formulate
a national immigration policy. Neither did it attempt to establish one in
practice. If one had to identify the principal mode in which U.S. citizenship
was wielded against aliens at the national level, it would make most sense
to say that U.S. citizenship acquired meaning principally as a means of
controlling the influence of aliens in the national political arena. Segments
of the American national leadership repeatedly expressed fears about the
capacity of aliens reared under monarchies or carried away by the excesses
of the French Revolution to exercise republican citizenship in a responsible
fashion. Evidence of these fears may be observed in the Constitutional
Convention’s debates over the qualifications for national political office and
later, and more egregiously, in the Federalist anti-alien paranoia reflected
in the passage of the Alien and Sedition Acts in the late 1790s.
The point, however, is that immigration policies – those everyday policies
that determined outsiders’ access to, and presence within, territory –
remained in the hands of the states. State and local authorities regulated
outsiders’ access to their territories without relying on U.S. citizenship
as providing the exclusive logic for distinguishing between insiders and
outsiders.
For the most part, in the decades immediately following the American
Revolution, the states continued colonial policies for regulating access to
their territories. Colonial policies regarding the settling of British North
America were influential in establishing an image of America that endured
well beyond the Revolution. Hector St. John de Crèvecoeur's celebrated
Letters from an American Farmer, which depicted America as a place where
Europe’s dispossessed could flourish, had in fact been written before the
Revolution, although it was not published until the 1780s. Furthermore,
a set of concerted British policies that had constituted America as something
of a haven for European Protestants by the mid-eighteenth century
fed directly into the post-Revolutionary national idea, first articulated in
Thomas Paine’s Common Sense, of America as “an asylum for mankind.”
The actual legal structures regulating movement of peoples during the
colonial period had always been distinct from the rosy vision of America as
an “asylum.” Colonial assemblies had adhered to the mercantilist idea that
population equaled wealth. However, they had also repeatedly expressed
misgivings about the specific kinds of people entering their territories as a
result of British policies. These misgivings could be categorized as dislike
of (a) the foreign (with a particular animus directed against Catholics), (b)
the criminal, and (c) the indigent. Of these, it is the last that determined
most unequivocally the logic of colonial territorial restriction.
What is especially noteworthy about colonial territorial restrictions is
the seemingly indiscriminate way in which they mingled dislike of insiders
and outsiders. The regulation of what was frequently labeled a “trade in persons”
appears to have been an external manifestation of a highly articulated
internal regime for regulating natives’ access to territory. The governing
logic of this comprehensive system of territorial restriction is to be found in
American versions of the seventeenth-century English poor laws. The idea
was that the poor were to be denied territorial mobility as the poor, because
of the fear that they would impose costs on the places they entered, whether
they entered such places from a place “beyond sea” or from a place just a
few miles away.
It is particularly telling that local poor relief officials were entrusted with
the responsibility for administering external and internal statutes regulating
the territorial mobility of persons. In eighteenth-century Massachusetts,
for example, shipmasters were required by a series of statutes to post a bond
with local poor relief officials so that towns receiving “lame, impotent,
or infirm persons, incapable of maintaining themselves . . . would not be
charged with their support.”4 At the same time, townspeople were required
in a series of “entertainment” statutes to notify local poor relief officials of
individuals from other towns who were visiting them; failure to notify
meant imposition of legal responsibility for any costs associated with such
individuals on their hosts. Towns even provided their own legal residents
with travel documents – species of internal passports – certifying that they
would take them back in the event of illness or injury.
In the eighteenth century, in other words, “foreignness” was a polyvalent
word. It denoted those who were outside the larger community of allegiance
and blood, to be sure, but could also designate those who came from neighboring
towns and colonies. National membership was not mapped onto
territory in such a way that it carried with it rights of access to national
territory conceived as such. Nor did territorial disabilities follow uniquely
and unequivocally from a lack of national membership.
This sense that the poor were undesirable as the poor, and were to be
denied territorial mobility regardless of their citizenship status, continued
in full force after the American Revolution. As we have seen, the comity
clause of the Articles of Confederation excepted "paupers, vagabonds, and
fugitives from justice" from each state's obligation to accord the "privileges
and immunities of free citizens" to the "free inhabitants" of the other states.
The native poor were thus rendered as internal foreigners to be denied
territorial mobility.
4 "An Act Directing the Admission of Town Inhabitants," in The Acts and Resolves, Public and Private, of the Province of Massachusetts Bay, 21 Vols. (Boston: Wright & Potter, 1869–1922), I, chap. 23 (1701).
Although states remained faithful to colonial poor relief models in most
essentials, they also began incrementally and confusedly to insert new categories
of citizenship into these models. But the legislation of this period
does not appear to have distinguished meaningfully between U.S. citizenship
and state citizenship. Furthermore, the disabilities imposed on natives
and aliens were roughly comparable and were the result of a local politics.
For example, under New York’s 1788 “Act for the Better Settlement and
Relief of the Poor,” shipmasters were required to report the names and
occupations of all “persons” brought into the port of New York and would
be fined £20 for each unreported person, and £30 if such person was a “foreigner.”
The law further denied admission to “any person” who could not
give a good account of himself to local authorities or was likely to become
a charge to the city; such persons were to be returned “to the place whence
he or she came.”5
Massachusetts chose to refer to state citizenship, rather than U.S. citizenship,
in its legislation. Thus, in the early 1790s, in a dramatic departure
from colonial practice, Massachusetts made citizenship “of this or any of the
United States” (but not U.S. citizenship) a prerequisite to the acquisition of
“settlement” or “inhabitancy” in a town, thereby making it impossible for
non-citizens to acquire legal rights to residence and poor relief in the town
in which they lived, worked, and paid taxes. The same law also contained
various provisions intended to make it difficult for citizens from other states
and Massachusetts citizens from other towns to acquire a “settlement.”6
Occasional statutory discriminations between citizens and aliens notwithstanding,
indigent citizens might sometimes be worse off than indigent
aliens. When cities and towns physically removed foreigners from their
territories, they were far more likely to remove those who were citizens than
those who were not, for the simple reason that it was cheaper to send someone
to a neighboring state than to Europe. Connecticut’s law of 1784 expressed
an accepted principle of sound poor relief administration when it authorized
the removal of all foreigners who became public charges, so long as the cost
of transportation did not exceed "the advantage of such transportation."7
Of 1,039 individuals "warned out" of Boston in 1791, 237 were born in
foreign countries, 62 in other states, and 740 in other Massachusetts towns.
Of course, "warned out" means only that these individuals were rendered
legally subject to physical removal, not that they were actually physically
removed. But evidence of actual physical removals out of state in late-eighteenth-century
Massachusetts points toward removals to New York
and Nova Scotia, rather than to Europe or the West Indies.
5 "Act for the Better Settlement and Relief of the Poor" (1788, chap. 62), Laws of the State of New York Passed at the Sessions of the Legislature Held in the Years 1785, 1786, 1787, and 1788, Inclusive (Albany: Weed Parsons and Company, 1886).
6 "An Act Ascertaining What Shall Constitute a Legal Settlement of any Person in any Town or District Within this Commonwealth," Acts 1793, Chapter 34.
The highly local understanding of the distinction between insider and
outsider points to a central feature of systems of territorial restriction in
the late eighteenth and early nineteenth centuries; namely, that even as territorial
restrictions were promulgated at the state level and began to incorporate
the new categories of U.S. and state citizenship, individual cities
and towns rather than state authorities remained responsible in the first
instance for the administration of poor relief and territorial restrictions. As
immigration increased in the late eighteenth and early nineteenth centuries,
seaports such as Boston, New York, and Philadelphia began to protest the
injustice of having to bear the burden of supporting sick, poor, and disabled
aliens. Tensions developed between state and local authorities; they would
become more serious and would be resolved only through bureaucratic centralization
at the state level by the middle of the nineteenth century.
In the late eighteenth and early nineteenth centuries, one other emergent
system of internal territorial restriction should be mentioned: that applicable
to free blacks. This system of territorial restriction was intertwined
with that of the poor laws, but also distinct from it.
Slaves had always been subject to spatial and territorial restrictions as
slaves. However, in the late eighteenth and early nineteenth centuries, the
Northern abolition of slavery and the introduction of manumission acts in
the South brought the problem of free blacks into sharp focus. Towns and
localities all over the North expressed distaste for free blacks and sought to
exclude and remove them from their territories through any means available.
The important point here is that Northern towns and localities were expressing
hostility not only toward blacks from the territorial outside (fugitive
slaves or free blacks from the mid-Atlantic or Southern states; sailors and
other migrants from the West Indies) but also toward individuals who had
always been on the territorial inside (i.e., individuals who had been tolerated
as town and local residents so long as they were slaves, but who had become
repugnant with the coming of freedom). Freedom for Northern blacks
brought with it, in other words, official, although ultimately unsuccessful,
efforts to render them foreign. As we shall see, this problem would become
much more serious in the Upper South later in the nineteenth century. It is
important, nevertheless, to establish that this distinct problem of internal
foreignness began in the late eighteenth and early nineteenth centuries in
the North.
7 Quoted in Marilyn C. Baseler, "Asylum for Mankind": America, 1607–1800 (Ithaca, N.Y., 1998), 197.
II. TENSIONS OF THE ANTEBELLUM PERIOD (1820–1860)
From the perspective of the law of immigration and citizenship, the period
from 1820 to 1860 was one of immense confusion. Although there was a
marked development of a sense of national citizenship as implying certain
rights with respect to national territory, this burgeoning national imagination
coexisted with powerful – in the case of free blacks, increasingly powerful
– internal foreignnesses. The result was two distinct sets of conflicts.
The first conflict occurred over the question whether the U.S. government
or the states possessed the constitutional authority to regulate immigration.
There was no immigration restriction at the national level. Nevertheless,
between 1820 and 1860, as part of its developing Commerce
Clause jurisprudence, the U.S. Supreme Court chipped away at the states’
constitutional authority to regulate immigration. However, as long as slavery
remained alive, the U.S. Supreme Court would not definitively rule that
states had no constitutional authority to regulate immigration, because to
do so would have stripped states – especially Southern states – of the power
to regulate alien and native free blacks’ access to their territories.
In this atmosphere of uncertainty surrounding the locus of constitutional
authority over immigration restriction arose a second, distinct conflict:
should the everyday regulation of outsiders’ access to territory take place
at the state or local level? Since the eighteenth century, the regulation of
outsiders’ access to territory had taken place at the local level. However,
centralized state authority grew steadily throughout the antebellum period.
Particularly as mass immigration into the United States picked up after
1820, state authorities increasingly became persuaded that the excessively
parochial interests of local officials were obstructing the efficient regulation
of non-citizens’ access to state territories. By 1860, after decades of experimentation
and conflict between state and local authorities, large state-level
bureaucratic apparatuses had emerged to regulate immigration into state
territories.
Federal-State Conflict and the Problem of Black Foreignness
As the Republic matured, there emerged the sense that some relationship
must exist between national citizenship and national territory. This sense
was conventionally expressed in terms of the rights that citizens of one state
enjoyed with respect to the territory of another. In 1823, in the clearest
antebellum attempt to elucidate the meaning of the “privileges and immunities”
clause of Article IV of the U.S. Constitution, Justice Bushrod
Washington declared that the “privileges” within the meaning of the constitutional
text were those “which are, in their nature, fundamental.” One
of these allegedly “fundamental” privileges was “the right of a citizen of
one state to pass through, or to reside in any other state, for the purposes of
trade, agriculture, professional pursuits, or otherwise. . . . ”8 However, one
also encounters judicial pronouncements to the effect that national citizenship
as such implied a right to travel throughout national territory. For
example, in 1849, Chief Justice Taney’s dissenting opinion in the Passenger
Cases stated, “We are all citizens of the United States, and, as members of
the same community, must have the right to pass and repass through every
part of it without interruption, as freely as in our own States.”9
The apprehension that there was some relationship between national
citizenship and national territory continued to leave open the interrelated
questions of (a) who belonged to the community of national citizens and
enjoyed rights to enter and remain within every part of national territory
and (b) which authority, the federal or the state governments, had the power
to exclude and remove non-citizens from territory. We explore the second
question before turning to the first.
The formal constitutional question was whether Congress possessed the
power to exclude and remove non-citizens from national territory pursuant
to Article 1, Section 8 of the U.S. Constitution, which gave it the authority
“to regulate Commerce with foreign Nations, and among the several
States,” or whether the states possessed a corresponding power as part of their
regular and residual “police” power to promote the health, safety, and welfare
of their populations. The paradoxes of the antebellum legal representation of
the movement of persons as “commerce” should not be lost. To begin with,
the eighteenth-century “trade” in indentured labor had essentially died
out by 1820. More important, however, to argue that the movement of
“persons” was “commerce,” and therefore that Congress could constitutionally
regulate immigration, had anti-slavery implications. It opened the door
for suggestions that Congress could constitutionally prevent the slave and
free states from regulating the ingress of alien and native free blacks into
their territories and even hinted, surreptitiously and by implication, that
native free blacks might be U.S. citizens with the right to move throughout
national territory.
Accordingly, it was the pro-slavery wing of the U.S. Supreme Court that
argued most insistently that “persons” were not “articles of commerce” and
that tended most often to invoke the figure of "the immigrant" as someone
who exercised volition in coming to the United States. In his dissent in the
Passenger Cases, for example, Justice Daniel argued indignantly that "the
term imports is justly applicable to articles of trade proper, – goods, chattels,
property, subjects in their nature passive and having no volition, – not to
men whose emigration is the result of will"; it would be a "perversion" to
argue otherwise.10 For constitutional purposes, the invocation of the white
immigrant as an actor capable of volition in movement served to secure the
perpetuation of black slavery.
8 Corfield v. Coryell, 4 Wash. C.C. 371, 380–81 (U.S.C.C. 1823).
9 Passenger Cases (Smith v. Turner; Norris v. Boston), 48 U.S. (7 How.) 283, 492 (1849).
The tussle between the view that states could not constitutionally regulate
immigrant traffic and the (pro-slavery) view that states could constitutionally
regulate the influx of all non-citizens as a matter of state police
power was never resolved before the Civil War. In 1837, in Mayor of the City
of New York v. Miln, the U.S. Supreme Court upheld a New York law that
required shipmasters to report passenger information and to post bonds for
passengers who might become chargeable to the city.11 In 1849, however,
in the Passenger Cases, a deeply divided Court struck down New York and
Massachusetts laws that involved the collection of head taxes on incoming
immigrants.12
Beneath this formal constitutional debate lay the explosive question of
whether free blacks were part of the community of U.S. citizens and, as such,
whether they possessed the right to travel throughout national territory.
Throughout the antebellum period, both free and slave states adamantly
insisted on their ability to exclude alien and native free blacks. Even in
states that saw themselves as bastions of anti-slavery sentiment, free blacks
were unwelcome. In 1822, in a report entitled Free Negroes and Mulattoes, a
Massachusetts legislative committee emphasized “the necessity of checking
the increase of a species of population, which threatens to be both injurious
and burthensome. . . . ”13 States further west sought to oblige blacks seeking
residence to give sureties that they would not become public charges. In
other instances, blacks were forbidden to move into the state altogether,
sometimes as a result of state constitutional provisions.
The paranoia about the presence of free blacks was, of course, far greater in
the slave states, where the presence of free blacks was thought to give a lie to
increasingly sophisticated racial justifications for slavery. As the ideological
struggle over slavery intensified, the situation of native free blacks in the
South worsened. Slave state legislation usually barred the entry of free blacks
not already residents of the state. However, over time, the states extended
these prohibitions to their own free black residents who sought to return
after traveling outside the state either to a disapproved location or to any
destination at all. Slave states also often required that manumitted slaves
leave the state forever, on pain of re-enslavement. Shortly before the Civil
War, several slave states considered forcing their free black populations to
choose between enslavement and expulsion, and Arkansas actually passed
such legislation.
10 Passenger Cases at 506.
11 Mayor of New York v. Miln, 36 U.S. (11 Pet.) 102 (1837).
12 Passenger Cases.
13 Massachusetts General Court, House of Representatives, Free Negroes and Mulattoes (Boston: True & Green, 1822), 1.
The U.S. Supreme Court repeatedly acquiesced in free and slave states’
attempts to exclude native-born free blacks. For example, in 1853, in Moore
v. Illinois, Justice Grier stated, “In the exercise of this power, which has been
denominated the police power, a State has a right to make it a penal offence
to introduce paupers, criminals or fugitive slaves, within their borders. . . .
Some of the States, coterminous with those who tolerate slavery, have found
it necessary to protect themselves against the influx either of liberated or
fugitive slaves, and to repel from their soil a population likely to become
burdensome and injurious, either as paupers or criminals.”14 The larger
point here is that, in acquiescing in the states' efforts to exclude native-born
free blacks, the Court was also taking a position on native-born free
blacks’ status as U.S. citizens. If Chief Justice Taney could state in the
Passenger Cases that national citizenship implied a right to travel throughout
national territory, to uphold states’ rights to exclude native-born free
blacks was tantamount to excluding native-born free blacks from national
citizenship.
In general, native-born free blacks remained suspended between the status
of citizen and alien. Northern courts trod carefully and hypocritically in
this area, formally upholding both black citizenship and the discriminatory
laws that impaired that status. Their conclusions were ultimately used to
justify a denial of free blacks’ national citizenship on the ground that no
state actually recognized the full citizenship of free blacks and, therefore,
that free blacks could not be members of the national community.
This position shaped the United States’ willingness to recognize blacks as
its own when they traveled abroad. U.S. Secretaries of State invoked blacks’
lack of full citizenship in Northern states to justify their hesitation in issuing
native-born free blacks passports attesting to their U.S. citizenship. In 1839,
a Philadelphia black was denied a passport on the ground that Pennsylvania’s
denial of suffrage to blacks meant that its blacks were not state citizens,
which implied that they could not be U.S. citizens. From 1847 on, the
policy was to give blacks special certificates, instead of regular passports.
The U.S. Supreme Court’s tortured 1857 decision in Scott v. Sandford merely
confirmed this suspension of native-born free blacks between the status of
14 Moore v. Illinois, 55 U.S. (14 How.) 13 (1853) (Grier, J.).
citizen and alien. According to Justice Taney’s opinion, blacks could not
be U.S. citizens by reason of birth on U.S. soil (jus soli), birth to a citizen
father (jus sanguinis), or naturalization.15
The legal decision to suspend blacks between citizen and alien status
should not obscure the range of efforts, private and public, actively to
represent native-born free blacks as “Africans” with a view to shipping
them back to Africa. Here, the effort was not so much to deny blacks legal
citizenship as quite literally to give blacks – but only those who were free –
a bona fide foreign identity and place of origin to which they could be
removed. Representing itself variously, as the occasion demanded, as both
pro-slavery and anti-slavery, the American Colonization Society privately
established the colony of Liberia in West Africa, to which it sought to
encourage free blacks to return. Slaveholders all over the South conditioned
manumission on their slaves’ agreement to depart for Liberia, conditions
that were legally upheld.
Considerable public support for colonization existed, particularly in the
Upper South. Legislatures in Delaware, Maryland, Kentucky, Tennessee,
and Virginia all appropriated moneys to facilitate colonization. Maryland’s
plan was the most ambitious. In the early 1830s, Maryland appropriated
$200,000 to be spent over twenty years to “colonize” manumitted slaves.
The legislature ordered county clerks to report all manumissions to a state-appointed
Board of Managers for the Removal of Colored People, which
instructed the Maryland State Colonization Society to remove the manumitted
slave to Africa or any other place deemed suitable. Newly freed
blacks wishing to remain in the state could choose re-enslavement or appeal
to a county orphan’s court. Those who were unable to obtain court permission
and resisted the re-enslavement option might be forcibly transported.
Of course, the draconian nature of these laws should not suggest an equally
draconian enforcement: Baltimore became a center of free black life in the
antebellum years.
Given this considerable investment in denying blacks’ legal citizenship
and in insisting on their foreignness, it is not surprising that at least some
Southern state courts formally assimilated out-of-state free blacks to the
status of aliens. This was hardly a common legal position (for the most
part, states were satisfied simply to deny blacks’ citizenship), but it is the
ultimate illustration of the internal foreignness of native-born free blacks.
In the 1859 decision of Heirn v. Bridault, involving the right of a Louisiana
free black woman to inherit the property of a white man with whom she
had been cohabiting in Mississippi, the Mississippi Supreme Court formally
ruled that the woman could not inherit property as an alien. It offered the
15 Scott v. Sandford, 60 U.S. (19 How.) 393 (1857).
following rationale: “[F]ree negroes [who were in Mississippi in violation of
law] are to be regarded as alien enemies or strangers prohibiti, and without
the pale of comity, and incapable of acquiring or maintaining property in
this State which will be recognized by our courts.”16
State-Local Conflicts Over Immigration
The constitutional conflict over whether the federal government or the states
possessed the legal authority to regulate immigration created an atmosphere
of legal uncertainty in which states were left to cope as best they could with
the growing tide of immigrants. Antebellum immigration from Europe
began in earnest in the 1820s and peaked between the late 1840s and
mid-1850s as a result of the Irish famine migration. The migration of
the first half of the nineteenth century was largely German and Irish and
heavily Catholic. It was directly connected with, indeed indispensable to,
the development of capitalism in the North. For the first time, it made
sense to refer to an immigrant working class.
For the first time as well, there was a highly organized popular nativist
movement. Antebellum popular nativism might be characterized as an
attempt on the part of white working-class Americans at a time of bewildering
change to combat what they perceived as their own increasing disempowerment.
Fired by the fear of a vast Catholic conspiracy designed
to subvert the Protestant Republic, nativists sought in the first instance to
reduce immigrant participation in political life. Anti-immigrant tracts routinely
called for lengthening the naturalization period so that immigrants
would be properly educated in the ways of republican life before they could
vote, checking fraudulent naturalizations, and safeguarding the integrity
of the ballot box.
Throughout the surge of popular nativism, state-level immigration
regimes remained oriented to the exclusion of the poor, although they
also targeted immigrants with criminal backgrounds. However, important
developments distinguished these state-level immigration regimes from
their eighteenth-century predecessors. First, the modalities of territorial
restriction were changing. Statutes that had once imposed restrictions on
all incoming “persons” with only slight discriminations aimed at aliens
gave way to statutes that targeted incoming “alien passengers” alone. Possibly
the change registered a growing sense that the right to travel without
undue impediment, at least for white Americans, was now one of the “privileges
and immunities” secured them by Article IV of the U.S. Constitution.
Whatever the reason, the local nature of territorial membership was giving
16 Heirn v. Bridault, 37 Miss. 209, 233 (1859).
way to a sense that (a lack of) national citizenship implied (a lack of) rights
to enter state territories. Second, states engaged in a strategic attempt to terminate
resident immigrants’ rights to remain in state territories. Although
the applicable poor law regimes continued to provide for the removal of
both in-state and out-of-state paupers to their localities or states of origin,
the bureaucratic focus was increasingly on “alien paupers.” The aim
was explicitly to frighten immigrants into refraining from seeking poor
relief for fear that removal would be a consequence of making demands for
public assistance. The result was the beginning of a regular, if still small,
transatlantic deportation process in the 1830s and 1840s.
The creation of a relationship between national citizenship and state territory
was accompanied by a change in the kinds of disabilities placed on
entering aliens. In the late eighteenth and early nineteenth centuries, shipmasters
had been required to post bond in respect of incoming persons with
local poor relief officials; these bonds would be acted on should such persons
become chargeable to the localities they entered. However, local poor
relief officials had often found it difficult to collect on the bonds. Immigrants
often changed their names on arrival, which made them impossible
to trace. In the 1820s, 1830s, and 1840s, accordingly, there was a shift to a
system of outright taxation. In Massachusetts and New York, shipmasters
had to pay a tax on all incoming immigrants and to post bond only for
incoming immigrants with physical disadvantages. The tax revenues supported
a vast network of services for paupers, both immigrant and native.
When the Passenger Cases invalidated the Massachusetts and New York head
taxes in 1849, states resorted to the stratagem of requiring a bond for all
incoming immigrants and offering shipmasters the “option” of commuting
bonds for a fee that was the exact equivalent of the head tax.
The relationship between national citizenship and state territory was
inextricably bound up with the creation of centralized state-level bureaucratic
structures that dislodged the local structures that had continued in
force since the eighteenth century. Although this history must necessarily be
faithful to the legal-institutional arrangements prevailing in the different
states, the experience of Massachusetts is illustrative. There, the centralization
of control over aliens’ territorial rights and poor relief claims that
took place between the late 1840s and mid-1850s was in an immediate
sense a response to the Irish famine migration of the same period. But it
was also the culmination of growing tensions between the state and the
towns over matters of immigration and poor relief. Under the system of
territorial restriction and poor relief that had prevailed since the eighteenth
century, towns were required to bear the costs of supporting their own
poor. However, they were also expected to administer poor relief to those
who had failed to acquire legal residency in any town in the state, a category
that included immigrants and out-of-state migrants, on condition of
being reimbursed by the state. At the same time, town poor relief officials
were entrusted with the responsibility of regulating outsiders’ access to and
presence within territory.
As immigrant pauperism increased throughout the 1830s and 1840s,
Massachusetts sought to reduce the costs of supporting immigrant paupers
by instituting a head tax on incoming immigrants and by generating
discourses of citizenship that held the claims of immigrant paupers to be
essentially illegitimate because they were the claims of aliens. However, the
state’s efforts to reduce the costs associated with immigrant pauperism were
repeatedly frustrated by the actions of town poor relief officials. Town officials
were notoriously lax in enforcing “alien passenger” laws because they
knew that immigrant paupers would become the charge of the state rather
than of the towns. They also showed a disturbing tendency to cheat the state
in their request for reimbursements for supporting immigrant paupers by
illegally inflating their reimbursement requests (the towns sought to shift
the costs of supporting their own poor onto the state, often by representing
native paupers as immigrant paupers). At the height of the Irish famine
migration, state officials concluded that they simply could no longer afford
the costs associated with town poor relief officials’ excessively narrow view
of their own interests, which led them to cheat the state or ignore its laws.
The result was that Massachusetts centralized the regulation of immigrants’
access to territory and the administration of poor relief to immigrants in
the late 1840s and early 1850s.
The Massachusetts experience of centralization shows how the state-generated
discursive link between national citizenship and state territory
could be of little concern at the local level. One reason for this persistent
local disregard of a state-generated connection between national citizenship
and state territory – and of state discourses that sought to demonize the
immigrant poor as aliens – was that national citizenship, understood in the
sense of a right to poor relief and a right to reside in the community of
one’s choice, was still a relatively meaningless category when it came to
the treatment of the native poor generally. So long as the native poor were
disenfranchised and remained unable to travel throughout national territory
as citizens – in other words, so long as the native poor were a species of
internal foreigners – local officials would continue to ignore the state-level
distinction between the native poor and the immigrant poor. They would
treat native paupers much as they treated alien paupers, hounding them
out of their towns and localities. Only with the replacement of local control
by state control was this problem solved.
III. THE FEDERAL ERA (1860–1924)
In bringing slavery to an end, the Civil War removed the major impetus for
states’ insistence on the right to regulate access to their territories. State-level
immigration regimes were declared unconstitutional shortly thereafter.17
The Civil War also resulted in a clearing up of the variegated antebellum
extension of citizenship to the native-born population. In 1868,
expressly with a view to overruling the Dred Scott decision, Congress wrote
the principle of jus soli or birthright citizenship into the Fourteenth Amendment
to the U.S. Constitution, thereby fundamentally reordering the relationship
between federal and state citizenship. U.S. citizenship was defined
as a matter of a “person’s” birth or naturalization in the United States, with
state citizenship following from U.S. citizenship as a function of where
U.S. citizens resided. Native-born blacks would never again be suspended
between the legal status of citizen and alien or, worse yet, formally assimilated
to the status of aliens in certain states.
The Architecture of the Federal Immigration Order
As U.S. citizenship was formally extended to the entire native-born population
and the vestiges of state-level territorial control removed, it began
to make sense to conceive of national territory as a space of and for the
community of U.S. citizens in a more encompassing way than had been
possible in the antebellum period. There were tentative moves toward constitutionalizing
the right to travel throughout the nation’s territory as an
incident of U.S. citizenship. In 1867, the U.S. Supreme Court struck down
a Nevada tax on persons leaving the state by means of public transportation
on the ground that national citizenship encompassed the right to travel
from state to state.18 Although this decision did not attempt to bring state
legal restrictions on the territorial mobility of the native poor to an end, it
was the first significant constitutional pronouncement that set the stage for
their long decline (a decline that would not be completed before the second
half of the twentieth century19).
Such developments might be seen as contributing to the emergence of a
national immigration regime that could turn its gaze exclusively outward
on immigrants. But new forms of internal foreignness emerged coevally
with the national immigration regime. Unlike in the antebellum period,
17 Henderson v. Mayor of New York, 92 U.S. 259 (1876); Chy Lung v. Freeman, 92 U.S. 275
(1876).
18 Crandall v. Nevada, 73 U.S. (6 Wall.) 35, 41 (1867).
19 Shapiro v. Thompson, 394 U.S. 618, 89 S.Ct. 1322 (1969).
however, they did not get in the way of the development of the national
immigration regime; rather, they were often its direct outcome. The targeting
of immigrants by race, ethnicity, and national origin blurred the
distinction between immigrants and domestic minorities, even as making
U.S. citizenship a prerequisite to the enjoyment of various rights, privileges,
and benefits introduced various kinds of discrimination into the lived
community.
If the aftermath of the Civil War resulted in a national immigration
regime and the creation of fresh internal foreignnesses, however, the constitutional
legacy of the Civil War also, perhaps unwittingly, limited both
the federal and state governments in ways that could sometimes redound to
the benefit of immigrants. As national territory was consolidated as a space
of and for U.S. citizens, it was also consolidated in theory as a homogeneous
space of constitutional rights – transformed, as it were, into a coherent territorial
inside. The nature of these constitutional rights was of course not
always clear and would be the subject of struggle. Nevertheless, because
the Fourteenth Amendment’s language lent its protections explicitly to
“persons,” rather than citizens, immigrants on the territorial inside could
invoke it against the state.
The structure of the new immigration regime is exemplified in the state’s
dealings with Chinese immigrants. Chinese had been immigrating to the
United States since the late 1840s. Despite the small number of Chinese
immigrants, anti-Chinese sentiment in California was intense. Organized
white labor in particular saw in the Chinese a dangerous threat to its hard-won
standard of living.
The question of Chinese access to U.S. citizenship was resolved early
against the Chinese. In the aftermath of the Civil War, Congress had moved
to amend the naturalization statute that had hitherto restricted naturalization
to “free white persons” so as to make naturalization available to individuals
of African descent. In 1870, Senator Charles Sumner of Massachusetts
had proposed simply to delete references to “white” in the naturalization
law, thereby opening up the possibility of citizenship to all immigrants,
but Congressmen from the Western states had defeated his proposal on the
ground that it would permit the Chinese to become citizens. Accordingly,
naturalization was extended only to “aliens of African nativity and to persons
of African descent.”20 Attorneys subsequently bringing naturalization
petitions on behalf of Chinese immigrants argued that the term “white” in
the 1870 naturalization law was poorly defined and should be interpreted to
include the Chinese. The federal courts disagreed, however, on the ground
that a white person was of the Caucasian race and that Chinese were of the
20 Act of July 14, 1870 (16 Stat. 254).
“Mongolian race.”21 Nevertheless, the Fourteenth Amendment’s embrace
of “persons” in its birthright citizenship clause ensured that native-born
Chinese would be U.S. citizens. In 1898, despite arguments from the government
to the contrary (which suggests that the jus soli principle of the
Fourteenth Amendment could be disputed even thirty years after its promulgation),
the U.S. Supreme Court held as much.22
Despite the hostility to admitting Chinese into the national community,
there had always existed a current of pro-Chinese sentiment growing from
appreciation for Chinese labor, on the one hand, and the desire to increase
commercial contact with China, on the other. In 1868, the United States and
China had signed the Burlingame Treaty, which recognized reciprocal rights
of travel “for purposes of curiosity, of trade, or as permanent residents.”23
However, anti-Chinese sentiment in California slowly seeped into national
attitudes toward the Chinese. In 1875, as the very first piece of federal
immigration legislation, Congress passed the Page Law, aimed at excluding
“coolie labor” and Chinese prostitutes.24
As the move to restrict the entry of Chinese became a key issue in the
national election of 1880, the United States renegotiated the Burlingame
Treaty to give itself the right to “regulate, limit or suspend” the immigration
of Chinese laborers whenever their entry or residence in the United States
“affects or threatens to affect the interests of that country, or to endanger
the good order of [the United States] or of any locality within the territory
thereof.”25 Shortly thereafter, in 1882, Congress enacted the first of a series of
Chinese exclusion laws suspending the immigration of Chinese laborers.26
For the first time, the United States denied individuals the right to enter
the country on the ground of race or nationality.
When the Chinese exclusion laws were challenged before the U.S.
Supreme Court, the Court articulated for the first time in immigration law
what was known as the “plenary power” doctrine. Although it acknowledged
that the 1888 exclusion law under challenge was in fact in conflict
with the treaty with China, the Court decided that it had no power to curb
Congress’s power to exclude aliens, regardless of the injustices inflicted on
them. It expressed itself as follows: “The power of exclusion of foreigners
being an incident of sovereignty belonging to the government of the United
States, as part of the sovereign powers delegated by the Constitution, the
21 In re Ah Yup, 5 Sawyer 155 (1878).
22 United States v. Wong Kim Ark, 169 U.S. 649 (1898).
23 Treaty of July 28, 1868 (16 Stat. 739).
24 Immigration Act of March 3, 1875 (18 Stat. 477).
25 Treaty of November 17, 1880 (22 Stat. 826).
26 Act of May 6, 1882 (22 Stat. 58).
right to its exercise at any time when, in the judgment of the government,
the interests of the country require it, cannot be granted away or restrained
on behalf of any one.”27 Thus the source of the federal government’s exclusion
power – a power that had not been free from doubt as a matter of
constitutional law for the entire period up to the Civil War – shifted from
antebellum interpretations of the Commerce Clause to an invocation of
“sovereignty” that had no explicit grounding in the constitutional text.
From its decision to immunize from substantive judicial review the
federal power to exclude entering immigrants, the U.S. Supreme Court
moved to immunize the federal power to deport resident immigrants. The
1892 Geary Act provided for the deportation of resident aliens. All Chinese
laborers living in the United States were required to obtain a “certificate
of residence” from the Collector of Internal Revenue within one year of
the passage of the Act. Under regulations promulgated pursuant to the
1892 Act, the government would issue a certificate only on the “affidavit
of at least one credible [white] witness.” Any Chinese alien who failed to
obtain the certificate could be “arrested . . . and taken before a United States
judge, whose duty it [was] to order that he be deported from the United
States.”28
The Geary Act sparked a non-compliance campaign led by the Chinese
Six Companies, the leading Chinese immigrant organization of the day.
However, when the Six Companies set up a test case that reached the U.S.
Supreme Court, they met defeat. The Court declared that “[t]he right of
a nation to expel or deport foreigners, who have not been naturalized or
taken any steps towards becoming citizens of the country, rests upon the
same grounds, and is as absolute and unqualified as the right to prohibit
and prevent their entrance into the country.” Even worse, the Court ruled
that deportation “is not a punishment for a crime,” but only “a method
of enforcing the return to his own country of an alien.” The implication
of interpreting deportation as a civil, rather than a criminal, sanction was
that the deported alien was not entitled to the constitutional protections
ordinarily applicable in criminal proceedings.29
The very harshness of the plenary power doctrine led to the invigoration
of two different sets of legal principles that are a hallmark of modern
immigration law; namely, the territorial inside/outside distinction and
the procedure-substance distinction. With respect to the territorial inside/
outside distinction, the U.S. Supreme Court made it clear that the Fourteenth
Amendment to the U.S. Constitution protected all “persons” who
27 Chinese Exclusion Case (Chae Chan Ping v. United States), 130 U.S. 581 (1889).
28 Chinese Exclusion Act of May 5, 1892 (27 Stat. 25).
29 Fong Yue Ting v. United States, 149 U.S. 698 (1893).
happened to be on the territorial inside from certain kinds of actions by
the federal and state governments (including discriminatory legislation by
state governments that the Court deemed a violation of the Equal Protection
Clause).30 It is important to note, however, that this constitutional commitment
to protecting all “persons” who happened to be inside U.S. territory
did not reach the federal government’s “plenary power” to exclude and
deport on the basis of race.
The procedure-substance distinction was the subject of regular struggle
between the federal government and Chinese immigrants. As the federal
government’s substantive power to exclude and deport aliens was progressively
immunized from judicial review under the “plenary power” doctrine,
Chinese immigrants’ strategies focused increasingly on procedural issues.
The battle between the state and Chinese immigrants over procedure is
significant because it reveals how the state consistently sought, through
manipulation of its emerging administrative forms, to blur the distinction
between citizen and alien in its efforts to exclude and remove Chinese.
From the beginning, the Chinese community in San Francisco had been
adept in seeking out judicial assistance to curb the excesses of overzealous
immigration officials. Despite federal judges’ stated opposition to Chinese
immigration, they often used their habeas corpus jurisdiction to
overturn immigration officials’ decisions to exclude Chinese immigrants,
thereby leading to considerable tension between the courts and the bureaucrats,
with the latter accusing the former of subverting the administration
of the Chinese exclusion laws. The success of Chinese immigrants in using
courts to curb the excesses of immigration officials eventually led Congress
to pass laws that endowed administrative decisions with legal finality. Immigration
restriction thus became one of the key sites for the emergence of
the administrative state.
In 1891, dissatisfied with the state bureaucracies that had been administering
federal immigration laws, Congress passed a new immigration law
that abrogated contracts with state boards of immigration and created a
federal superintendent of immigration who would be subject to review by
the secretary of the treasury.31 The 1891 act also made decisions of immigration
inspection officers final. Appeals could be taken to the superintendent
of immigration and then to the secretary of the treasury. Thus, judicial
review of administrative decisions was eliminated for entering immigrants.
In 1891, when a Japanese immigrant who was denied admission on the
ground that she would become a public charge challenged the procedural
30 Yick Wo v. Hopkins, 118 U.S. 356 (1886); Wong Wing v. United States, 163 U.S. 228
(1896).
31 Immigration Act of March 3, 1891 (26 Stat. 1084).
arrangements of the 1891 act as a denial of due process, the U.S. Supreme
Court dismissed her claims.32
The principle of judicial deference to administrators led to a blurring
of the distinction between citizens and aliens, and thence to the constitution
of the internal foreignness of Chinese immigrants. Of immediate
concern to administrators was the strategy adopted by the attorneys of Chinese
immigrants of taking admission applications of Chinese alleging to be
native-born citizens directly to the courts – and thereby bypassing administrators
– on the ground that the exclusion laws and the administrative
remedies they envisioned were applicable only to aliens (and not to citizens).
The U.S. Supreme Court weighed in for the government. In United States v. Sing
Tuck, a case involving Chinese applicants for admission who claimed to be
citizens, the Court ruled that such applicants must exhaust their administrative
remedies as provided by the exclusion laws before being able to turn
to the courts. Although the Court refrained from deciding whether administrative
officers had jurisdiction to determine the fact of citizenship, the
dissenters recognized that the implication of the decision was to blur the
distinction between citizen and alien and that the decision ultimately rested
on a racialized notion of who might legitimately claim U.S. citizenship.
As Justice Brewer put it, with Peckham concurring, “Must an American
citizen, seeking to return to this his native land, be compelled to bring with
him two witnesses to prove the place of his birth or else be denied his right
to return and all opportunity of establishing his citizenship in the courts
of his country? No such rule is enforced against an American citizen of
Anglo-Saxon descent, and if this be, as claimed, a government of laws and
not of men, I do not think it should be enforced against American citizens
of Chinese descent.”33
A year later, the Court went further. In United States v. Ju Toy, it held
that the administrative decision with respect to admission was final and
conclusive despite the petitioner’s claim of citizenship. Justice Holmes
stated that, even though the Fifth Amendment might apply to a citizen,
“with regard to him due process of law does not require a judicial trial.”34
Not surprisingly, after the Ju Toy decision, habeas corpus petitions filed
by Chinese applicants for admission in the Northern District of California
dropped dramatically, from a total of 153 cases filed in 1904, to 32 in 1905,
to a low of 9 in 1906. In subsequent years, after criticism of the Bureau of
Immigration and its own decisions, the Court scaled back the harshness of
the Ju Toy decision by requiring in the case of a Chinese American applicant
32 Nishimura Ekiu v. United States, 142 U.S. 651, 660 (1891).
33 United States v. Sing Tuck, 194 U.S. 161, 178 (1904).
34 198 U.S. 253 (1905).
for admission who alleged citizenship that the administrative hearing meet
certain minimum standards of fairness.35 However, this last decision appears
to have had little impact on administrative practice.
The blurring of the difference between citizen and alien at the procedural
level suggests one of the important ways in which the national immigration
regime produced internal foreignness in the late nineteenth and early
twentieth centuries. If the Fourteenth Amendment had made it impossible
to take U.S. citizenship away from native-born Chinese as a matter of substantive
law (albeit not for want of trying), immigration officials could do
so as a matter of procedural law. In being denied judicial hearings, native-born
Chinese were assimilated to the status of Chinese aliens. Bureaucratic
prejudice could keep certain Americans from entering and hence residing
in the country in which they had been born. This is perfectly consistent
with the view of Bureau of Immigration officials, who viewed native-born
Chinese as “accidental” or “technical” citizens, as distinguished from “real”
citizens.
The denial of adequate procedure as a means of blurring the distinction
between citizen and alien was only one of the ways of producing the internal
foreignness of the Chinese. The harshness of the exclusion and deportation
laws applicable to the Chinese and the general paranoia about the legality
of the Chinese presence translated into a range of legal and administrative
measures that forced the Chinese American community to live for decades
in perpetual fear of American law enforcement officials. Anti-Chinese sentiment
had, of course, resulted in various kinds of discrimination since
the middle of the nineteenth century. But now the immigration regime
itself spilled into the community. Starting in 1909, for example, all persons
of Chinese descent – including U.S. citizens – were required to carry
certificates identifying them as legally present in the country.36 As deportation
increasingly became a tool for regulating Chinese presence in the
early twentieth century, Chinese communities all over the United States
were repeatedly subjected to what has since become a tested method of
ferreting out “illegal aliens” and of impressing on certain kinds of citizens
their lack of belonging – the immigration raid, with all the possibilities of
intimidation and corruption that it carried.
By 1905, the restrictionist focus had shifted far beyond the Chinese.
However, the legal struggles of Chinese immigrants had brought about an
articulation of the major principles of the federal immigration order. These
might be listed as follows: racialized citizenship, plenary congressional
power over the exclusion and deportation of immigrants as an incident
35 Chin Yow v. United States, 208 U.S. 8 (1908).
36 U.S. Dept. of Commerce and Labor, Bureau of Immigration, Treaty, Laws, and Regulations
(1910), 48–53.
of “sovereignty,” broad judicial deference to administrative decisions, and
the legal production of the internal foreignness of disfavored immigrant
groups.
Shaping the Community and Closing the Golden Door
In the 1870s and 1880s, domestic capital clearly recognized the advantages
of unrestricted immigration in driving down wages and reducing
the bargaining power of organized labor. Andrew Carnegie put it thus in
1886: “The value to the country of the annual foreign influx is very great
indeed. . . . During the ten years between 1870 and 1880, the number of
immigrants averaged 280,000 per annum. In one year, 1882, nearly three
times this number arrived. Sixty percent of this mass were adults between
15 and 40 years of age. These adults were surely worth $1,500 each – for
in former days an efficient slave sold for this sum.”37
Organized labor had long been calling for immigration restriction to
protect American workers from the competition posed by immigrant labor.
However, because it was politically untenable to shut down European, as
opposed to Asian, labor migration in its entirety, organized labor increasingly
focused on the issue of “contract labor.” In an ironic twist to the
ideologies of freedom of contract that dominated the post–Civil War years,
the immigrant who entered the United States with a transportation contract
was represented as someone who had been “imported” by capitalists
and, therefore, as someone who was less free and more threatening than the
immigrant who came in without a contract.
Congress responded in 1885 with the first of the contract labor laws.
The 1885 Act prohibited employers from subsidizing the transportation
of aliens, voided transportation contracts, and imposed fines on violators.38
The legislation, however, proved to be purely symbolic. In the first place, the
practice of paying for the transportation of laborers, which had been prevalent
in antebellum years when the need for skilled laborers was great, had
largely died out by the late nineteenth century (when family-based migration
served capital’s need for fungible unskilled labor). Second, enforcement
of the laws appears to have been cursory and ineffective. Between 1887 and
1901, at most 8,000 immigrants were barred under the alien contract labor
laws out of a total immigration flow of about 6,000,000. In 1901, a congressional
Industrial Commission concluded that the laws were “practically
37 Andrew Carnegie, Triumphant Democracy, or Fifty Years’ March of the Republic (New York:
Charles Scribner’s Sons, 1886), 34–35.
38 Act of February 26, 1885 (23 Stat. 332). A second act provided for the deportation of
any contract laborer apprehended within one year of entry. Act of October 19, 1888 (25
Stat. 566).
a nullity, as affected by the decisions of the court, and by the practices of
the inspectors, and the administrative authorities.”39
As the national immigration regime consolidated itself, the number of
grounds of exclusion grew by leaps and bounds. The anxieties that the state
expressed in its exclusion laws were typical of the punitive, moralizing,
reformist, and eugenicist mood of the late nineteenth and early twentieth
centuries. The exclusion of those “likely to become a public charge,” a provision
enacted in 1882 and based on antebellum state statutes, became the
most important ground of barring entry into the United States.40 Generations
of immigrants learned to wear their finest clothes at the moment of
inspection to convey an impression of prosperity. Closely related were the
laws restricting the admission of aliens with physical and mental defects,
including epileptics and alcoholics, which drove prospective entrants to
attempt to conceal limps and coughs. There were also laws targeting individuals
with criminal backgrounds (including those convicted of crimes
involving “moral turpitude”), polygamists, and women coming to the
United States for “immoral purposes.”41 Finally, after the assassination
of President McKinley in 1901, the immigration laws began actively to
penalize aliens for their political beliefs.42
However, the heart of the debate over immigration restriction in the
early twentieth century lay not in the protection of the labor market, public
finances, public morals, or the polity itself, but rather in something that
stood in the popular mind for all of these together; namely the protection of
the country’s ethnic/racial stock. Increasingly, the race of immigrants was
coming to do the work of “explaining” the class tensions, labor unrest, and
urban violence that afflicted late nineteenth- and early twentieth-century
America.
One should refrain from easy generalizations about the sources of the
racial theories that were increasingly marshaled to demonize the new immigrants,
who came increasingly from Southern and Eastern Europe, as well
as more distant countries such as Japan and India. European Americans
had access to their rich “internal” experiences with racialized others, to be
sure, but also to earlier experiences with Irish and Chinese immigrants,
not to mention to the fund of racial thinking that had accompanied the
centuries-long European colonial experiences in Europe, Asia, Africa, and
the Americas. All of these sources fed into the new “scientific” racial
39 U.S. Congress, House, Industrial Commission, 1901, Vol. 15, p. lviii.
40 Immigration Act of August 3, 1882 (22 Stat. 214).
41 Act of March 3, 1875 (18 Stat. 477); Immigration Act of March 3, 1891 (26 Stat. 1084);
Immigration Act of February 20, 1907 (34 Stat. 898).
42 Immigration Act of March 3, 1903, (32 Stat. 1203, Section 2).
sensibilities and knowledges of the late nineteenth and early twentieth
centuries.
The fear of the “race suicide” of “Nordics” resulting from the introduction
of more prolific “inferior races,” an idea propagated energetically
by the Eastern-elite-dominated Immigration Restriction League, acquired
considerable currency after 1900. The massive report on immigration submitted
to Congress by the Dillingham Commission (1910–11) shared this
general sensibility by considerately including a Dictionary of Races or Peoples.
The Dictionary exemplified the new “scientific” understanding of race; in
classifying immigrants “according to their languages, their physical characteristics,
and such marks as would show their relationship to one another,
and in determining their geographical habitats,” the Commission identified
dozens of carefully hierarchized “races” of immigrants.43
Predictably, the most virulent attacks were reserved for Asian immigrants
in the West. By 1905, the Asiatic Exclusion League had been organized to
bar the new immigration from Japan and India. Attempts to segregate
San Francisco schools sparked a diplomatic crisis between Japan and the
United States, resulting in the Gentlemen’s Agreement of 1907, according
to which Japan agreed voluntarily to restrict the immigration of Japanese
laborers.
The Asiatic Exclusion League also lobbied fiercely for the exclusion of
Indian immigrants, erroneously labeled “Hindoos.” In the absence of any
statutory provision explicitly prohibiting the entry of Indians, motivated
administrators put generally applicable provisions of immigration law to
creative racist ends. Immigration officials began to interpret the “public
charge” provision to exclude Indian immigrants on the ground that
strong anti-Indian prejudice in California would prevent them from getting
a job, and thus render them “public charges.” When this discriminatory
use of the “public charge” provision was challenged in federal court, it was
upheld.44
The increasing racialization of immigration law had especially adverse
effects on female immigrants. In general, the immigration law of the period
reinforced patriarchal ideas about gender roles. As an observer noted in
1922: “In the main, in the eyes of the law, a man is man, while a woman
is a maid, wife, widow, or mother.”45 This made single or widowed female
immigrants especially vulnerable to aspects of immigration law such as the
43 Dillingham Commission Report, Vol. 5, Dictionary of Races or Peoples, Senate Document
662, Session 61–3 (Washington, DC: Government Printing Office, 1911), 2.
44 In re Rhagat Singh, 209 F. 700 (1913). The U.S. Supreme Court eventually curtailed
immigration officials’ excessively broad interpretations of the “public charge” provision
in Gegiow v. Uhl, 239 U.S. 3 (1915).
45 “The Cable Act and the Foreign-Born Woman,” Foreign Born 3, no. 8 (December 1922).
public charge provisions. But the consequences were worse yet for racialized
immigrants. Chinese women had long experience with such attitudes. In
a bow to patriarchal attitudes, the ban on Chinese immigration did not
translate into a prohibition on the entry of wives. However, widespread
American stereotypes about Chinese prostitutes made Orientalist markers
of matrimony and class status – for example, bound feet suggesting the lack
of need to work – crucial for a Chinese woman hoping to secure admission.
Any evidence that the woman had worked might result in her classification
as a crypto-laborer and her being denied entry. Fears about prostitution
translated into greater interrogation and surveillance of Japanese, as well as
Eastern and Southern European female immigrants. The Dillingham Commission
devoted an entire volume to “white slavery” and sought to match
its findings to the racial characteristics of the immigrant stream. Jewish
women were seen as being especially vulnerable to the lure of prostitution
once they had been admitted.46
In the early twentieth century, as the composition of the immigrant population
changed, courts were compelled to confront once again the question
of racial ineligibility for U.S. citizenship. Although the Chinese had been
declared ineligible since the 1870s, there was considerable ambiguity as to
whether Japanese, Indian, and other immigrants who entered the United
States in the late nineteenth and early twentieth centuries fit within the
black-white binary of naturalization law. Between 1887 and 1923, the federal
courts heard twenty-five cases challenging the racial prerequisites to
citizenship, culminating in two rulings by the U.S. Supreme Court: Ozawa
v. United States (1922) and Thind v. United States (1923). In each case, the
Court’s decision turned on whether the petitioner could be considered a
“white person” within the meaning of the statute.
Taken together, these decisions reveal the shortcomings of racial science.
In earlier years, federal courts had relied on racial science, rather than on
color, and had admitted Syrians, Armenians, and Indians to citizenship as
“white persons.” In Ozawa, the U.S. Supreme Court admitted that color
as an indicator of race was insufficient, but resisted the conclusion that
no scientific grounds for race existed. It avoided the problem of classification
by asserting that “white” and Caucasian were the same and that the
Japanese were not Caucasian and hence not “white.”47 However, in Thind,
the Court was confronted with an Indian immigrant who argued his claim
to eligibility to citizenship on the basis of his Aryan and Caucasian roots.
46 Dillingham Commission Report, Vol. 37, pt. 2, Importation and Harboring of Women for
Immoral Purposes, Senate Document 753/2, Session 61–3 (Washington, DC: Government
Printing Office, 1911).
47 Ozawa v. United States, 260 U.S. 178, 197 (1922).
Now the Court found that the word “Caucasian” was considerably broader
in scientific discourses than it was in non-scientific discourses. Rejecting
the petitioner’s claim to citizenship, it held that the words “white person”
in the naturalization law were words of “common speech, to be interpreted
with the understanding of the common man.”48 Racial science thus was
summarily abandoned in favor of popular prejudice.
If U.S. citizenship was racialized during this period, it was also deeply
gendered. Since the middle of the nineteenth century, male U.S. citizens
had been formally able to confer citizenship on their wives. However, the
law with respect to female U.S. citizens who married non-citizens had been
unclear. In 1907, Congress decided to remove all ambiguities by legislating
“that any American woman who marries a foreigner shall take the nationality
of her husband.”49 In other words, female U.S. citizens who married noncitizens
were not only unable to confer citizenship on their husbands, but
in fact lost their own U.S. citizenship as a consequence of their marriage.
In 1915, the U.S. Supreme Court rejected a challenge to this provision, invoking
the “ancient principle” of “the identity of husband and wife.”50
In the case of native-born Asian American female citizens, this law had the
effect of rendering them permanently unable to reenter the community of
citizens. Having lost their citizenship on marrying an alien, they became
aliens racially ineligible for citizenship.
But in addition to being racialized and gendered, U.S. citizenship
revealed that it had adventitious uses. It could be shaped and manipulated as
a weapon of discrimination. As anti-immigrant sentiment mounted in the
early twentieth century, state legislatures increasingly made U.S. citizenship
a prerequisite to forms of employment and recreation, access to natural
resources, and the like, thereby causing the meanings of U.S. citizenship to
proliferate well beyond the sphere of the political (voting, political office,
service on juries, and so on). Driven by the politics of race and labor, citizenship
thus spilled into the social experiences of work and leisure in the
lived community.
State attempts to discriminate on the basis of citizenship were typically
dealt with as problems of “alienage law.” The constitutional question was
whether a state, in discriminating on the basis of citizenship, had gone
so far as to intrude on the federal government’s (by now) exclusive immigration
power. In general, the U.S. Supreme Court held that a state could
discriminate among citizens and aliens if the state was protecting a “special
public interest” in its common property or resources, a category that was
48 United States v. Bhagat Singh Thind, 261 U.S. 204, 215 (1923).
49 Act of March 2, 1907 (34 Stat. 1228).
50 Mackenzie v. Hare, 239 U.S. 299, 311 (1915).
interpreted over the years to include employment on public works projects,
hunting wild game, and operating pool halls.51
The U.S. Supreme Court also upheld alienage distinctions that were
aimed very clearly and directly at specific racialized immigrant groups.
In the early twentieth century, resentment of Japanese immigrants on the
West Coast increasingly centered on their success in agriculture. In response,
Arizona, California, Idaho, Kansas, Louisiana, Montana, New Mexico, and
Oregon attempted to restrict land ownership by aliens “ineligible to citizenship,”
a category carefully crafted to apply only to Asian immigrants, who
were the only ones legally incapable of naturalizing. When the alien land
laws were challenged, however, the U.S. Supreme Court upheld them.52
The fact that the legislation only affected some racialized groups was not
found to be a problem under the Equal Protection Clause of the Fourteenth
Amendment because the law was framed in neutral terms of discrimination
against non-citizens.
If the Chinese experience with citizenship had prefigured other Asian
immigrant groups’ experiences with citizenship, the events of the 1920s
revealed that the Chinese experience with blanket immigration restriction
also prefigured the experience of Asian and European immigrant groups.
Significant restrictions on immigration occurred only with the xenophobic
frenzy whipped up during World War I.
The context of suspicion fostered by the war enabled nativists to obtain
in the Immigration Act of 1917 some of the restrictionist policies they
had long advocated. A literacy test for adult immigrants was one of their
most important victories. The 1917 law also submitted to the West Coast’s
demand for the exclusion of Indian immigrants. Hesitant to single Indians
out for exclusion on the grounds of race, however, Congress created an
“Asiatic Barred Zone” that included India, Burma, Siam, the Malay States,
Arabia, Afghanistan, parts of Russia, and most of the Polynesian Islands.53
In the end, it is unclear how much the literacy test affected European
immigration, in part because of the spread of literacy in Europe during the
same years.
51 Crane v. New York, 239 U.S. 195 (1915); Patsone v. Pennsylvania, 232 U.S. 138 (1914);
Clarke v. Deckebach, 274 U.S. 392 (1927). However, the U.S. Supreme Court did strike
down an Arizona law that required any employer of more than five employees to employ
at least 80 percent qualified electors or native-born citizens of the United States on the
ground that it would be inconsistent with the exclusive federal authority to “admit or
exclude aliens.” Truax v. Raich, 239 U.S. 33 (1915).
52 Truax v. Corrigan, 257 U.S. 312, cited in Terrace v. Thompson, 263 U.S. 197, 218, 221
(1923).
53 Immigration Act of February 5, 1917 (39 Stat. 874).
By 1920, the war-boom economy had begun to collapse and immigration
from Europe had revived, creating a propitious environment for greater
restriction. Accordingly, in 1921, the logic of immigration restriction that
had been formally applicable to almost all Asian immigrants since 1917 –
exclusion – was extended to European immigrants, albeit in the form of quotas
rather than complete restriction. The Quota Act of 1921 was described
by the commissioner general of immigration as “one of the most radical
and far-reaching events in the annals of immigration legislation.”54 Indeed
it was, but what is of interest here is its arbitrariness.
The Quota Act limited European immigration to 3 percent of the number
of foreign-born people of each nationality residing in the United States in
1910.55 The aim was to give larger quotas to immigrants from Northern
and Western Europe, and to reduce the influx of Southern and Eastern
Europeans. By 1923, the commissioner general of immigration pronounced
the Quota Act a success. He revealed that the percentage of Southern and
Eastern European immigrants had decreased from 75.6 percent of the total
immigration in 1914 to 31.1 percent of the total immigration in 1923.
For the same years, as a percentage of total immigration, immigration
from Northern and Western Europe had increased from 20.8 percent to
52.5 percent.
This change in the composition of the immigration stream did not, however,
satisfy nativists. The Immigration Act of 1924 represented a compromise.
It reduced the percentage admitted from 3 to 2 percent and made
the base population the number of each foreign-born nationality present
in the United States in 1890 instead of 1910. The Senate, balking at such
gross discrimination, allowed the new quota provided that a new “national
origins” test would be used beginning in 1927. The new national origins
test was only superficially fairer. It placed a cap on the total number of
immigrants, limiting admissions to 150,000 each year and using the 1920
census as the base. However, instead of using the number of foreign-born
as its measure, as the earlier quota laws had done, the law set the quotas
according to the proportion of each “national stock,” including both native
and foreign-born people. This favored “old stock” Americans over the new
immigrant population, leaving to immigration officials the nightmare of
calculating “national stocks.”
The 1924 Act also furthered the exclusion of Asians. Though the law
barred such aliens from entering the country as were, to use the euphemistic
phrase of the time, “ineligible for citizenship,” Japanese immigrants were
54 Annual Report of the Commissioner-General of Immigration, 1921, 16.
55 Act of May 19, 1921 (42 Stat. 5).
the real targets of the act because Chinese and Indian immigrants had
already been excluded.56
Thus, in the 1920s, for the first time in the history of immigration
restriction in the United States, the basic theory of exclusion shifted from a
matter of the shortcomings of the individual immigrant (poverty, criminal
background, health, etc.) to a matter of numerical restriction. Decisions to
admit immigrants and the battles that accompanied them would take place
in the abstract language of numbers. Of course, the grounds of exclusion
for poverty, disability, criminal background, political opinion, and the like
would continue in force, but these would henceforth serve to weed out
individuals who had first to demonstrate that they fit within a national
origins quota. The presumption that one could immigrate to the United
States had shifted to a presumption that one could not. With this shift
in presumptions came the figure of the “illegal
alien” and a vast stepping up of border control and deportation activity. For
the first time, Congress legislated a serious enforcement mechanism against
unlawful entry by creating a land Border Patrol.57
Imperialism and Immigration
If the growth of the national immigration regime resulted in the production
of the internal foreignness of American ethnic communities like the Chinese,
the history of immigration from areas in which the United States had
colonial/imperial involvements reveals how groups once treated as on the
inside could progressively be rendered as on the outside and brought
within the purview of the immigration regime.
Although immigration statistics for the early twentieth century are notoriously
inaccurate, scholars estimate that at least one million and possibly as
many as a million and a half Mexican immigrants entered the United States
between 1890 and 1929. Mexico has also been the single most important
source country for immigration into the United States in the twentieth
century. However, it is not simply the numbers of Mexican immigrants,
but the peculiar history of Mexican immigration as one intertwined with
colonialism that warrants separate treatment.
With the Treaty of Guadalupe Hidalgo in 1848, Mexico ceded to the
United States more than half of its territory, comprising all or part of
56 Immigration Act of 1924 (43 Stat. 153). Filipinos were the only Asians unaffected by
the 1924 Act. As non-citizen U.S. nationals by virtue of their colonial status, Filipinos
were exempt from the law. Their immigration to the United States became restricted in
1934.
57 Act of February 27, 1925 (43 Stat. 1049).
present-day Arizona, California, Colorado, Kansas, Nevada, New Mexico,
Oklahoma, Texas, Utah, and Wyoming. This treaty also transformed the
lives of the estimated 75,000 to 100,000 Mexicans who lived in the ceded
territories. It expressly provided that Mexicans could move south of the
new international border or retain their Mexican nationality. If they had
done neither within one year of the treaty’s effective date, however, they
would be considered to have “elected” to become citizens of the United
States.58
The treaty’s extension of U.S. citizenship by fiat to the resident populations
of the ceded territories might have been the ironic consequence of a
State Department bureaucrat’s unsanctioned negotiations in Mexico City.
However, Americans disturbed by the prospect of admitting their racial
“inferiors” into the community of U.S. citizens (it should be recalled that
the treaty was negotiated almost a decade before the Dred Scott decision)
were comforted by the belief that the acquired territories were sparsely
settled and would be transformed by white migration. Mexican Americans
were confidently expected to disappear as a significant presence in the newly
acquired area.
Massive white immigration into the acquired territories during the second
half of the nineteenth century indeed had the effect of rendering Mexican
Americans numerical minorities, even as they experienced a sharp loss
in social, economic, and political power and became victims of racial discrimination
as non-whites. By the end of the nineteenth century, however,
this demographic situation began to change. A range of transformations –
the extension of railway networks, the introduction of the refrigerated boxcar,
the construction of irrigation projects, and so on – laid the foundation
for explosive economic growth in the American Southwest, and hence for
the region’s seemingly limitless demand for agricultural and industrial
labor. These changes took place just as conditions for Mexican peasants
were worsening during the waning years of the nineteenth century. As a
result, Mexicans began to pour into the Southwest.
At a time of rising nativism in the United States vis-à-vis European and
Asian immigrants, it was precisely the colonial context of the acquisition of
the Southwest and the rhetorical uses to which it was put by labor-hungry
U.S. employers that saved Mexican immigrants, at least for a while, from
being formal targets of U.S. citizenship and immigration laws. In 1897,
a federal district court considering the question of Mexicans’ eligibility
for citizenship declared that “[i]f the strict scientific classification of the
anthropologist should be adopted, [the petitioner] would probably not be
classed as white.” However, the constitution of the Texas Republic, the
58 Treaty of Guadalupe Hidalgo, Article VIII.
Treaty of Guadalupe Hidalgo, and other agreements between the United
States and Mexico either “affirmatively confer[red] the rights of citizenship
upon Mexicans, or tacitly recognize[d] in them the right of individual naturalization.”
Furthermore, because these instruments had not distinguished
among Mexicans on the basis of color, all Mexicans would be eligible to
naturalize, regardless of color.59 Thus, the United States’ obligations to
its colonized Mexican American population redounded to the benefit of
Mexican immigrants.
Mexicans were also exempted from racial bars to immigration applicable
to Asian immigrants and the quotas applicable to European immigrants
from the 1920s on. Here, the reason seems to have been successful lobbying
by Western and Southwestern interests to keep the border open. Once
again, the history of Mexicans’ relationship to the Southwest was invoked.
By far the most influential arguments in favor of Mexican immigrants promoted
the idea that history had rendered Mexican immigrants familiar –
yet happily, temporary – sojourners in the United States. In 1926, Congressman
Taylor of Colorado noted that Americans had become used to
working with Mexicans after nearly a century of contact. “It is not at all
like we were importing inhabitants of a foreign country. We understood
each other. They have no influence whatever upon our habits of life or form
of civilization. They simply want work. . . . Generally speaking they are not
immigrants at all. They do not try to buy or colonize our land, and they
hope some day to be able to own a piece of land in their own country.”60 The
idea that Mexican immigrants were birds of passage was cited repeatedly
to assuage nativists’ fears that Mexicans might settle permanently in the
United States.
Eventually, however, Mexican immigrants would also fall victim to the
restrictionist tide, especially because Mexicans disappointed Americans by
inconveniently remaining in the communities in which they labored. By
the mid-1920s, a Mexican “race problem” had emerged in the Southwest.
Although Congress was unwilling to impose quotas on Mexican immigration
or to exclude Mexicans on racial grounds, it sought to restrict Mexican
immigration by administrative means. U.S. consuls in Mexico began to
enforce general immigration restrictions, refusing visas to Mexican laborers.
At the same time, the formation of the Border Patrol in 1925 led to
the first steps to curb Mexican illegal immigration. The official onslaught
against Mexican immigrants reached its peak during the 1930s when officials
of the U.S. Department of Labor, the Border Patrol, local welfare
59 In re Rodriguez, 81 Fed. 337 (W.D. Texas, 1897).
60 Ralph Taylor, in House Committee on Immigration, Immigration from Countries of
the Western Hemisphere: Hearings, 1930, 237–38.
agencies, and other government bodies sought to secure the “voluntary”
return to Mexico of Mexican immigrants and their U.S. citizen children.
Scholars have estimated that between 350,000 and 600,000 individuals
were thus repatriated to Mexico.
The other immigration streams shaped by an imperial context were those
from the Philippines and Puerto Rico, territories acquired as a consequence
of the Spanish American War in 1898. But the late nineteenth-century
moment was very different from the mid-nineteenth-century moment of
the acquisition of the Southwest. In the high noon of racial theory, there
were real doubts about Americans’ ability effectively to ingest these noncontiguous
territories and their racially distinct populations.
Puerto Rico was clearly the more ingestible of the two; its population
numbered less than one million. Accordingly, in the Jones Act of 1917,
Congress enacted a bill of rights for Puerto Rico and granted U.S. citizenship
to Puerto Ricans.61 This was not the full membership enjoyed by
Americans on the mainland. Nevertheless, as a consequence of the Jones Act,
Puerto Ricans could move to the mainland United States and, on becoming
state residents, claim the civil, political, and social rights enjoyed by other
citizens.
The case of the Philippines was more troublesome. If American territorial
acquisitions in earlier periods had been premised on territories’ eventual
admission to statehood, admitting the populous Philippine islands to statehood
was unthinkable. The Filipino nationalist leader Manuel Roxas once
remarked that statehood would have resulted in fifty Filipino representatives
in Congress. Nevertheless, “benevolent” imperialism came with a price. If
they were not U.S. citizens, Filipinos were at least “American nationals.”
As “American nationals,” Filipinos were exempted from the quota acts and
were able to enter and reside within the United States.
Not surprisingly, nativists in the 1920s sought to close the loopholes in
immigration law that allowed Filipinos to enter the United States. However,
because there was a sense in Washington that the anti-Filipino movement
was merely a regional interest, Congress initially failed to act. Eventually,
the desire to exclude Filipinos grew so great that exclusionists actually
allied themselves with Filipino nationalists. They finally proved successful.
The Tydings-McDuffie Act of 1934 provided for Philippine independence
and stripped Filipinos of their status as “American nationals.”62 Filipino
immigration became governed by the national origins quota legislation. A
1935 Repatriation Act sought to remove Filipinos from the United States
by paying their transportation back to the Philippines on condition that
they give up any right of reentry into the country.
61 Act of March 1, 1917 (39 Stat. 951). 62 Act of March 22, 1934 (48 Stat. 456).
CONCLUSION
The attempt by petitioners in Ozawa and Thind to obtain classification as
“white” rather than as “African” for purposes of naturalization law reveals
a great deal about how nineteenth- and twentieth-century immigrants,
European and Asian, attempted to fit themselves into American racial hierarchies.
Racial jostling on the part of immigrants has a long history, punctuated
by dramatic and violent events such as the 1863 New York City
draft riots when Irish immigrants lynched African Americans to protest
their own conscription into the Union war effort.
The rise of the national immigration regime was premised on the removal
of the structural internal foreignnesses of the antebellum period and the constitution
of U.S. territory as a homogeneous space of constitutional rights.
It translated, as has been suggested, into a fresh set of internal foreignnesses
as the immigration regime spilled over into the lived community in the
form of immigration raids and heightened surveillance of American ethnic
communities such as the Chinese.
However, the late nineteenth century also witnessed the emergence of
another form of internal foreignness: legally sanctioned, public and private,
formal and informal racial segregation. Perhaps this was not of the same legal
order as the efforts of states in the antebellum period to exclude portions
of the native-born population – free blacks – from their territories and to
assimilate them to the formal status of aliens. The passage of the Civil War
amendments had made such kinds of discrimination illegal. Nevertheless,
in the decades that followed the Civil War, courts permitted other, newer
kinds of discrimination and segregation through the reinvigoration of the
public/private distinction or the spurious idea of “separate but equal.”63
By the early twentieth century, then, a multitude of internal spaces in
America – whether they involved shopping or transportation, residence
or recreation, employment or education – were thoroughly fragmented,
rendered open to some groups and closed to others. Closing off such spaces
was especially significant because it was precisely in the variegated spaces
of the new urban America that social membership would increasingly be
instantiated and realized. Although immigrant groups such as Jews and
Catholics, Asians and Latinos, were certainly victims of forms of segregation,
its greatest impact was on African Americans.
The boundaries of spaces closed off to African Americans were actively
patrolled through public laws and policies, judicially recognized private
devices such as the racially restrictive covenant or the shopkeeper’s absolute
“right to exclude,” the efforts of police and vigilantes, and the systematic
63 Plessy v. Ferguson, 163 U.S. 537 (1896).
infliction of petty humiliation and violence. African Americans confronted
borders – were made foreigners – as part of their everyday lives, but in
paradoxical ways. Although they might not be permitted to purchase homes
in certain neighborhoods, they were permitted to work there as domestics.
Although they could not be guests in certain hotels, they could labor in
hotel kitchens. Often, the object of segregation was simply to point to itself,
as when African Americans in the south were required to sit in designated
parts of public buses.
The irony of the struggle to desegregate residential and educational
spaces in America, especially in the urban North between 1940 and 1980,
was that it was often fought precisely against Jewish and Catholic immigrants
and their immediate descendants who had come to occupy intermediate
positions in America’s ethnic and racial hierarchies, if they had not
already become fully “white.” To be sure, it was often precisely members of
those immigrant groups who labored alongside African Americans, operated
establishments that served them, and supported the African American
struggle for civil rights. However, these immigrant groups also actively distanced
themselves from African Americans – for example, American Jews
who performed “blackface” – in order to negotiate their own social standing.
It was in the struggles between African Americans and white ethnic
Americans that one of the most egregious forms of twentieth-century internal
foreignness (residential and educational segregation) was dismantled,
even as it was simultaneously reconstituted in the form of suburbanization
and an ongoing “urban crisis.”
The African American experience in the United States, both before and
after the Civil War, might be taken as a model for thinking about immigration.
It suggests that foreignness has no intrinsic connection to whether
one stands inside or outside territory. That boundary is simultaneously produced
and transgressed, not least in the activities of the immigration regime
itself. The model calls for a measure of caution when we designate those
knocking at America’s gates as outsiders from other countries to whom we
owe nothing. American history tells us that the status of outsider has often,
even paradigmatically, been conferred on those most intimately “at home.”
7
FEDERAL POLICY, WESTERN MOVEMENT, AND CONSEQUENCES
FOR INDIGENOUS PEOPLE, 1790–1920
David E. Wilkins
In virtually every respect imaginable – economic, political, cultural, sociological,
psychological, geographical, and technological – the years from the
creation of the United States through the Harding administration brought
massive upheaval and transformation for native nations. Everywhere, U.S.
Indian law (federal and state) – by which I mean the law that defines and
regulates the nation’s political and legal relationship to indigenous nations –
aided and abetted the upheaval.
The nature of U.S. Indian law is, of course, fundamentally different from
the various indigenous legal and customary traditions that encompassed the
social norms, values, customs, and religious views of native nations. These
two fundamentally distinct legal cultures, and their diverse practitioners
and purveyors, were thus frequently in conflict. Important moments of
recognition, however, did take place, particularly during the early treaty period
(1600s–1800), and later there were infrequent spasms of U.S. judicial
recognition. In Ex parte Crow Dog (1883) and Talton v. Mayes (1896), for
example, the U.S. Supreme Court acknowledged the distinctive sovereign
status of native nations by holding that the U.S. Constitution did not constrain
the inherent rights of Indian nations because their sovereignty predated
that of the United States.1 Perhaps the period of greatest European
acceptance occurred during the encounter era when indigenous practices
of law and peace, particularly among the tribal nations of the Northeast,
served as a broad philosophical and cultural paradigm for intergovernmental
relations between indigenous peoples and the various European and
Euro-American diplomats and policymakers with whom they interacted.
Whether tribal, based in indigenous custom and tradition, or Western,
based in English common law custom and tradition, law speaks to the basic
humanity of individuals and societies. In both cases, it provides guidance
for human behavior and embraces ideals of justice. Initially, therefore, law
1 109 U.S. 556; 163 U.S. 376.
was a powerful way for indigenous and non-indigenous leaders to forge
well-founded diplomatic relations.
This state of multicultural negotiations, of treaties and mutual respect,
would not be sustained. Gradually Euro-American attitudes of superiority –
legal, political, religious, and technological – became uppermost. Tribal
systems of law, policy, and culture came to be disrespected, displaced, and
sometimes simply destroyed. Shunted aside into the corners as colonized
peoples, native peoples seeking justice were required to use the same Anglo-
American legal system that had devastated their basic rights.
Since the early 1800s, U.S. Indian law has only occasionally acknowledged
the distinctive condition – tribal sovereignty – that structures every
indigenous community’s efforts to endure in their political and legal relationship
with the federal government and the constituent states. The
absence of genuine bilateralism – the lack of indigenous voice in law and
politics despite the written diplomatic record – has plagued the political
and legal relationship between tribal nations and the United States ever
since. Here we focus on the creation of this situation.
The greatest absence in the study of American legal history and federal
Indian law is the actual voice and presence of American Indians. That
daunting silence enables Western law practitioners to act as if their vision
and understanding of the law are all there is or ever was. Their presumption
is contradicted by the ways in which the treaty relationship unfolded and in
which indigenous peoples still struggle to practice their own legal traditions
in the face of overwhelming pressure to ignore or belittle those very traditions.
But the presumption is immensely powerful. How did U.S. law come
so to dominate, directly and indirectly diminishing the inherent sovereign
status of native nations and their equally legitimate legal traditions? The
short answer is that the reluctance or unwillingness to acknowledge the legal
pluralism of the continent stemmed from the inexorable drive of national
and state politicians, the legal establishment, business entrepreneurs, and
white settlers to ensure that nothing derail Euro-America’s expansion from
a fledgling national polity to an internationally recognized industrial state
wielding unprecedented power, domestically and abroad.
The law, as defined and exercised by those in power in federal, state, and
corporate offices, occasionally recognized indigenous sovereignty, resources,
and rights. Far more often it was employed to destroy or seriously diminish
them. Alexis de Tocqueville, one of the first commentators to note the almost
fervid concern that Americans had with law and the legal process, observed
its application to indigenous affairs. “The Americans,” said de Tocqueville,
in contrast to the “unparalleled atrocities” committed by the Spaniards, had
succeeded in nearly exterminating the Indians and depriving them of their
rights “with wonderful ease, quietly, legally, and philanthropically, without
spilling blood and without violating a single one of the great principles of
morality in the eyes of the world. It is impossible to destroy men with more
respect to the laws of humanity.”2
Coming to power during the bloody American Revolution and anxious to
establish the legitimacy of their new state in the court of world and American
settler opinion, U.S. policymakers, in constructing their framework for a
democratic society, fervently supported a social contract that theoretically
recognized the rights of virtually everyone. With sufficient flexibility of
interpretation, the same contract allowed the oppression of basic human
rights of women and minorities, indeed of any non-whites who lacked the
proper skin color, class, and social connections to profit from the expansion
of the state.
Native nations, because of their preexistence, political and economic
independence, and early military capability, won a degree of respect from
colonizing European nations and later the United States that African slaves
and women could not obtain. Simultaneously, however, the American public
stressed the tribal nations’ allegedly inferior cultural, political, technological,
and social status in relation to Euro-Americans. This schizophrenic
mindset evidenced itself in U.S. Indian law in three distinctive yet interrelated
paradigms or predispositions. The three are distinctive in the sense
that their foundations lie in different sources, time periods, and motives.
They are interrelated because underlying each is the same foundation of
colonial and ethnocentric/racist assumptions. The three paradigms can be
summarized by three keywords: treaties, paternalism, and federalism.
The treaty paradigm deems law the most effective instrument to ensure
justice and fairness for aboriginal people. Here, the federal courts and the
political branches formally acknowledged tribal nations as distinctive political
bodies outside the scope of U.S. law or constitutional authority. The
most basic assumption of this viewpoint was that treaty considerations
(i.e., ratified treaties or agreements) were the only appropriate and legitimate
instruments by which to engage in and determine the course of
diplomacy between indigenous communities and the United States. As
only nations may engage in treaties, the constituent states were reduced to
being observers and could not interfere in the nation-to-nation relationship
without federal and tribal consent.
When federal lawmakers and jurists acted in accordance with the treaty
paradigm, as they did in enacting the Northwest Ordinance of 1787 and
in cases such as Worcester v. Georgia (1832), The Kansas Indians (1867), and
2 Alexis de Tocqueville, Democracy in America, vol. 1, edited by J. P. Mayer (Garden City,
NY, 1969), 339.
Ex parte Crow Dog (1883),3 the United States was formally acknowledging
that tribes were separate and sovereign nations and that the treaties that
linked the two sovereigns, much more than being mere contracts, were
the supreme law of the land under Article VI of the Constitution. Under
this disposition, the federal government’s actions generally left indigenous
nations free of the constitutional constraints applicable to the states and to
the federal government itself. Early interactions under the treaty paradigm,
then, granted both explicit and implicit recognition to legal pluralism,
even though the language used in the various policies, laws, and cases still
sometimes contained racist and ethnocentric discourse that perpetuated
stereotypes about indigenous peoples.
The other two paradigms, of federalism and of paternalism, were far more
commonly used throughout the period under examination – and beyond –
to justify federal and state laws and court decisions that had devastating
consequences for indigenous collective and individual rights. The consequences
were so severe, in part, because neither of these frameworks gave
any consideration whatsoever to native values, laws, or morals.
When the United States operated in accordance with the paradigm of
federalism, the law was perceived as the prime mechanism for furthering
the political and economic development and territorial expansion of
the United States as a nation in conjunction with its constituent states.
This view of the law was maintained notwithstanding the simultaneous
presence on the North American continent – in fact and in law – of
aboriginal nations, each intent on maintaining its own political and
economic development and historic territories. The federalism paradigm
was inward-looking, concentrating its gaze on the Euro-American political
community. It treated tribal nations largely as obstacles to that entity’s
self-realization, otherwise unseen and unheard. This paradigm was very
much in evidence prior to the Civil War.
When operating in accordance with the paradigm of paternalism, the
United States tended to portray itself as a deeply moralistic, civilized, and
Christian nation, virtually always above reproach. This view predominated
from the 1860s into the 1930s, when the federal government inaugurated
the Indian reservation program, established boarding schools, allotted
Indian lands, and forcibly sought to acculturate indigenous peoples.
Deeming Indian persons and nations culturally inferior, the law became an
essential instrument in moving them from their uncivilized or “primitive”
status to mature civility. The United States envisioned itself as a benevolent
“guardian” to its naïve Indian “wards”; their cultural transformation was
3 31 U.S. (6 Pet.) 515; 72 U.S. (5 Wall.) 737; 109 U.S. 556.
considered inevitable. The only question was whether the process would be
achieved gradually or rapidly.
Fundamentally, the various processes used by federal and state officials
and corporate powers under the three paradigms brought about the cultural
genocide, segregation, expulsion, and coerced assimilation of native peoples.
Of these, coercive assimilation – the effort to induce by force the merger
of politically and culturally distinctive cultural groups (tribal nations) into
what had become the politically dominant cultural group (Euro-American
society) – has been the most persistent process employed by U.S. lawmakers.
The most vigorous and unapologetic manifestations of forced assimilation
occurred during the latter part of the nineteenth century and into the 1920s.
The Supreme Court sanctioned the denial of treaty rights, the confiscation
of Indian lands, and a host of other coercive intrusions on the tribes by
its creation of a new and wholly non-constitutional authority, Congressional
plenary power, which it defined as virtually boundless governmental
authority and jurisdiction over all things indigenous. While federal power
is rarely wielded so crassly today, both the Supreme Court and the Congress
continue to insist that they retain virtually unlimited authority over tribal
nations and their lands.
The three paradigms or predispositions described here – treaties, federalism,
and paternalism – have successively filled the imaginative field
in which U.S. lawmakers and politicians operated during the nineteenth
century and after and, to a real extent, in which they still operate today.
Indigenous nations at the beginning of the nineteenth century were generally
recognized by the United States as political sovereigns and territorial
powers, even though they were usually deemed to be culturally and technologically
deficient peoples. Between 1790 and 1920, tribal nations and their
citizens experienced profound shifts in their legal and political status: from
parallel if unequal sovereigns to domestic-dependent sovereigns; from relatively
autonomous to removable and confinable entities, then to ward-like
incompetents with assimilable bodies; and then, finally, to semi-sovereign
nations and individuals entitled to degrees of contingent respect for their
unique cultural, political, and resource rights, but only through the condition
of attachment to land, which in turn meant continued subordination
to an overriding federal plenary control.
These oscillations in the fundamental legal and political status of indigenous
peoples confirm federal lawmakers’ and American democracy’s inability
or unwillingness to adopt a consistent and constitutionally based
approach to native peoples’ sovereignty and their distinctive governmental
rights and resources. The successive changes arise from Euro-American
perceptions of aboriginal peoples – albeit perceptions with all too real consequences
– rather than from the actualities of aboriginal peoples, how they
“really are.” They lead us to the conclusion that the United States has consistently
refused to acknowledge the de facto and de jure legal pluralism
that has always existed in North America. The federal government has still
to live up even to the potential outlined in many treaties, the Constitution,
and the Bill of Rights, far less the reality of what is written there.
As the discussion of the treaty paradigm will show, indigenous law and
sovereignty occasionally have been recognized in U.S. law. They continue
to play an important, if variable, role in structuring tribal political and
economic relations with the United States and the several states. A number
of commentators have observed that recognition and support of the
indigenous legal and cultural traditions of tribal nations are critical if a
democracy of law is ever to be achieved in the United States. Despite the
remarkable efforts of tribal nations to retain and exercise essential components
of their cultures and traditions, the political, legal, economic, and
cultural forces wielded by non-Indians have made it impossible for tribes
to act unencumbered. Yet their traditions remain “deeply held in the hearts
of Indian people – so deeply held, in fact, that they retained their legal
cultures in the face of U.S. legal imperialism, creating a foundation for a
pluralist legal system in the United States today.”4 It is unfortunate that
the Euro-American law that has occasionally supported tribal sovereignty
has, so much more often, diminished it.
I. PARALLEL SOVEREIGNS: TRADE, TRUST, AND TREATY
RELATIONS, 1790–1820
Cyrus Griffin, the President of Congress, announced on July 2, 1788, that
the Constitution had been ratified by the requisite nine states. Federal
lawmaking might then proceed. At that time, however, a significant body
of law was already in existence, developed by Great Britain, France, Spain,
Sweden, Russia, and Holland, and by the American colonies and by the
Continental Congress, in their respective dealings with one another and
with aboriginal nations. This body of multinational law incorporated many
of the basic political and legal promises that the United States would later
use to construct its relationship with indigenous governments. The United
States had inherited the idea of using law in its dealings with tribes from
predecessor European colonial governments.
Each of the colonial powers had exhibited distinctive political, social,
and cultural traits in their interactions with the various indigenous nations
they encountered, but certain principles and policies had been applied in
common by the end of the eighteenth century and would be continued by
4 Sidney Harring, Crow Dog’s Case (New York, 1994), 24.
the United States. First was the doctrine of discovery, which had arisen in
the fifteenth century from Catholic Papal bulls and European monarchical
claims. The discovery doctrine held, in its most widely understood definition,
that European explorers’ “discovery” of lands gave the discovering
nation (and the United States as successor) absolute legal title and ownership
of the soil, reducing the indigenous occupants to mere tenants holding
a lesser beneficial interest in their original homelands. Conversely, discovery
also was defined as an exclusive and preemptive right that vested in the
discovering state nothing less than the right to be the first purchaser of any
lands the native owners might decide to sell. Here, discovery is a colonial
metaphor that gave the speediest and most efficient discovering nation the
upper hand in its efforts to colonize and exploit parts of the world hitherto
unknown to Europeans. It was a means of regulating competition between
vying European nations. Discovery also had symbiotic links to the doctrine
of conquest: the acquisition of territory by a victorious state from a defeated
entity in war.
Second came the trust doctrine, also known as the trust relationship. Like
discovery, trust arose in the early years of European discovery and settlement
of the Americas and can be traced back to Catholic Papal bulls. This doctrine
holds that European nations and their representatives, as allegedly superior
peoples, had a moral responsibility to civilize and Christianize the native
peoples they encountered. Discovery and trust are fundamentally related
concepts, with the “discoverer” having the “trust” obligation to oversee the
enlightenment and development of the aboriginal peoples, since natives
were not conceived as sufficiently mature to be the actual “owners” of their
own lands.
Third was the application of a doctrine of national supremacy in matters
of European (and later American) political and commercial relations with
tribal nations. The regulation of trade and the acquisition or disposition
of indigenous lands were to be managed by the national government and
not left to constituent subunits of government, or to land companies or
individuals.
Finally, because of the military and political capability and territorial
domain controlled by the native nations, diplomacy in the form of treaties
or comparable contractual arrangements was considered the most logical
and economical method of interaction with indigenous peoples.
Endorsement of these important principles and policies – discovery, trust,
federal supremacy, and diplomacy – was evident in several early actions by
U.S. lawmakers. A first instance occurred in 1787, when the Confederation
Congress enacted the Northwest Ordinance. The Ordinance defined
a Northwest Territory in the Great Lakes region and set up guidelines for
political and economic development of the region that would eventually
lead to the creation of new states. Simultaneously, and in tension with those guidelines, Article 3
of the Ordinance contained a striking and unusual passage on the moral or
trust relationship that the United States would follow in its dealings with
Indian peoples. It reads in part:
The utmost good faith shall always be observed towards the Indians, their lands
and property shall never be taken from them without their consent; and in their
property, rights and liberty, they never shall be invaded or disturbed, unless in just
and lawful wars authorized by Congress; but laws founded in justice and humanity
shall from time to time be made, for preventing wrongs being done to them, and
for preserving peace and friendship with them . . . 5
The Northwest Ordinance, that is, embraced fundamentally contradictory
policies. On the one hand, the United States sought to assure tribes
that their lands and rights would be respected, except when “just and lawful
wars” were fought. On the other hand, the lawmakers had already essentially
distributed those same lands to the present and future white settlers,
intent on continuing their inexorable march westward. The contradiction
would endure.
The new federal Constitution was adopted two years later, in 1789. It
contained four major constitutional clauses that directly implicated the
indigenous/U.S. relationship: Commerce, Treaty-Making, Property, and
War and Peace. These clauses affirmed that the national government –
and Congress in particular – had exclusive authority to deal with indigenous
nations in regard to trade and intercourse, diplomacy (to fight or
parley), and land issues. While each would prove significant, the Commerce
Clause, which empowers Congress to “regulate commerce with foreign
nations . . . states . . . and with the Indian tribes,” was the only source
of explicit powers delegated to the legislative branch. In theory, the clause
should not have extended to Congress any greater authority over tribes
than it exercised over states. In both historical and contemporary practice,
however, such has not been the case. As tribal dominion waned during the
course of the nineteenth century, the federal government used the Commerce
Clause to justify many new assertions of national authority over
tribes. It also developed an entirely novel non-constitutional authority –
plenary power – by which Congress, by definition, was granted absolute
control over all indigenous affairs. By the latter part of the century, these
legal tools enabled federal lawmakers to extend their reach over indigenous
affairs to remarkably oppressive levels.
Beginning in 1790, at the behest of the president, the constitutional provisions
most directly related to Indian affairs were given statutory expression
5 1 Stat. 50 (1789).
in a series of laws later codified in 1834 as the Indian Trade and Intercourse
Acts.6 The acts devoted considerable attention to maintaining peaceful relations
with tribes by agreeing to respect Indian land boundaries and fulfill
the nation’s treaty and trust obligations to tribes. These comprehensive federal
Indian policies also contained clauses requiring federal approval of any
purchase of tribal land, regulated the activities of white traders in Indian
Country through licensing, and imposed penalties for crimes committed
by whites against Indians. Importantly, the laws were aimed at shoring up
alliances with the tribes and evidenced respect for treaty rights by restricting
states, traders, and private citizens from engaging with tribes on their
own account. The Trade and Intercourse Acts mainly embodied Congress’s
legal and constitutional role as the primary agent in charge of overseeing
trade and fulfilling the federal government’s treaty obligations. They had
very little impact on the internal sovereignty of indigenous nations.
In 1819, however, Congress stepped far beyond its designated role by
adopting legislation explicitly designed to “civilize” Indian peoples. Appropriately
entitled “An Act making provisions for the civilization of the Indian
tribes adjoining the frontier settlements,”7 the statute was significant for
three reasons. First, it meant that Congress had officially decided to seek the
cultural transformation rather than physical destruction of native peoples.
Second, it signaled a bifurcation in Congress’s responsibilities. While still
charged with appropriating funds to fulfill the nation’s treaty requirements
to tribes considered as separate entities, it had now opted to pursue a parallel
policy of civilization, assimilation, and absorption of the Indian into
the body politic of the United States. Finally, Congress was assuming the
power to legislate for persons who were not within the physical boundaries
of the United States; the majority of native peoples still lived outside the
demarcated borders of the United States.
The first responsibility, upholding treaty requirements, was constitutionally
grounded in the Treaty and Commerce clauses. The second responsibility,
or rather unilateral declaration, was an entirely different matter –
embraced by Congress wholly gratuitously, without reference to the
Constitution’s definition of its capacities. The intersection between what
Congress was legally required to do in relations with Indians and what it
chose to do created a powerful tension that has affected the legal and political
relationship of tribes and the United States since that time. What happens,
for instance, when there is a conflict between the two sets of responsibilities?
This is no empty question. As we shall see, tribes’ treaty-reserved rights to
designated communal land holdings, which it was Congress’s responsibility
to uphold, would be countermanded by the same Congress when it pursued
6 4 Stat. 729 (1834). 7 3 Stat. 516 (1819).
plans to “civilize” the community of land holders by allotting them those
same lands (or a fraction thereof) as individual Indian property holders.
II. EARLY FEDERAL RESTRICTIONS OF TRIBAL PROPERTY
AND SOVEREIGNTY: 1820s–1830
Even before the 1819 Civilization Act, Congress had signaled both in its ratification
of certain treaties and its passage of particular statutes that indigenous
peoples and their separate territories were within the reach of American
citizenship and law. At this time, Congressional intent was couched in noncoercive
terms to minimize the direct impact on tribal sovereignty. For
example, Article Eight of the 1817 Treaty with the Cherokee – an early
removal agreement – specified that those Cherokee who wished to remain
on lands surrendered to the federal government were to receive a life-estate
to a 640-acre individual “reservation” and also, if they desired it, American
citizenship. The provisions were repeated in Article Two of the Cherokee
Treaty of 1819.
In part because of the increasing numbers of whites moving onto tribal
lands, Congress also expressed its intention to enforce a measure of federal
criminal law inside Indian Country. An act of March 3, 1817, declared that
interracial crimes involving Indians and non-Indians committed within
Indian Country would be punished in the same fashion as the same offenses
committed elsewhere in the United States. The statute gave federal courts
jurisdiction over those indicted under its provisions. Importantly, it did not
apply to intraracial (Indian on Indian) crimes. Tribes were assured that no
extant treaty rights were to be adversely affected by the law.
Although U.S. Indian policy had been nationalized from the beginning
of the American Republic, federal Indian law grew unevenly in the face
of persistent legal wrangling between federal and state officials over which
level of government would in fact control the nation’s relationship with
tribes. Several of the thirteen original states, especially Georgia and New
York, homes to the powerful and politically astute Cherokee Nation and
Iroquois Confederated tribes, respectively, viewed themselves as superior
sovereigns both in relation to the indigenous nations residing within “their”
borders and to the federal government. The politicians in these two states
continued to negotiate treaties with the tribes as if the Commerce and
Treaty clauses did not exist or simply did not apply to their actions.
This amalgam of states, with their expanding populations and economies;
tribes, with their desire to retain their lands and treaty rights free of state
and federal intrusion; and the federal government, with its contradictory
impulse of supporting national and state territorial and economic expansion,
but also responsible to tribes under treaty and trust obligations, proved
a most volatile mix. By the end of the 1830s, the volatility in tribal-federal-state
relations had worked out mostly to the detriment of the tribes: federal
and state sovereignty were reinforced, territorial expansion encouraged,
indigenous sovereignty and property rights weakened. Tribal rights and
lands were not, of course, disregarded entirely. They were, however, sufficiently
diminished that expropriation of Indian lands by land speculators,
individual settlers, state governments, and federal officials could continue
without letup. All of this was accomplished, in Alexis de Tocqueville’s
words, “in a regular and, so to say, quite legal manner.”8
Tocqueville’s “legal manner” – that is to say, the legal underpinnings of
the indigenous/non-indigenous relationship – was largely the construction
of the U.S. Supreme Court, led by Chief Justice John Marshall. Specific
decisions were absolutely basic to the Court’s achievement: Johnson v.
McIntosh (1823), which dealt with tribal property rights; Cherokee Nation v.
Georgia (1831), which determined tribal political status in relation to the
federal government; Worcester v. Georgia (1832), which focused on tribal
political status in relation to the states; and Mitchel v. United States (1835),
which addressed the international standing of tribal treaty rights. In fact, the
cases would prove far more important for their long-run impact on tribal
sovereignty as precedents and as legal rhetoric than for the specific issues
each one addressed. At the time, and even as the national government’s
political branches were preparing to force the removal of native nations
from their ancestral lands, the federal judiciary’s rulings were supportive of
as well as detrimental to indigenous rights.
In McIntosh (1823), the Court institutionalized a revised doctrine of discovery
and engaged in a convoluted discussion of the doctrine of conquest.
The results were oppressive to the sovereignty and proprietary rights of
tribes. According to Chief Justice John Marshall, writing for the Court,
not only had the discoverer gained the exclusive right to appropriate tribal
lands, but the tribes’ sovereign rights were diminished and their right to
sell land to whomever they wished fatally compromised. Marshall acknowledged
that both the discovery and conquest doctrines were self-serving, yet
relied on them nonetheless. “However extravagant the pretension of converting
the discovery of an inhabited country into conquest may appear,” he
ruled, “if the principle has been asserted in the first instance, and afterwards,
sustained; if a country has been acquired and held under it; if the property
of the great mass of the community originates in it, it becomes the law of
the land, and cannot be questioned.”9 The Court transformed these extravagant
theories into legal terms for largely political and economic reasons: the
increasing indigenous resistance to land loss, uncertainty over Spain,
8 de Tocqueville, Democracy in America, 324.
9 21 U.S. (8 Wheat.) 543, 591.
France, and Russia’s long-term intentions on the continent, and its
own desire to formulate a uniform American law of real property. Still,
although it denied that tribes could alienate their lands to whomever they
wished, the Court conceded that the Indians retained a right of perpetual
occupancy that the United States had to recognize. It also determined that
the federal government had to secure Indian consent before it could extinguish
Indian occupancy title. In these respects the Court displayed a desire
to adhere, at least in theory, to just and humane standards that recognized
the prior existence of tribes and a measure of their property rights, even
as it embraced the ethnocentric view of the technological and proprietary
superiority of Western nations.
In December 1823, some nine months after McIntosh, President James
Monroe acted at the international level to solidify American hemispheric
hegemony in a fashion that also confirmed the domesticated status of indigenous
property. The new policy was propounded in his Annual Message to Congress,
becoming what would be called the Monroe Doctrine. Drafted
partially as a response to Russia’s intention to extend its settlements southward
from Alaska with an eye to joining with France, Austria, and Prussia
in an attempt to force newly independent Spanish-American republics to
return their allegiance to Spain, the Monroe Doctrine declared U.S. opposition
to European meddling in the Americas. The political systems of the
American continents were fundamentally different from those of Europe,
Monroe warned. The United States would consider “as dangerous to our
peace and safety” any attempt by European powers to extend their authority
in the Western hemisphere. Reciprocally, the United States would
not interfere with existing European colonies in the Americas or in the
internal affairs of Europeans, or participate in European wars of foreign
interests.
The combined effect of the McIntosh ruling and the Monroe Doctrine did
not bode well for indigenous property or sovereignty. Meanwhile, Eastern
states, clamoring for additional Indian lands and resources for their burgeoning
populations and to rid themselves of an indigenous presence, gained a
major ally when Andrew Jackson was elected president in 1828. The stage
was set for a major test of American democracy, federalism, and the doctrine
of separation of powers. The indigenous community that would bear the
brunt of much of this concentrated attention was the Cherokee Nation of
the Southeast.
III. THE CHEROKEE NATION, JOHN MARSHALL,
AND THE LAW
The Cherokee were one of the first native peoples to succeed in fusing ancient
tribal law ways with Anglo-American legal institutions. This acculturation
process, in which the Western legal system was adapted to Cherokee needs,
was actually underway by the early 1820s. In that decade alone the Cherokee
crafted a constitution loosely modeled after that of the United States,
produced a written version of their language, and established the first tribal
newspaper. In 1827, they formally announced their political independence,
a fact already well understood by the federal government as evidenced by
the fourteen ratified treaties signed with the tribe. The Cherokee emphatically
declared that they were an independent nation with an absolute right
to their territory and sovereignty within their boundaries.
The Cherokee declaration enraged Georgia’s white population and state
officials. Driven by the recent discovery of gold on tribal lands, but compelled
even more by a conception of state sovereignty that precluded limitations
imposed by the federal government, let alone a tribal people, Georgia
legislators enacted a series of debilitating, treaty-violating laws designed
to undermine Cherokee self-government. These acts parceled out Cherokee
lands to several counties, extended state jurisdiction over the nation, and
abolished Cherokee laws.
Cherokee appeals to President Jackson and Congress to intercede failed,
and the tribe filed suit in the Supreme Court against Georgia, praying for
an injunction to restrain Georgia’s execution of the laws aimed at their legal
dismemberment. Chief Justice Marshall rendered the Court’s fragmented
and ambivalent ruling on March 18, 1831 (Cherokee Nation v. Georgia). A
more fascinating case could hardly be imagined, Marshall noted. But first the
Court had to ascertain whether it had jurisdiction to hear the case. Since the
Cherokee were suing as an original plaintiff, the Court had to decide whether
they constituted a “foreign state.” After lengthy ruminations, Marshall held
that the Cherokee Nation was not a foreign state and therefore could not
maintain an action in the federal courts.
If they were not a foreign state, what were they? Marshall refused to accept
either of the views of tribes available at the time – as foreign nations or
subject nations. As “subject” nations, they would have been at the mercy of
the states; as “foreign” nations, they would have been independent of federal
control. Instead, Marshall generated an extra-constitutional political status
for tribes by characterizing them as “domestic dependent nations.” This
diluted and ambiguous status has had a lasting effect on all tribes, even
though technically it applied only to the Cherokee. First, the description
precluded tribes from bringing original actions to the Supreme Court. And
second, since they were denied status as “foreign nations,” the tribes were
effectively barred from benefits accorded to fully recognized sovereigns
under international law.
Building on the legal construct of “discovery” that he had articulated in
McIntosh, Marshall said that tribes occupied territory to which the United
States asserted a superior title. He then added extremely problematic wording
that would prove highly detrimental to tribes. Tribes were “in a state
of pupilage. Their relation to the United States resembles that of a ward to
his guardian.”10
Overall, the Court was highly fragmented. Six Justices (the seventh,
Justice Duvall, was absent) presented four different sets of views on tribal
status. Justice Johnson held that tribes lacked sovereignty but possessed an
inherent political power that could mature into sovereignty later. Justice
Baldwin simply said tribes had no sovereignty. Justices Thompson and
Story believed that tribal status paralleled that of foreign nations. Justice
McLean joined Marshall in his description of tribes as domestic dependent
nations.
On the jurisdictional question the majority was thus against the Cherokee.
On the merits, however, the Court divided four to two for the Cherokee.
The Chief Justice, in fact, insinuated that he sided with the minority on
the merits – he encouraged Justices Thompson and Story to write out their
dissenting views. The Chief Justice even suggested a method of getting a
case properly before the Court in the future.
Marshall would have the opportunity to reveal his innermost feelings
sooner than he anticipated. Worcester v. Georgia (1832), the third of the
Court’s seminal Indian cases, is often hailed as the most persuasive and
elaborate pronouncement of the federal government’s treaty-based relationship
with tribal nations. Interestingly, the Cherokee were not direct parties
to this suit. And while Worcester is generally considered the strongest defense
of tribal sovereignty, it may be understood more accurately as a case that
supports federal sovereignty over state sovereignty. The principals in the
case were Christian missionaries led by Samuel A. Worcester and Elizur
Butler, and the State of Georgia. Georgia had enacted a law in 1831 that
prohibited whites from entering Cherokee country without first securing a
state license. Worcester and Butler had entered Cherokee territory without
state authorization, but with tribal and federal approval. They were arrested
and sentenced to four years in prison for violating state law. The missionaries
immediately retained lawyers who brought suit against Georgia in
federal court on the grounds that Worcester and Butler were agents of the
United States. This raised the question of federal supremacy over state law.
Here was the test case for which Marshall had been waiting.
Unlike his ambiguous opinion in Cherokee Nation, Marshall emphatically
declared that all of Georgia’s Indian laws violated the Constitution, federal
statutes, and the treaties between the United States and the Cherokee.
Lifting text almost verbatim from Justice Thompson’s dissent in Cherokee
10 30 U.S. (5 Pet.) 1, 17.
Nation on the international status of tribes, Marshall held that treaties and
the law of nations supported Cherokee sovereignty and independence, even
though the Cherokee were no longer as powerful militarily as they had been
and were now under the political protection of the federal government.
Worcester supposedly settled the issue of federal preeminence over state
power regarding Indian tribes. The Chief Justice based much of his defense
of federal power on his view of Indian tribes “as distinct, independent political
communities.”11 He noted that the War and Peace, Treaty-Making,
and Commerce Clauses provided the national government with sufficient
authority to regulate the nation’s relations with tribes. Marshall also
attempted to rectify his previous equivocations on the doctrine of discovery,
which he now said was nothing more than an exclusive principle limiting
competition among European states that could not limit Indian property
rights. He also clarified the Court’s view of the actual political status of
tribes. In Cherokee Nation, tribes were called “domestic dependent nations,”
not on par with “foreign” states. In Worcester, however, tribes were referred
to as “distinct, independent communities,” properly identified and treated
as nations.
Although the Court overturned Georgia’s actions and ordered Worcester’s
release, he remained in prison and was released only when a later deal
was struck. More significantly and tragically, however, the majority of the
Cherokee people and more than 100,000 other Indians representing more
than a dozen tribes were eventually coerced into signing treaties that led to
their relocation to Indian Territory west of the Mississippi River.
Three years later, in Mitchel v. United States (1835), the Supreme Court
issued another important opinion on indigenous rights. It has received little
attention from legal and political commentators, in large part because most
writers have concentrated their attention on the so-called Marshall trilogy –
McIntosh, Cherokee Nation, and Worcester. In Mitchel, possibly because he was
near retirement (he stepped down in July 1835), Marshall opted not to
write the decision and assigned it to Justice Henry Baldwin.
Mitchel should be added to that short list of Supreme Court rulings that
exhibit some support for tribal sovereignty and indigenous land rights.
The Court’s holding fundamentally contradicts, without expressly overruling,
the doctrines espoused in McIntosh. The ruling asserted the following
key principles: first, the doctrine of discovery lacks credibility as a
legal principle; second, tribes are possessors of a sacrosanct land title that
is as important as the fee-simple title of non-Indians; third, tribes have
the right to alienate their aboriginal property to whomever they wish;
fourth, the alleged inferiority of tribal culture does not impair aboriginal
11 31 U.S. (6 Pet.) 515, 559.
sovereignty; and fifth, tribes are collective polities and they and their members
are entitled to international law’s protections of their recognized treaty
rights.
Tribes emerged from the Marshall era with a contradictory political status.
They had been labeled both domestic dependent nations and distinct
and independent political communities. The assertion that tribes were independent
polities most closely approached their actual situation. But the
cases’ confused and contradictory analogies would prove highly problematic,
resulting in persistent confusion about exactly where – if anywhere –
tribal nations fit in the American constitutional landscape.
The long-term consequences of Marshall-era principles – discovery, the
analogy of wardship, and domestic indigenous status – have been their
distinct diminution of tribal sovereignty. Other Marshall era ideas – the
supremacy of Indian treaties, the independence of tribal communities, the
exposure of discovery, the exclusive jurisdiction of the federal government,
and the sacredness of Indian title – offer tribes means to retain some measure
of legal and political sovereignty.
IV. TRIBAL SOVEREIGNTY AND WESTERN EXPANSION,
1835–1860s
The three decades between Mitchel (1835) and the inception of the American
Civil War (1861) were tumultuous years in American history, marked from
the beginning as an era of massive Indian removal. These were the opening
years of “Manifest Destiny,” when the United States acquired political
control of large parts of the Far West and, unexpectedly, encountered a new
Indian frontier. The new territories included Texas (1845), Oregon (1846),
more than one million square miles of the Southwest and West obtained
from Mexico by the Treaty of Guadalupe Hidalgo (1848), and an additional
29,640 square miles acquired from Mexico in the Gadsden Purchase (1853).
Within the span of a decade, the size of the United States had increased by
73 percent.
These vast conquests and purchases resulted in the physical incorporation
into the United States of scores of previously unknown native nations.
The inevitable cultural and territorial collision resulted in a Congressional
policy of containment, specifically the establishment of Indian reservations.
Between the 1830s and 1850s, the reservation policy remained in an experimental
stage. It would not be implemented fully until the 1860s. In fact,
treaties rather than Congressional legislation formed the basis of the law
during this era of rapid expansion. That said, the broad outline of U.S. Indian
policy – still visible – can be found in two comprehensive laws enacted by
Congress, on June 30, 1834. The first measure was the final act in a series
of statutes that regulated trade and intercourse with tribes.12 The second,
enacted the same day, provided for the organization of the Department of
Indian Affairs.13 By adopting these laws, Congress developed a set of institutions
and procedures that clarified what had been a thoroughly ill-defined
structural relationship between the United States and tribal nations.
By the late 1840s, two additional statutes had been enacted that were to
have a lasting effect on tribes. The first amended the 1834 act that had
organized the Department of Indian Affairs.14 The new measure
made two significant changes in federal Indian policy. First, it stiffened
and broadened preexisting Indian liquor legislation, which had long
outlawed liquor in Indian country (a prohibition that would remain in
effect until 1953). Second, it signaled a profound change in the manner
and to whom the federal government would distribute moneys owed to
native nations. Previously those funds were distributed to tribal chiefs or
other leaders. Section 3 declared that moneys owed to Indian nations would
instead be distributed directly to the heads of individual families and other
individuals entitled to receive payments. Ostensibly designed to reduce the
influence of white traders on tribal leaders, this amendment, in effect, gave
federal officials tremendous discretionary authority on the question of tribal
membership, insofar as the disposition of funds was concerned. According
to legal scholar Felix Cohen, this was the first in a series of statutes aimed
at individualizing tribal property and funds in a way that diminished the
sovereign character of tribal nations.
The second act (1849) established the Department of Interior.15 It contained
a provision calling for the transfer of administrative responsibility
for Indian affairs from the War Department to the new department. Supporters
of this move believed, prematurely, that Indian warfare was ending
and that responsibility for Indian affairs should therefore be placed in civilian
hands. Congress retained constitutional authority to deal with tribal
nations, but the legislature more often deferred to the president and the
executive branch, especially in the sensitive area of Indian treaties, which
were being negotiated by the dozens during this period.
Justice Roger Taney and Indian Law
Coinciding with the emergence of a more activist national state – legislative
and executive – on tribal questions, the Supreme Court under Marshall’s
successor, Chief Justice Roger Taney, began to produce legal doctrines that
confirmed the suppression of the treaty paradigm in favor of “federalism.”
12 4 Stat. 729 (1834). 13 4 Stat. 735 (1834).
14 9 Stat. 203 (1847). 15 9 Stat. 395 (1849).
Taney enunciated the Court’s embrace of this new perspective on tribal
political status in United States v. Rogers (1846). Ironically, this unanimous
decision, like the Marshall cases, also involved the Cherokee, even though
they were not directly a party to the suit.
William S. Rogers, a white man residing within Cherokee Territory, had
been indicted in a federal circuit court for the murder of Jacob Nicholson,
also a white man. The crime had occurred in Cherokee Country. A confused
circuit court sent the case to the Supreme Court on a certificate of division.
Taney, writing for a unanimous court, dramatically rewrote the history of
the legal and political relationship between tribes and the United States.
Contrary to Marshall’s fact-based Worcester opinion, Taney misrepresented
the basis of Cherokee title to their lands, proposing that their lands had
been “assigned to them” by the federal government and that they held title
only with the “permission” of the United States. The Cherokee and the
scores of other tribes then negotiating treaties with the United States were
no doubt shocked to hear Taney use the discovery doctrine in a way that
essentially denied any native proprietary rights at all. Removal, the Court
implied, not only vacated any rights Indians thought they might have had
in their original territories but it also offered them no substitute rights in
the “Indian territory” to which they had been forced to move.
Rogers was also the first Indian law opinion to outline explicitly the Court’s
“political question” doctrine. Political question doctrine holds that it is not
the province of the courts to render rulings on matters deemed essentially
“political” in nature. These are matters best left to the legislative and executive
branches. Describing Indians as an “unfortunate race,” Taney stated
that, even if Indians had been mistreated, “yet it is a question for the
law-making and political department of the government, and not the judicial.”
16 Along with the absence of land rights went the absence of conventional
legal means of redress. The political question doctrine would continue
to plague Indian legal efforts until it was formally disavowed in the 1980
Supreme Court ruling United States v. Sioux Nation.
Rogers is an appropriate representative of Supreme Court cases emphasizing
the federalism paradigm, by which federal dominance over tribes was
confirmed in virtually every respect – property, political status, and ethnic
identity. It is worth noting that ten years after Rogers, Chief Justice Taney’s
infamous Dred Scott opinion (1857) would refer to Indians as historically “a
free and independent people, associated together in nations or tribes” and
treated as foreign governments “as much so as if an ocean had separated the
red man from the white.”17 The description underscores the transformation
to which Rogers had contributed.
16 45 U.S. (4 How.) 567, 572. 17 60 U.S. (19 How.) 393, 404.
The Taney Court’s doctrines were particularly harmful to tribal sovereignty
because that Court was much more concerned than its predecessor
with protecting state authority within U.S. federalism. Historically, states’
rights activists have generally been less supportive of tribal rights because
of the geopolitical relationship between states and tribes (illustrated in
Georgia’s conflict with the Cherokee). Nevertheless, at this point most
tribal nations existed outside the scope of Anglo-American law. Before midcentury,
the law’s impact had been felt mostly by the Eastern tribes whose
experience with Euro-Americans dated to the pre-Revolutionary period.
Western expansion would rapidly terminate this geographical isolation. The
gradual encirclement of tribes by non-Indians, increased immigration, the
Civil War and Reconstruction, and burgeoning industrialization – fueled
in part by transcontinental railroads – all produced the circumstances in
which the federalism paradigm would wreak legal havoc on native nations.
V. ORIGIN AND SCOPE OF FEDERAL PLENARY (ABSOLUTE)
POWER: 1871–1886
From the late 1860s through the early twentieth century, the United States –
Congress in particular – was openly bent on the destruction of native
nations as identifiable cultural, sociological, and political bodies. The era
of Congressional unilateralism vis-à-vis indigenous peoples began during
Reconstruction; its clearest expression was a rider inserted in the Indian
Appropriation Act of March 3, 1871, which provided “That hereafter no
Indian nation or tribe within the territory of the United States shall be
acknowledged or recognized as an independent nation, tribe, or power with
whom the United States may contract by treaty.”18 Congressional unilateralism
culminated in 1906 in systematic efforts to terminate the sovereign
status of the Five Civilized Tribes in Indian Territory. Throughout, Congress
wielded self-assumed and virtually unrestrained powers over Indians that
could never have survived constitutional muster had they been asserted
against American citizens.
The year 1871 is important for a second reason besides Congressional
repudiation of formal treaty-making. In May of that year, two months after
the repudiation of treaty-making, the U.S. Supreme Court ruled in The
Cherokee Tobacco case19 that the 1868 Revenue Act, which contained a provision
imposing federal taxes on liquor and tobacco products in the United
States, had implicitly abrogated an 1866 Cherokee Treaty provision by
which Cherokee citizens were exempted from federal taxes.
18 16 Stat. 566; Rev. Stat. § 2079, now contained in 25 U.S.C. § 71.
19 78 U.S. (11 Wall.) 616 (1871).
For tribes, the conjunction was catastrophic. The treaty repudiation rider
effectively froze tribes in political limbo. They were no longer recognized
as polities capable of engaging in treaty-making with the federal government,
yet they remained separate sovereignties outside the pale of the U.S.
Constitution. Meanwhile, Cherokees who were not then American citizens
were now required to pay taxes to the federal government despite their noncitizenship,
their express treaty exemption, and their lack of Congressional
representation. Tribes and individual Indians were now bereft of legal or
political protection. The federal government could explicitly or implicitly
abrogate treaty provisions, and tribes had no recourse other than turn to
the very Congress that had stripped them of recognition. Following Rogers,
the Supreme Court deferred to the political branches on Indian matters,
declaring in effect that Congressional acts would prevail as if the treaties
were not even documents worthy of consideration.
In its 1871 rider, Congress had guaranteed the terms of treaties already
negotiated. The Cherokee Tobacco decision almost immediately put that guarantee
in doubt, announcing that treaty rights generally secured at the expense
of significant amounts of tribal land and the loss of other valuable
properties and freedoms could be destroyed by mere implication. The case
established the “last-in-time” precedent (that is, later statutes may override
earlier treaties) and also the rule that tribes are always to be considered
“included” in Congressional acts unless they are specifically “excluded” in
the language of the statute. And it disavowed the basic principle that specific
laws, like treaties that generate special rights, are not to be repealed
by mere implication of general laws.
With the treaty process essentially stymied and extant treaties now subject
to implicit disavowal, and with white settlers and land speculators
flooding into the far reaches of the West driven by powerful economic
motives and a sense of racial superiority, federal lawmakers struggled with
how best to support what they deemed the inevitable spread of capitalism
and Protestantism while still providing some degree of respect and protection
for tribal peoples and their dwindling lands. A loose coalition of
individuals and institutions that would come to be called the “Friends of the
American Indian,” consisting of law professors, Christian leaders, reformers,
leaders of the bar, and a few members of Congress, stood up against the
powerful economic and political interests intent on destroying, or at least
diminishing dramatically, the rights and resources of indigenous people.
This loose alliance of native supporters, Petra Shattuck and Jill Norgren
have written, “linked adherence to principles of rationality and morality
with the pragmatic needs of manifest destiny. Their debates proved a forceful
and convincing counterpoint to the popular clamor for the abrogation
of the legal and moral commitments of the past.”
The Friends of the American Indian may have helped ameliorate federal
policy, but they did not alter its direction (nor did they wish to). Assimilation
dominated federal Indian policy and law during the 1870s and into the
first two decades of the twentieth century. It rested on consistent adherence
to five basic goals: first, transform Indian men and women into agriculturalists
or herders; second, educate Indians in the Western tradition; third,
break up the tribal masses by means of individual allotment of tribal lands,
in the process freeing non-allotted land for white settlement; fourth, extend
U.S. citizenship to individual Indians; and fifth, supplant tribal customary
law with Euro-American law. Save for the last, these ideas had already been
well in evidence, but their implementation had been spasmodic. From the
1870s on, with Indians essentially immobilized on reservations and rendered
weak in the face of federal power by wars, alcohol, diseases, and displacement,
the guardian-like U.S. government and allied institutions –
notably the churches – could develop a more systematic and thorough
approach to the increasingly ward-like status of indigenous peoples.
In the 1880s, federal efforts to assimilate Indians took a variety of forms.
Prime among these were attempts to extend American law to reservations,
subdivide the Indians’ communal estate, and grant the franchise to individual
Indians. First, let us consider the application of Euro-American criminal
law to Indian Country.
Prior to the 1880s, as we have seen, relations between tribes and the
United States were largely determined by treaties and the policies outlined
in the trade and intercourse acts. Internal tribal sovereignty, especially
crimes between Indians, was largely untouched by federal law. The idea of
imposing federal criminal jurisdiction, however, slowly gained momentum
as Western expansion led to the encirclement and permanent penetration
of tribal lands by non-Indians. This “de facto” assimilation required a de
jure response, said the quasi-political Board of Indian Commissioners in
1871. Indians had to be brought under the “domination of law, so far as
regards crimes committed against each other” or the federal government’s
best efforts to civilize native peoples would be constrained.20
The first major case from the West involving the extension of Euro-
American law into Indian Country arose directly as a result of the ever
burgeoning numbers of whites settling on Indian lands. United States v.
McBratney (1882)21 involved the murder of one white man by another
within the boundaries of the Ute Reservation in Colorado. The Supreme
Court ruled that the equal footing doctrine – which holds that states newly
20 United States Board of Indian Commissioners. Annual Report (Washington, DC, 1871),
432.
21 104 U.S. 621.
admitted into the Union were on an “equal footing” with the original states
insofar as their political status and sovereignty were concerned – and the
absence of explicit statutory language providing for federal rather than
state jurisdiction regarding tribal lands gave state authorities jurisdiction
over the crime. Ignoring Ute sovereignty and denying federal jurisdiction,
the Court turned the Worcester principle of state non-interference in tribal
territory on its head. Operating from its version of the federalism paradigm,
it permanently transformed the tribal-state relationship by indicating that
subject matter and identity, not geography, would determine questions of
state jurisdiction.
The issue of Indian-on-Indian crime was next to arrive at the Supreme
Court. The landmark case Ex parte Crow Dog (1883) dealt with a Sioux
leader, Crow Dog, sentenced to death for the murder of a chief, Spotted
Tail. The high court, using the treaty paradigm, unanimously held that
the federal government lacked jurisdiction over crimes committed by one
Indian against another. The decision was an important, if stilted, statement
on tribal sovereignty. It served as the catalyst to jurisdictional changes
advocated by those anxious to have federal law supplant tribal law, the
final tipping-point toward a half-century of assimilation. A mere eighteen
months later, Congress repudiated the treaty-based Court decision by
attaching a legislative rider to the general appropriation act of March 1885
that extended federal criminal jurisdiction over Indians in matters involving
seven major crimes – murder, manslaughter, rape, assault with intent
to kill, arson, burglary, and larceny.22
Congress’s direct attack on tribal sovereignty was not fatal to tribal
self-determination, but enactment of the major crimes rider set a precedent
for future Congressional intrusions. There was, however, some doubt
as to the act’s constitutionality. This became the central issue in United
States v. Kagama (1886),23 one of the most important Indian law decisions
issued by the Court. Kagama was a Hoopa Indian (northern California)
convicted of killing another Indian on the Hoopa Reservation. Kagama
and his attorneys argued that the Major Crimes Act was unconstitutional
and should be voided because Congress’s Commerce Clause power did not
authorize it to enact laws regulating Indian-on-Indian crimes occurring
within Indian Country. The Supreme Court upheld the constitutionality of
the Major Crimes Act, but rejected both the Commerce Clause and Property
Clause arguments suggested by the government’s lawyers. Instead, the
Court denied tribal sovereignty by fashioning a set of arguments grounded
in federalism and U.S. nationalism and steeped in ethnocentrism. The Court
embraced geographical incorporation: because the United States “owned”
22 23 Stat. 362, 385 (1885). 23 118 U.S. 375.
the country, and because Indians lived within its boundaries, the United
States could extend an unbridled power over Indians, based on the doctrine
of discovery. The justices also embraced Indian wardship: Indian dependency
and helplessness necessitated unlimited exercise of federal guardianship
– what would later be termed “plenary” power. In other words, the
Court determined that, based on its ownership of land, the federal government
had virtually unfettered power over tribes. And in declaring Indians
“wards of the nation,” indigenous peoples had been rendered subject to a
plenary Congressional authority to protect and defend its “dependents,”
exercised as Congress saw fit.
Ironically, in Kagama the Supreme Court held a federal statute applying to
Indians to be constitutional while rejecting the only available constitutional
clauses that would have rendered it constitutional. That the court could, in
effect, step outside the Constitution to hold a law constitutional is quite a
remarkable feat. Why it did so, however, is clear. It sought to legitimize
the Congressional policy of coercive assimilation and acculturation of tribal
citizens into the American polity. The Court developed the extra-legal
sophistry of unbounded proprietary authority and wardship to further the
assimilative process while at the same time acting to “protect” tribes from
uninvited and intrusive state attacks on tribes and their dwindling resources.
Having addressed the subject of criminal jurisdiction, the federal government
then acted on the question of extending federal citizenship to Indians.
Many of the “Friends” – reformers and policymakers – believed that it
was unfair to impose Euro-American norms of criminality and punishment
without allowing Indians access to the full benefits and responsibilities
accompanying American citizenship. Hence they advocated extending the
franchise to Indians.
The first major test of whether the United States was prepared to follow
the reformers’ lead came in Elk v. Wilkins (1884).24 John Elk had voluntarily
emigrated from his tribe (his tribal affiliation was never mentioned) and
moved to Omaha, Nebraska. After a time, Elk went to register to vote,
claiming that the Fourteenth and Fifteenth Amendments gave him U.S.
citizenship. His registration application was rejected by Charles Wilkins,
the city registrar, on the grounds that Elk, as an Indian, was not a citizen of
the United States. The case found its way to the Supreme Court, where Elk’s
constitutional claims were rejected. As an American Indian he belonged to
an “alien nation.” The majority maintained that, even if individual Indians
met basic citizenship requirements, as Elk had done, they still could not
be enfranchised unless Congress passed a law authorizing such a change in
their legal standing.
24 112 U.S. 94.
Congressional reaction to Elk was swift, though ill focused. Some reform
groups argued that the solution to the Indian “problem” was unfettered and
immediate citizenship. Others declared that U.S. citizenship, a valid goal,
should be a gradual process tied to individualized property ownership. The
two camps compromised (at Indian expense) by embracing the allotment
of much of Indian Country.
The General Allotment [Dawes] Act,25 passed three years after the Supreme
Court’s Elk decision, intensified Congress’s cultural and proprietary assault
on indigenous peoples. Most observers suggest that this act – actually a
detailed policy directive – and the multiple amendments and individual
allotting agreements passed in its wake over the next two decades, constitute
the single most devastating federal assault on indigenous nations.
Most white philanthropists, and those federal lawmakers concerned to maintain
the nation’s position as a liberal and democratic polity, agreed that
tribal social structures founded on common stewardship of land were the
major obstacle to Indians’ “progress” toward civilization. These “Friends”
firmly believed in the need to break up the reservations, distribute small
individual plots of land to tribal members, and then require the allotted
Indian to adapt to Euro-American farming life. The allotments themselves
were to be held in trust. For twenty-five years they could not be
sold without express permission of the secretary of the interior. This was
deemed a sufficient period for the individual Indian to learn the arts of a
civilized yeoman farmer. U.S. citizenship accompanied receipt of the allotment.
Tribal land not allotted to members was declared “surplus.” This
“extra” land was sold to non-Indians, whose settlement among the Indians,
it was believed, would expedite their acquisition of white attitudes and
behavior.
Tribal lands, already dramatically depleted through land cession treaties
and agreements, were further reduced by the allotment policy and the subsequent
individual allotting agreements. The allotment policy was, in the
words of President Theodore Roosevelt, “a mighty pulverizing engine to
break up the tribal mass.” By 1934 when it was finally stopped, 118 of 213
reservations had been allotted, resulting in the loss of another ninety million
acres of tribal land. What then ensued was in many ways even worse –
removal of allotments from trust-protected status by forced fee patent, sale
by both Indian landowners and the United States, probate proceedings
under state inheritance laws, foreclosure, and surplus sale of tribal lands.
Fundamentally, the entire allotment and post-allotment program had disastrous
economic and cultural consequences for native peoples, which are
still felt by both allotted tribes and individual Indians today.
25 24 Stat. 388 (1887).
VI. THE UNIQUE LEGAL STATUS OF THE PUEBLOS
AND THE FIVE CIVILIZED TRIBES
Tribal nations are uniquely constituted social, political, and cultural entities.
As we have seen, the consistent failure to recognize that reality has
meant that federal efforts to develop a coherent and consistent body of legal
principles to deal with the tribes were never very successful. But there were
exceptions. Not all tribes were brought under the federal umbrella or were
viewed the same way by the federal government. Two groupings of aboriginal
peoples that were considered “exceptional” and became the focus of a great
deal of Western law thus merit specific discussion: the Pueblo Nations
of present-day New Mexico (actually twenty-two distinctive indigenous
communities) and the so-called Five Civilized Tribes26 – the Cherokee,
Chickasaw, Choctaw, Creek, and Seminole.
The Pueblos
The Pueblos are distinctive in part because of their particular culture and
language and because of their long historical relationship with the Spanish
and, later, the Mexican governments. Written agreements with Spain in
the form of land grants, later acknowledged by the Mexican government,
unquestionably affirmed Pueblo ownership, not just occupancy, of
their lands. Pueblo land grants were both encompassed and recognized by
the United States under the provisions of the 1848 Treaty of Guadalupe
Hidalgo. One of the Hidalgo Treaty’s provisions specified that Mexican
citizens might choose either Mexican or U.S. citizenship. The Pueblo Indians,
by choosing to remain in their homeland, were said by some federal
commentators to have implicitly accepted U.S. citizenship. This federal
citizenship status was first affirmed by the Supreme Court in United States v.
Ritchie.27
Pueblo connections to previous Spanish and Mexican authorities, their
apparently enfranchised status, and their generally peaceful demeanor
toward American settlers and the federal government raised the question
whether the Pueblos were to be considered “Indian tribes” within the meaning
of existing federal statutes, such as the 1834 Trade and Intercourse Act,
which were designed to protect tribal lands from white encroachment.
Because of the Pueblos’ ambiguous legal status and less confrontational
26 The designation “civilized” became attached to the Five Tribes after their forced removal to present-day
Oklahoma. Once they resettled, the members of these nations made tremendous
social and political changes within their societies and were soon labeled “civilized” to
distinguish them from the so-called wild tribes of the Western plains area.
27 58 U.S. (17 How.) 525 (1854).
comportment, increasing numbers of Mexican-American and Anglo-
American settlers became squatters on Pueblo land grants. The Pueblos
resented these intrusions and, with the support of their Indian agents and the
federal government as their trustee, sought to have the trespassers evicted.
The matter came before the Supreme Court in United States v. Joseph (1877),28
in which the Court was asked to decide whether the Taos Pueblo constituted
an Indian “tribe” under the meaning of the 1834 Intercourse Act. If
they were, federal officials could expel the interlopers. If they were not, the
government had no such authority, leaving the Pueblos to deal with the
squatters as best they could by themselves.
The court found that the Pueblos were far more “peaceful, industrious,
intelligent, honest, and virtuous” than the neighboring “nomadic” and
“wild” Navajo and Apache Tribes. Therefore, they could not be classed with
the Indian tribes for whom the intercourse acts had been passed. Being far
too “civilized” to need federal guardianship, the Pueblos could decide for
themselves who could live on their lands. The justices opted not to address
definitively the issue of whether or not Pueblo individuals were American
citizens, but they did acknowledge that the Pueblos’ Spanish land grants
gave them a title to their lands that was superior even to that of the United
States.
In 1913, a year after New Mexico gained statehood, Pueblo status was dramatically
reconfigured by the Supreme Court in United States v. Sandoval.29
So long as New Mexico had only territorial status, the Pueblos had been
of peripheral concern to the federal government. With statehood, the subject
of intergovernmental relations and Pueblo status required clarification.
Congress had provided in New Mexico’s Enabling Act that the terms
“Indian” and “Indian Country” were to include the Pueblos and their lands.
These provisions were incorporated in the state’s constitution as well.
Although a sizeable body of statutory and judicial law had held that
the Pueblo were not to be federally recognized as Indians for purposes of
Indian-related legislation, by 1913 the number of whites inhabiting Pueblo
territory had increased strikingly, and federal policy was now focused on the
coercive assimilation of all Indians. A general guardian/ward relationship
had become the guiding policy assumption of many federal officials: all
tribal people were viewed as utterly dependent groups in need of constant
federal tutelage to protect them from unscrupulous whites and from their
own vices.
In Sandoval, the Supreme Court found that the civilized, sober, and industrious
Pueblo culture of its 1877 decision had somehow become “primitive”
and “inferior” and utterly dependent on the U. S. government. Relying on
28 94 U.S. 614. 29 231 U.S. 28.
a legal paradigm steeped in paternalism and deferring to Congressional
actions designed to protect the Pueblos from whites selling liquor, the
Court went to extraordinary lengths to show that, although the Pueblo
people remained “industrially superior” to other tribes, they were still “easy
victims to the evils and debasing influence of intoxicants because of their
Indian lineage, isolated and communal life, primitive customs and limited
civilization.” The Supreme Court proceeded to reconfigure Pueblo legal
status, holding that their alleged cultural debasement necessitated federal
trust protection of their lands from unscrupulous liquor traders.
The Five Civilized Tribes
As important as the Pueblo were in the development of Federal Indian
law, the Five Civilized Tribes were even more significant. Each of the Five
Tribes and members of those nations had figured prominently in the federal
government’s development of legal principles that enervated and devastated
tribal sovereignty. The Cherokee Nation had been at the forefront of legal
activity virtually from the outset – from the pivotal Marshall cases in the
1820s and 1830s to United States v. Wildcat in 191730 – but between 1870
and 1920, individual tribal members, particular tribes and combinations
of the various five tribes were involved in far more federal cases than any
other indigenous nation.
Because they were perceived as more “civilized” than all other tribes
except the Pueblo, and because they had insisted on fee-simple title to their
lands in Indian Territory through the treaties they had signed under the
provisions of the 1830 Indian Removal Act, the Five Civilized Tribes were
originally exempted from the Major Crimes Act of 1885 and the General
Allotment Act of 1887. But although the exemptions were treaty-endorsed
and extra-constitutional they would not last indefinitely: a multitude of
interests – territorial and state governments, individual settlers and land
speculators, federal policymakers, railroad companies, and others – were all
clamoring for access to the Five Tribes’ lands and resources and for control
over the rights of the Tribes and their citizens.
From the late 1880s to the early 1900s, when the combined force of
these interests finally brought about the legal dismemberment of the governments
of the Five Tribes and the allotment and subsequent dispossession
30 The Cherokee Nation or members of that tribe were involved in several important cases
between these dates: The Cherokee Tobacco, 78 U.S. 616 (1871); Cherokee Nation v. Southern
Kansas Railway Co., 135 U.S. 641 (1890); Talton v. Mayes, 163 U.S. 376 (1896); Stephens v.
Cherokee Nation, 174 U.S. 445 (1899); Cherokee Intermarriage Cases, 203 U.S. 76 (1906);
Muskrat v. United States, 219 U.S. 346 (1911); and Choate v. Trapp, 224 U.S. 665 (1912).
table 1. Major tribal entities represented in federal court cases
involving tribal sovereignty and plenary power (1870–1920)
Tribes represented                    Number of times tribes appear in cases
A. 5 Civilized Tribes
Civilized Tribes (Collectively)* 6
Cherokee† 9
Cherokee & one other tribe 10
Creeks 6
Creeks & one other tribe 6
Chickasaw 2
Chickasaw & one other tribe 12
Choctaw 2
Choctaw & one other tribe 8
Seminole 2
Seminole & one other tribe 6
Total Five Tribes: 69
B. All Other Tribes
Sioux (all bands) 11
Chippewa (all bands) 8
Osage 4
Shawnee 3
Yakima 3
Others 9
Total Other Tribes: 38
Total All Tribes: 107
* Collectively means that all Five Tribes were directly involved.
† In the majority of these cases an individual member of a tribe
is a party, rather than a tribe.
of much of their land, American law was deployed in ways that generally
diminished but occasionally supported the nations’ sovereignties. In
Cherokee Nation v. Southern Kansas Railway Company (1890), for example, the
Cherokee national government challenged an 1884 congressional law that
had granted the Southern Kansas Railway a right-of-way through Cherokee
territory. Drawing on its federalism and paternalism paradigms, the
Supreme Court held that the Cherokee could not prevent the federal government
from exercising its powers of eminent domain to take Indian lands.
Justice John Harlan, relying on the wardship and dependency phrases established
in previous Court rulings, held that their “peculiar” and “inferior”
status deprived them of enforceable rights to their property. Even the fact
that the Cherokee Nation held fee-simple title was “of no consequence”
to the Court because the Cherokee were “wards of the United States, and
directly subject to its political control.”31
Although willing to dismiss the proprietary sovereignty of the Cherokee
and to accommodate relentless Euro-American pressures for assimilation
of Indians, when it came to certain practical effects of the twin forces
of Westward expansion and federal plenary power, the Court was willing
to concede that the member nations of the Five Civilized Tribes might
continue to exercise some degree of internal autonomy – internal criminal
jurisdiction over their own members. In 1896, for example, on the same day
the Supreme Court decided Plessy v. Ferguson,32 establishing the “separate
but equal” doctrine that sanctioned state Jim Crow laws, the court held in
Talton v. Mayes33 that the U.S. Constitution’s Fifth Amendment did not
apply to the Cherokee Nation because their sovereignty existed prior to
the Constitution and was dependent on the will of the Cherokee people,
not of the American public. Harlan was the lone dissenter (as he was in
Plessy). Decisions like Talton were residues of the old treaty paradigm that
affirmed tribal nations’ political sovereignty, a status no other racial or
ethnic minority group in the United States had ever possessed.
But Talton was an aberration, and it was countered by much more powerful
forces aimed at the inevitable absorption of indigenous lands, resources,
identities, and rights into the American body politic. Here the guiding
principle was federalism: whether the states or the national government
would be the dominant entity authorized to ignore or curtail Indian treaty
rights or sovereign authority. Take, for example, yet another 1896 case,
Ward v. Race Horse,34 examining the state of Wyoming’s claim to enact
and enforce fish and wildlife laws curtailing the treaty-reserved hunting
rights of the Shoshone-Bannock of the Fort Hall Indian Reservation. In a
major states’ rights decision, the Court ruled that Wyoming’s game regulations
superseded the Shoshone-Bannocks’ 1868 treaty rights. Indian treaty
rights were “privileges” that could be withdrawn or overridden by federal
or state law. Specifically, Article Four of the 1868 treaty had been abrogated
(implicitly) by Congress because it conflicted with Wyoming’s Admission
Act. If Talton appeared to acknowledge the old treaty paradigm, Race Horse
dealt it a paralyzing blow, not only vacating earlier case law but also elevating
state authority over tribes’ vested rights and indeed over the federal
government’s vaunted guardianship of the Indians.
Having recast the juridical foundation for tribal-state relations and taking
its cue from the coercive and absolutist tone of Congressional lawmakers,
the Supreme Court moved to establish, clearly and unambiguously, the
new reality of tribal-federal relations. The vehicle was Lone Wolf v. Hitchcock
31 135 U.S. 641, 657. 32 163 U.S. 537.
33 163 U.S. 376. 34 163 U.S. 504.
(1903), a suit brought by the Kiowa, Comanche, and Apache nations against
the secretary of interior in an effort to avoid having their lands allotted
without consent.35 The tribes contended that allotment of their lands,
as provided in legislation adopted by Congress in 1900, violated Article
Twelve of the 1867 Treaty of Medicine Lodge. For the Court, Justice Edward
D. White stated that the 1867 treaty provision had been abrogated by the
1900 statute, even though he acknowledged that a purported agreement to
modify the treaty provision on which the statute was loosely based lacked
proper tribal consent.
Lone Wolf, often called the Court’s second Dred Scott decision, was a near-perfect
synthesis of the Court’s “plenary power” and “political question”
doctrines. White inaccurately stated that Congress had exercised plenary
authority over tribes “from the beginning” and that such power was “political”
and therefore not subject to judicial review. These statements were
merely judicial rationalizations, but they were in line with the reigning
policy toward Indians embraced by the federal government: Indians were
dependent wards subject to a sovereign guardian – the federal government.
White attempted to camouflage the blow by describing the government’s
actions as those of a “Christian people” confronted with the travails “of an
ignorant and dependent race.” Congress, he said, must be presumed to act
“in perfect good faith” toward the Indians. But Lone Wolf was a devastating
assault on tribal sovereignty. The Court’s refusal even to examine
Congressional acts that abrogated property rights established by treaty was
particularly oppressive. Lone Wolf meant that treaties simply had no effect
whenever Congress decided to violate their provisions. Yet, the hundreds
of ratified treaties and agreements negotiated with the United States, not
the Federal Constitution, constituted the foundation for most indigenous
rights.
In the company of so much else that had been transpiring in American
law and policy, Lone Wolf confirmed a bitter reality: sporadically, Congress
or the Court might acknowledge that their only legitimate powers vis-à-vis
tribal nations were those expressly outlined in the Constitution or
agreed on with indigenous peoples. But in practice no branch of the federal
government recognized any real limit to its powers.
VII. PROGRESSIVISM, CITIZENSHIP, & INDIAN RIGHTS:
1904–1920
During the Progressive era, federal Indian policy, in particular those aspects
overseen by the Office of the Commissioner of Indian Affairs, was increasingly
managed by men who viewed themselves as dedicated guardians of
35 187 U.S. 553.
Indian peoples and their ever-decreasing property base. These individuals,
however, were confronted with contradictory federal goals: adamant
commitment to the full-tilt assimilation of Indians and their remaining
resources predicated on the idea that indigenous peoples should be free of
governmental supervision; and an equally adamant commitment to the
maintenance of hegemonic guardian/ward relations with Indians, with
attendant micromanagement of indigenous lands and resources, leaving
Indians and tribal governments in an internally colonial relationship. That
said, federal Indian policymakers were somewhat influenced by the Progressive
ideals of social activism, elimination of economic and political corruption,
and greater citizen involvement in governance, and consequently they
offered qualified support for policies that recognized a degree of Indian selfrule
and, as important, granted grudging respect for indigenous culture.
Support for Indian education, in particular, enabled students to remain at
home instead of being sent to distant boarding schools.
The first two decades of the twentieth century also saw sporadic outbursts
of federal judicial and indigenous activism that, occasionally, resulted in
protection for Indian treaty and property rights of both individual Indians
and national tribes. These victories were achieved even though the paternalistic
policy of assimilation remained entrenched. Still, the combination of
staunch tribal resistance, federal officials willing to support a greater degree
of tribal self-rule, and Indian students who had returned from boarding
schools with ideas on how to improve their tribes’ standing in relation to
the federal government and American society formed the basic building
blocks for developments in the 1930s and beyond that would enable native
nations to recover some of their proprietary, cultural, and political rights.
During the Progressive period, the dominant federal themes of allotment,
assimilation, guardian/ward status, and citizenship were supplemented by
other ad hoc developments – affirmation of tribal sovereignty, protection of
Indian treaty rights, and recognition of federal exclusive authority in Indian
affairs. In 1904, for instance, the Supreme Court ruled in Morris v. Hitchcock36
that the Chickasaw Nation, one of the Five Civilized Tribes, could
lawfully extend its taxing authority over whites who resided on its lands.
A year later, the Court handed down two very different but related opinions
on Indian rights. First, in In re Heff (1905),37 it held that Indian allottees
became American citizens once their allotments had been approved. Therefore,
federal laws that prohibited the sale of liquor to Indians were invalid –
allotted Indians could buy and sell liquor as freely as any other American.
The Commissioner of Indian Affairs admitted that the decision was “eminently
logical,” given prevailing federal Indian policy; he nonetheless
36 194 U.S. 384. 37 197 U.S. 488.
warned that it “places the ignorant, incapable, and helpless Indian citizens
at the mercy of one class of evil doers.”38
Congress reacted to Heff by passing the Burke Act, circumventing the
Heff principle without entirely overthrowing it by withholding federal citizenship
from allotted Indians for the duration of the twenty-five year trust
period or until allottees secured a patent in fee (a certificate like a deed
vesting legal ownership) to their allotment. The secretary of interior was
granted authority to issue patents in advance of these statutory qualifications
if, in his sole opinion, the Indian allottees were competent and capable of
managing their own affairs. Congress presumably intended secretarial discretion
to be used in a reasonable and not arbitrary fashion. In fact, the act
led to the rapid alienation of much Indian allotted land. As Vine Deloria,
Jr. and Clifford M. Lytle have put it, “Citizenship, thereupon became a function
of the patent-in-fee status of land and not an indication that Indians
were capable of performing their duties as citizens.”
The second major ruling in 1905 was United States v. Winans.39 This was
the first case to arrive at the Supreme Court calling on the judiciary to
interpret a common treaty provision reserving to a number of tribes in the
Northwest their rights to fish at places of historical significance. The Court
ruled (White dissenting) in support of tribal fishing rights reserved through
treaty provisions. For the Court, Justice Joseph McKenna recited one of the
more popular treaty rights doctrines – that treaties should be interpreted
the way Indians would interpret them. A treaty must be construed as “that
unlettered people” would have understood it since it was written in a foreign
language and was drafted by a military power that was superior to that of
tribes. Second, the Court dramatically reaffirmed federal supremacy over
the states in certain areas and weakened the equal footing doctrine, which
held that newly admitted states were on an “equal footing” with the original
states in all respects, especially regarding sovereignty and political standing.
The Court declared that it was within the power of the United States
to preserve for native peoples their remaining rights such as fishing at
their usual places, holding that this was not an unreasonable demand on a
state. Third, and most important, McKenna announced the reserved rights
doctrine, by which tribes retained all rights and resources not specifically
ceded in treaties or agreements.
McKenna’s opinion came a mere two years after the devastating Lone Wolf
ruling in which the Court had clearly deprived tribes of their treaty-reserved
property rights. How should this disparity be explained? A pragmatic reading
of Lone Wolf suggests that the Court was implicitly acknowledging that
38 United States Commissioner of Indian Affairs, Annual Report (Washington, DC, 1906), 28.
39 198 U.S. 371.
many whites had already established homesteads on the tribes’ claimed
lands. Relocating these non-native squatters, although perhaps the proper
legal action, would have created massive political and economic problems
for the state and the squatters, and also for the federal government (notably
the president, who had already authorized settlement). In justifying Congressional
confiscation of tribal reserved treaty lands, the Court had also
baldly added that, once “civilized” and “individualized,” Indians simply
would not need all the land reserved to them.
Winans was much less threatening. It involved no major national political
issues. No white had to be removed nor was the power of the president or
of Congress at stake or even being challenged. At issue was the supremacy
of a federally sanctioned treaty over a state’s attempts to regulate resources
covered by the treaty. First, the Court appeared to understand that fishing
represented far more than a simple commercial enterprise for the Yakima
Nation – in a very real sense it was their entire life. Second, to allow a state
regulatory authority over activities guaranteed by treaty could have gravely
injured the status of treaties as the supreme law of the land, in effect privileging
state sovereign powers over those of the federal government. Winans,
therefore, was a crucial and timely acknowledgment that a tribe’s sovereign
property and cultural rights, recognized and specifically reserved in treaties,
warranted a measure of respect and would occasionally even be enforced by
the federal government. In any case, there was no contradiction between
the decisions in Winans and Lone Wolf. Both decisions underscored national
authority. Lone Wolf reinforced the federal power to decide what was best
for native people; Winans reinforced the supremacy of federal (and treaty)
law over state law. But Winans does offer compelling evidence of a growing
consciousness among some federal justices and policymakers – the continuing
twin federal policy goals of land allotment and Indian individualization
notwithstanding – that tribes were sovereign entities in possession of substantive
property rights that were enforceable against states and private
citizens, if not the federal government.
Three years later, in Winters v. United States (1908),40 the reserved rights
doctrine was extended – implicitly – to include the water rights of native
peoples. Winters considered whether a white Montana landowner could construct
a dam on his property that prevented water from reaching a downstream
Indian reservation. The Supreme Court ruled against the landowner.
First, the reservation in question had been culled from a larger tract of land
that was necessary for “nomadic and uncivilized peoples”; second, it was
both government policy and the “desire of the Indian” that native peoples
be culturally and economically transformed and elevated from a “nomadic”
40 207 U.S. 564.
to an “agrarian” lifestyle; third, transformation could only occur if tribal
lands were severely reduced in size, making them more amenable to agricultural
pursuits (and precluding alternatives); finally, since the lands were
“arid,” they would be agriculturally useless without adequate irrigation.
The Court’s four points were not enunciated as law, but they recognized
the reality of history and indicated the Court’s capacity to generate plausible
arguments to protect precious tribal reserved rights. The court also cited
the Indian treaty rule of interpretation, declaring that any ambiguities in
the document should be resolved in the Indians’ favor. The equal footing
argument on which the landowner had relied was also dismissed, the Court
noting that it would be strange if, within a year of the reservation’s creation,
Congress, in admitting Montana to statehood, would have allowed the
Indians to lose their water rights, particularly since it was the government’s
policy to force Indians to adopt an agrarian lifestyle. In effect, the Court was
saying that, when the United States entered into diplomatic relations with
tribes or when it unilaterally created reservations, it appeared to be guaranteeing
tribes the water necessary to provide for their needs, present and
future.
In both Winans and Winters the federal government acted “on behalf of”
or as the “guardian” of these two tribal nations. This was laudable in one
sense, but also raised the important question of who actually “won” in these
rulings: the federal government or Indian tribes? In addition to litigation
for the protection of vital natural resources – fish and water – most Indian
legislation and litigation of this period, much of it involving amendments
and subsequent interpretations of those amendments to the General Allotment
Act, arose from a powerful determination on the part of the federal
bureaucracy to maintain a vague form of trust protection for Indian property.
Rather than acknowledging and affirming complete Indian ownership
of these still dwindling resources, Indians were treated as mere attachments
to their lands, which meant that the Interior Department’s policies
and programs often conflicted with policies of the Congress aimed at
facilitating Indian self-improvement.41
It should also be noted that Indians remained largely marginalized from
the public policy process in virtually every era and arena thus far examined
except one – claims against the federal government taken before the Court
of Claims that had been established in 1855. Tribes had begun to file suits
shortly after the court was established. In 1863 Congress amended the law
to deny the right to file lawsuits to Indian tribes with treaty claims. It would
not be until 1920 that a number of bills were introduced, at the behest of
41 Vine Deloria, Jr., “The Evolution of Federal Indian Policy Making,” in Vine Deloria, Jr.,
ed., American Indian Policy in the Twentieth Century (Norman, OK, 1985), 248.
individual tribes and friendly congressmen, that allowed some tribes to
sue the government in the Court of Claims for monetary compensation over
lands that had been taken or over treaty violations. However, tribes still had
to secure Congressional authorization before they could file, which required
a significant amount of lobbying and a sympathetic Congressional member
or delegation to advocate on their behalf. Of course, tribes were virtually
without any political or economic power during these years, and they were
largely at the mercy of the Bureau of Indian Affairs’ personnel who had
dominated their lives and property since the 1880s. The Department of
Interior itself frequently tried to prevent the introduction of Indian claims
bills because it feared that the claims process would uncover evidence of
rampant bureau incompetence and malfeasance then plaguing the agency.
Eventually, Congress authorized an estimated 133 filings. In the actions
that resulted, tribes won monetary recoveries in less than one-third of the
cases.
While some individual tribes pursued tribal specific claims against the
United States, broader intertribal and pan-Indian interest groups were also
being formed in the early 1900s to pursue Indian policy reform, Indian
religious freedom, and improvements in indigenous welfare. The Society
of American Indians (SAI) was organized in 1911 at Ohio State University.
SAI’s founding was triggered by the experiences, both positive and negative,
of Indian graduates of the federal government’s boarding schools started in
the 1870s. In its form, leadership, and goals, SAI was similar to contemporary
white reform organizations and to the developing African American
movements of the Progressive era. Its most dynamic leaders, including
Charles A. Eastman and Arthur C. Parker, were largely well-educated
middle-class Indians whose objectives included lobbying for a permanent
court of claims, improving health care, promoting self-help, and fostering
the continued assimilation of Indians while encouraging racial justice.
In Alaska, two gender-based native organizations were also born during
this period – the Alaska Native Brotherhood, founded in 1912, and the
Alaska Native Sisterhood founded in 1915. These were the first significant
political and social intertribal organizations in Alaska before statehood. In
their early years, the two organizations focused primarily on self-help and
full American citizenship rights for natives. They also sought protection of
their natural resources.
Two other indigenous movements affirmed the upsurge in Indian
activism. First, peyote religion grew phenomenally from the late 1800s
to the early 1900s. A truly intertribal faith, the peyote religion helped its practitioners
improve their health and combat the ravages of alcohol. The Native
American Church of Oklahoma was formally incorporated in 1918. Second,
the Pueblo peoples of New Mexico continued their individual and collective
struggle to protect their remaining lands from non-Indian squatters. The
adverse effects of the Sandoval decision of 1913 had spurred their collective
political mobilization, culminating in 1919 in formation of the All Indian
Pueblo Council.
Citizenship
The issue of American citizenship for Indians bedeviled federal lawmakers
during the Progressive years. As we have seen, the Supreme Court’s ruling
in Heff that Indian allottees automatically became U.S. citizens and thus
were no longer subject to federal plenary authority had been legislatively
thwarted by Congress with the enactment of the Burke Act in 1906. Subsequent
Supreme Court rulings narrowed Heff, and it was finally expressly
overruled, in dramatic fashion, in United States v. Nice (1916).
Nice affirmed what many federal lawmakers had been advocating for
some years, namely that Indian citizenship was perfectly compatible with
continued Indian wardship. According to Justice Willis Van Devanter,
Congressional power to regulate or prohibit liquor traffic with Indians
derived both from the Commerce Clause and from extra-constitutional
sources, namely the dependency relationship that existed between Indians
and the United States. It rested with Congress, said Van Devanter, to determine
when or whether its guardianship of Indians should be terminated.
“Citizenship,” Van Devanter declared, “is not incompatible with tribal
existence or continued guardianship, and so may be conferred without
completely emancipating the Indians or placing them beyond the reach
of congressional regulations adapted for their protection.”42
Nice was decided three years before American Indian veterans of World
War I were given the opportunity to attain U.S. citizenship in 1919, and
eight years before Congress enacted the general Indian citizenship law of
1924, which unilaterally declared all remaining non-citizen Indians to be
American citizens. Both the veterans’ law and the general citizenship law
provided that the extension of citizenship would not affect preexisting
treaty-based Indian property rights. It became evident, however, that
Indians were not full citizens, notwithstanding Congressional declarations
to that effect. The citizenship they had secured, whether under prior treaty
or later Congressional statute, was attenuated and partial. The provisions of
both the 1919 and 1924 laws guaranteeing prior property rights of Indians
as citizens of their own nations proved insufficient to protect the cultural,
political, civil, and sovereign rights of individual tribal citizens. And since
tribes, qua tribes, were not enfranchised, they remained beyond the pale
42 241 U.S. 591, 598.
of constitutional protection from the federal government. Paternalistic in
tone and substance, Nice had mandated perpetual federal guardianship over
citizen Indians, still considered incapable of drinking liquor without federal
supervision and approval.
Nice was and remains a legal travesty. Indians were consigned to a continuing
legal and political limbo: they were federal and state citizens whose
rights were circumscribed by their status as “wards.” For tribal members
to receive any non-tribal rights or privileges as citizens, they often had to
exhibit an intent to abandon tribal identity. At that point they might –
though not necessarily – be considered worthy or “competent” to receive
American political rights and privileges. The question of what precisely
Indians gained with American citizenship and of whether the United States
even had constitutional authority to declare Indians to be citizens unilaterally
without their express consent remains problematic.
Meanwhile, Congress, ever the insistent guardian, acted in 1921 to formalize
and provide a comprehensive funding mechanism for Indians, their
seemingly perpetual wards. Prior to 1921, Congress and the Bureau of
Indian Affairs had expended monies for Indians largely on the basis of treaty
provisions or of specific statutes that addressed particular tribal needs. The
Snyder Act,43 however, granted general authority to the BIA under the
Interior Department’s supervision to spend Congressionally appropriated
money “for the benefit, care, and assistance of the Indians throughout the
United States.” This money was to be used for a wide variety of purposes –
health and education, resource projects such as irrigation, and so forth. This
was the first generic appropriation measure designed to meet the tremendous
socioeconomic needs of Indians wherever they resided.
The Indian Reorganization Act
Congress in the 1920s was unwilling to concede that its broad, variegated
assimilation campaign was a failure, even though continual tribal complaints
and white interest group criticism of federal Indian policies seemed
to show otherwise. But events had already been set in motion that would
culminate in a wholesale reordering of priorities, under the 1934 Indian
Reorganization Act (IRA).44 The IRA expressed Congress’s explicit rejection
of the allotment policy and the harsh coercive assimilation tactics that
the BIA had used since the 1880s. The legislation was drafted by Felix
Cohen under the supervision of John Collier, who had spent considerable
time in New Mexico fighting for Pueblo land and water rights and who
would later become Commissioner of Indian Affairs. The IRA had several
43 42 Stat. 208. 44 48 Stat. 984 (1934).
objectives – to stop the loss of tribal and individual Indian lands, provide
for the acquisition of new lands for tribes and landless Indians, authorize
tribes to organize and adopt a constitutional form of government and form
corporations for business purposes, and establish a system of financial credit
for tribal governments and individual business entrepreneurs – but was
also beset by severe weaknesses. It did little to clarify the inherent political
status of tribes. It failed to devise any real constraints on federal power, particularly
administrative power, vis-à-vis tribal nations and their citizens.
A critical, if uneven, attempt by the federal government to rectify some of
the damage caused by the more horrific policies and laws it had imposed on
native nations for nearly half a century, the IRA produced mixed results,
which continue to affect tribal nations today.
Most dramatically, the IRA was effective in putting a halt to the rapid
loss of indigenous land. It reminded all parties that tribal peoples were
substantially different from other minority groups because they continued
as cultural and political nations with inherent powers of self-governance
and distinctive cultural and religious identities. But the IRA’s avowed goal
of energizing native self-rule was not fully realized. Some tribal nations
took the opportunity to create tribal constitutions and establish bylaws.
However, their documents had of necessity to include clauses that reminded
tribal leaders of the discretionary authority of the secretary of the interior
to dictate policy to tribes and overrule tribal decisions. Tribes that resisted
efforts to institutionalize their governing structures along the constitutional
lines suggested by federal officials were sometimes pressured to acquiesce
by Collier and his associates.
Beyond the IRA
The conclusion of World War II and John Collier’s resignation as Commissioner
of Indian Affairs in 1945 signaled the beginning of another profound
shift in federal Indian policy and law – from tribal self-rule to termination
of the federal government’s trust responsibilities toward a number of tribes.
Termination was officially inaugurated as policy in 1953 by a joint resolution
of Congress. Ironically, liberals supported termination as a means to
free tribal peoples from racially discriminatory legislation and BIA regulations,
whereas conservatives viewed it as a means to relieve Indians from
the “retarding” effects of the IRA’s policies, which were interpreted as
hampering Indian rights as American citizens. American Indians had their
own views on termination. Some tribes believed that it would mean full
emancipation and would create opportunities for them to thrive politically
and economically; others suspected that termination was a maneuver by
which the United States would “legally” disavow its historic treaty and
trust obligations, clearly violating the inherent rights of tribes and the
federal government’s commitment to the rule of law.
Termination was accompanied by a relocation program that sent thousands
of reservation Indians to major urban areas. Congress also enacted
Public Law 280, which authorized several states to extend criminal and
some civil jurisdiction over Indians and Indian Country. All proved controversial.
By the 1960s, grievances arising from termination, relocation,
and the extension of state jurisdiction had combined with the influence
of the broader civil rights movement and the environmental movement
to fuel a surge in activism both in urban areas and on reservations. The
resurgence of native pride, indigenous activism, the appearance of a generation
of legally trained Indians, and shifts in personnel on the Supreme
Court and in Congress brought a series of important political, legal, and
cultural victories in native nations’ struggle to regain a genuine measure of
self-determination.
Much of the 1960s indigenous revival arose out of events like the fishing
rights battles of the Northwest tribes, the ideas generated by Indian
leaders at the American Indian Chicago Conference in 1961, the birth and
subsequent rapid expansion of the American Indian Movement in 1968,
the Alcatraz takeover in 1969, the Trail of Broken Treaties in 1972, and the Wounded Knee occupation of 1973. Congress
responded to these developments by enacting the Indian Self-Determination
and Education Assistance Act in 1975, among other laws. But these native
victories engendered a vicious backlash among disaffected non-Indians and
some state and federal lawmakers that led to Congressional and judicial
attacks aimed at further abrogating treaties, reducing financial support for
tribal programs, and other punitive responses. The Supreme Court also
began to issue rulings that negated or significantly diminished tribal sovereignty,
notably Oliphant v. Suquamish (1978), which directly limited the
law enforcement powers of tribes over non-Indians committing crimes on
Indian lands.
Since Oliphant, tribes have witnessed a parade of federal actions that at
times have supported tribal sovereignty (the Indian Self-Governance Act
of 1994) and at other times significantly reduced tribal powers, especially
in relation to state governments (the Indian Gaming Regulatory Act of
1988). More distressing for tribes was the Rehnquist Court’s fairly consistent
opposition to inherent authority, which has been continued by the
Roberts Court. Tribal governments have had their jurisdictional authority
over portions of their lands and over non-Indians effectively truncated,
and the federal trust doctrine has been defined very narrowly in a way
that reduces U.S. financial obligations to tribal nations and their citizens.
In addition, the Supreme Court’s rulings have elevated state governments
to a nearly plenary position in relation to tribal governments and the U.S.
Congress itself, without dislodging or reducing the long entrenched federal
plenary power over tribes and their resources.
Nevertheless, these have been dynamic times during which many native
nations have made great strides in several arenas: cultural and language
revitalization, land consolidation, and the development of more appropriate
legal codes are notable examples. Gaming revenues have given tribes a small
but seemingly secure foothold in the nation’s political economy. Tribes have
also proven willing, through increased electoral participation, to engage in
state and federal political processes in an effort to protect their niche in the
market.
CONCLUSION
Two centuries of contact between indigenous nations and the United States
have caused profound and irrevocable changes in the proprietary, sovereign,
cultural, and legal rights of tribal nations, just as they have meant massive changes in the laws, policies, and attitudes of Euro-Americans. Sustained
cultural interactions between Europeans and indigenous peoples in
North America began, we have seen, with a measure of cooperative military
and economic respect. But both Europeans and later Euro-Americans generally
acted from a perspective that denied the full proprietary rights and
cultural sovereignty of tribal nations. So, despite the existence of dual legal
traditions at the outset, and a diplomatic record that formally acknowledged
the separate legal and political traditions of native nations, Euro-Americans
soon began to act in ways that generally offered little respect for the customs
and legal traditions of Indian peoples.
Euro-American legal traditions attained dominance over indigenous peoples
in North America largely as a result of cultural ethnocentrism and
racism. More instrumentally, Euro-American law facilitated U.S. westward
expansion and settlement, as well as industrial development. The virtual
exclusion of indigenous perspectives or customary legal traditions from U.S.
legal culture after 1800 enabled American legal practitioners and policymakers
to attain a hegemonic status vis-à-vis tribal nations. Nevertheless,
federal lawmakers and Supreme Court justices have occasionally acted to
recognize indigenous rights and resources, as evidenced in land claims,
sacred site access, and co-management of certain vital natural resources.
The U.S. Supreme Court has tended to use one or some combination of
three paradigms when called on to adjudicate disputes involving Indians:
treaties, paternalism, and federalism. Not only the issues and tribes involved
but also the diplomatic record, the relationship between the state and the
federal governments, and ideologies of governance vis-à-vis native peoples
have interacted to determine at each moment how the Court would decide
any given case.
The relationship between tribes and the U.S. federal government continues
to be without clear resolution. Further, because interracial and intercultural
disputes are nearly always resolved in federal courts where legal
principles like plenary power, the discovery doctrine, and the trust doctrine
still lurk in the cases and precedents, tribes can never be assured that
they will receive an impartial hearing. The United States has sometimes
recognized and supported tribal sovereignty; at other times, it has acted
to deny, diminish, or even terminate their sovereign status. Such indeterminacy
accords imaginative tribal leaders and non-Indian leaders a degree
of political and legal flexibility. Involved parties may successfully navigate
otherwise difficult political terrain by choosing appropriate indigenous
statuses that can benefit their nations and citizens. But it also deprives aboriginal
peoples, collectively, nationally, and individually, of clear and consistent
standing regarding the powers and rights they can exercise. Hostilities
may have decreased, but cultural, philosophical, and political-legal tensions
still cloud the relationship between tribal nations and the federal and state
governments.
8
marriage and domestic relations
norma basch
On the eve of the American Revolution, domestic relations law, as it would
come to be called in the nineteenth century, encompassed a whole constellation
of relationships between the male head of the household and the
subordinates under his control. These included his wife, children, servants,
apprentices, bound laborers, and chattel slaves, designated by William
Blackstone as those in lifetime servitude. Although Blackstone did not
create this conception of household relations, he incorporated it into his
Commentaries on the Laws of England, the era’s most influential legal primer,
where it appeared under the rubric of the law of persons. Based as it was
on a belief in the fundamental inequality of the parties and the subordinate
party’s concomitant dependency, the law of persons lay at the heart of subsequent
challenges to domestic relations law in general and to marriage law
in particular. By categorizing the law of husband-wife as analogous to other
hierarchical relationships, it generated parallels that would become sites of
contestation. According to the law of persons, both marriage and servitude
were “domestic relations,” and both mandated a regime of domination and
protection to be administered by the male head of the household.
The law of persons cut a broad but increasingly anachronistic swath in
the legal culture of the new republic and in the economic transition from
household production to industrial capitalism. As a result, one change in
domestic relations law over the course of the nineteenth century involved
the gradual narrowing of the relations under its aegis. Whereas “family” had
once comprehended the extended household profiled in the law of persons,
by the time Anglo-Americans were reading the first editions of Blackstone,
it tended to refer to a small kin group living under the same roof. Blackstone
was in this instance already dated. The decline of apprenticeships,
the increase in independent wage-earners, and the separation of home and
work generated further changes. Although employers owned their employees’
labor, their legal relationship to free laborers gradually slipped from
the category of domestic relations. Slavery, of course, was eradicated as a
legal category with the Civil War and Reconstruction. Yet, elements from
the old paradigm of the extended hierarchical household continued to exert
discursive power. The industrial employer drew on the preindustrial master’s
claim to his servant’s personal services to buttress his own claim to
authority over independent wage-workers. The correspondences between
wifehood and servitude also remained popular. They were deployed not
only by slaveholders eager to extol their benevolent dominion over their
extended “families” but also by women’s rights advocates intent on decrying
the wife’s degrading bondage. Still, in the long passage to legal modernity,
domestic relations focused increasingly on marriage and parenting.
The other critical shift in domestic relations law over the course of the
century consisted of inroads into the male-dominated corporatism of marriage.
By the end of the century both wives and children enjoyed a greater
measure of legal individuality, children came more frequently under the
protection of their mothers or the state, and divorce was on the rise. At
the same time, a belief in the sanctity of lifelong monogamy and in the
husband’s natural authority received renewed legal and rhetorical support
while the drive to restrict birth control and abortion generated novel curbs
on reproductive freedom.
The end-of-the-century picture of domestic relations law, then, is ambiguous.
Although the principle of male headship was clearly compromised
by the challenges of prior decades, it continued to resonate in the treatises,
legislatures, and courtrooms of the nation. As the byproduct of diverse concerns,
temporary coalitions, and economic exigencies, the changes in domestic
relations law did not so much dismantle the principle of male headship
as modify it, often in favor of the state. The changes, moreover, were complicated
by jurisdictional diversity and doctrinal inconsistency. Thanks to
federalism, the states controlled family governance, and in accord with
Franco-Spanish legal models as well as the dominant English model, they
created marital regimes that could differ dramatically from place to place.
Given the ambiguity, diversity, and inconsistency of state marital regimes,
any effort to chart change nationally, much less assess its relation
to the gender system, is fraught with problems. The web of affection and
reciprocity that defined the marriage bond for most Americans did not
encourage a hard calculus of gendered power. But although husbands and
wives did not typically regard themselves as winners or losers in these deeply
gendered legal regimes, it is entirely appropriate for us to sift and weigh the
gendered distribution of marital power. The legal institution of marriage,
after all, was one of the preeminent arbiters of gender roles, and it was
reshaped by the same great political, economic, and social convulsions as
other areas of law. Yet while revolution, industrialization, and the burgeoning
marketplace left their impress on the contract of marriage as surely as
on commercial contracts, the strict demarcation of marriage from other
contracts made for very different results. In a century that elevated the
concept of contract to unprecedented heights, marriage was a contract, as
jurists were fond of pointing out, unlike any other. The emblem of
harmony and stability in a shifting, competitive world, marriage was the
irrevocable contract that made all other contracts possible.
The separation of the marriage contract from other kinds of contracts
was critical to the legal formation of marriage as an institution. It not only
enabled the state to dictate the terms of marriage to potential spouses as
opposed to having them set their own terms, but it relegated marriage
to a realm that was rhetorically distinct from the world of commerce and
politics. The separation of the marriage contract, however, was never complete.
Feminist critics of contractualism have argued that because the marriage
contract silently underpinned the social contract, that mythical agreement
marking the founding of modern civil society, it at once concealed
and provided for the subordination of women to the political fraternity
of men. Thus the classic story of the social contract, which is a story of
freedom, repressed the story of the marriage contract, which is a story of
subjection.
How, though, could a liberal democracy with its ethos of self-ownership
and contractualism and its rejection of monarchy and arbitrary power continue
to invest authority in the independent white, male head of the household
at the expense of the persons deemed subordinate to him, including
the person of his wife? In the long run, it could not. In the shorter run –
over the course of the nineteenth century – it could do so, but only with
strenuous cultural work and considerable legal innovation that masked and
checked the challenges liberalism presented to the patriarchal family. The
story of domestic relations law, in short, is one of the evolving tensions
between male headship with its protections and constraints on the one
hand and liberal individualism with its hazards and privileges on the other.
We begin with the tensions unleashed by revolution.
I. MALE HEADSHIP, FEMALE DEPENDENCE,
AND THE NEW NATION
In 1801 James Martin, the son of loyalist parents, sued the state of Massachusetts
for the return of his deceased mother’s confiscated property.
During the Revolution his mother, Anna Gordon Martin, and his father,
William Martin, had fled Boston for the British-held New York City, and
with the defeat of the British in 1783, they moved their household to
England. Their loyalty to the Crown, however, came at a price. Anna, the
daughter of a wealthy Massachusetts merchant and landowner, had inherited
real estate that was sold at auction under the provisions of the state’s wartime
confiscation statute. Because William Martin had served as an officer with
the British forces, neither his allegiance to the Crown nor his defiance of the
patriot cause was ever in doubt; he was listed by the state among persons
who had joined the enemy. But the confiscated property had belonged to
Anna, not William, and her defiance of the new political order was not as
clear.
At issue in the case launched by the son was the time-honored principle
of male headship and female subordination incorporated into the Anglo-
American law of husband and wife. The case represented a pivotal moment
in the post-Revolutionary redefinition of marriage. Was the principle of
male headship, which comported with both the precepts of Christianity and
the pre-Revolutionary gender system, altered in any way by revolution and
war? Did Anna flee the country withWilliam as a result of a wife’s marital
obligation to subject herself to her husband, or did she act as an independent
sympathizer of the loyalist cause? Deeming the confiscation of the wife’s
property an improper and overly broad reading of the state’s wartime statute,
James Martin’s attorneys supported the first scenario, which assumed the
husband’s coercive power over his wife. In the traditional view of marriage
on which the core of their case rested, a wife’s primary allegiance was to her
husband, who mediated any relationship she may have had to the state and
to the world at large. Anna, in this view, wa,s without volition regarding
her political options. If William had commanded her to leave, she had no
choice but to obey him.
Attorneys for the state of Massachusetts working to validate the confiscation
and sale of Anna’s property supported an alternative scenario that
assumed her independence in choosing to leave, thereby investing her with
a direct relationship to the state. Their argument suggests the radical possibilities
of applying anti-patriarchal ideology to the law of husband and
wife. In Massachusetts, where the state also withheld dower, the so-called
widow’s thirds, from wives who had fled, the exigencies of revolution seem
to have unsettled the common law unity of husband and wife, the reigning
paradigm for marriage.
The Martin case, with its competing paradigms of marital unity and marital
individuality, provides a framework for considering the contradictions
unleashed by revolution and prefigures the pressures a nascent liberalism
would exert on the patriarchal model of marriage. In the eyes of the law, the
husband and wife were one person, and that person was the husband. This
was the renowned legal fiction of marital unity from which the wife’s legal
disabilities flowed. Inasmuch as the wife’s legal personality was subsumed
by the husband, she was designated in the law-French of the common law as
a femme covert, or covered woman, and her status in marriage was called her
coverture. But in the eyes of the Massachusetts confiscation statute, husband
and wife were two persons with individual choices regarding the Revolutionary
cause. Since attorneys for the state along with the legislators who
drafted the confiscation statute could envision wives as independent actors,
we can see how the anti-patriarchal impulses of the Revolution might be
directed toward marriage. That all four judges of the Massachusetts Supreme
Judicial Court voted to sustain James Martin’s claim against the state on the
basis of his mother’s coverture, however, exemplifies the widespread acceptance
of the English common law model of marriage by post-Revolutionary
jurists.
The English common law model of marriage as it was upheld by the Massachusetts
judiciary and as it had been outlined in Blackstone’s Commentaries
was much more than an emblem of the patriarchal order. It encompassed
those functions of marriage that judges and legislators would embrace long
after the Martin case. These included the definition of spousal obligations,
the regulation of sexual desire, the procreation of legitimate children,
and the orderly transmission of property. But although such earthy
and materialistic concerns have figured in family law from Blackstone’s
day to the present, Blackstone’s reading of marriage was problematic for
early nineteenth-century Americans, who were often uncomfortable with
his rationales for its legal rules. His blunt insistence that the primary purpose
of marriage was the creation of lawful heirs slighted the personal
happiness they associated with matrimony and the harmonious influence
they believed it exerted on the whole society. And while they affirmed the
principle of male headship, they could no longer do so on precisely the same
terms Blackstone had used in the law of persons.
The striking ambivalence in nineteenth-century responses to Blackstone’s
vision of marriage is instructive. Commentators could not accept him without
qualifications and caveats, but neither could they reject him entirely.
Editors writing glosses on the Commentaries and jurists creating new treatises
expressed the need to unshackle marriage somehow from the harshest
provisions of the common law. A growing interest in the welfare of illegitimate
children, for example, was at odds with Blackstone’s celebration of
the common law’s capacity to bar the inheritance of bastards. Those who
were distressed by the wife’s legal disabilities confessed incredulity at
Blackstone’s insistence that the female sex was a great favorite of the laws
of England. Yet critics typically envisioned changes in the legal status of
married women as exceptions to the provisions of coverture, which functioned
as an enduring component in the definition of marital obligations.
Blackstone’s depiction of the law of husband and wife, then, continued to
serve as a blueprint for understanding the rudiments of the marriage contract,
and the rudiments of the marriage contract were strikingly unequal
with regard to rights and responsibilities.
The wife’s legal disabilities as outlined in the Commentaries were formidable.
Any personal property she brought to marriage belonged to her
husband absolutely while the management of her real property went to him
as well. She could neither sue nor be sued in her own name nor contract
with her husband, because to do so would constitute the recognition of her
separate legal existence. Once married and under her husband’s coercion,
she was no longer even responsible for herself in criminal law. Indeed, the
only crack in the bond of marital unity according to Blackstone lay in a
theory of agency derived from the wife’s capacity to act on behalf of her
husband; because the husband was bound to supply her with “necessaries,”
she could contract with third parties in order to secure them.
The husband’s responsibilities in this paradigm of male headship were no
less formidable than the wife’s disabilities. In addition to the support of the
family, they included any debts his wife brought to the marriage. But while
the husband’s responsibilities were entirely in keeping with nineteenth-century
notions of manliness, his corresponding power over the person and
property of his wife was not easily reconciled with a companionate model
of marriage. If the wife was injured by a third party, the husband could
sue for the loss of consortium; she, by contrast, enjoyed no corresponding
right since the “inferior” owned no property in the company or care of
the “superior.” Similarly, because a wife acted as her husband’s agent, the
husband was responsible for her behavior, and just as he had a right to
correct an apprentice or child for whom he was bound to answer, so must he
have a comparable right to correct his wife. Blackstone’s insistence that wife
beating was an ancient privilege that continued to be claimed only by those
of the lower ranks could not have provided much solace to those who found
it antithetical to the notion of marriage as an affectionate partnership.
Post-Revolutionary Americans responded selectively to this legal model
of marriage in which the wife was obliged to serve and obey her husband
in return for his support and protection. Some elements, like the husband’s
obligation to support and protect the wife, coalesced with the goals of
an emerging white middle class devoted to a breadwinner ethos. Others,
like the husband’s right to chastise the wife, conflicted with enlightened
sensibilities. And still others, like the wife’s dower, her allotment from her
husband’s property if he predeceased her, emerged as an impediment to the
sale of real estate and the flow of commerce.
As the problem of dower suggests, the provisions outlined by Blackstone
for intestate succession, which spelled out custody rights as well as property
rights and carried the logic of coverture into the end of marriage, could be
controversial. If the wife predeceased the husband and had a surviving child,
the husband continued to hold all her realty as a tenant by the curtesy of
England, a right known as the husband’s curtesy. As the natural guardian
of the children, he was entitled to all the profits from her realty. The only
circumstance in which the deceased wife’s realty reverted to her family of
origin while the husband was alive was if there were no living children.
Although the husband’s right to the custody of the children was automatic,
a wife who survived her husband could lose custody by the provisions of his
will. As for her interest in his property, her dower consisted of a tenancy in
only one-third of his realty.
Still, even though dower was less generous than curtesy, it was one place
where the common law vigorously protected the wife’s right to some support.
During the marriage, the wife’s dower right loomed over the husband’s
realty transactions and provided her with some leverage. Because a portion
of all the realty a husband held at marriage or acquired during the life of
the marriage was subject to the wife’s dower, he could not sell it without
her consent and separate examination, a procedure designed to ensure she
was not coerced into giving up potential benefits. Dower was a fiercely protected,
bottom-line benefit of the English common law. A husband could
exceed the terms of dower in his will, but if he left less than the traditional
widow’s thirds, the widow could elect to take dower over the will, a
prerogative known as the widow’s right of election.
Here in broad strokes was the model of male headship and female dependence
embedded in the law of persons. The husband adopts the wife together
with her assets and liabilities and, taking responsibility for her maintenance
and protection, enjoys her property and the products of her labor. Giving
up her own surname and coming under his economic support and protective
cover, the wife is enveloped in a cloak of legal invisibility. Real
marital regimes diverged significantly from this formal English model on
both sides of the Atlantic before as well as after the American Revolution.
Thanks to exceptions carved out in equity, some wives managed to own
separate estates, and others enlarged the notion of agency beyond anything
Blackstone could have imagined. Changes, however, did not always benefit
the wife. Although dower in some jurisdictions expanded to include personal
property, the separate examination, a potential source of protection,
was increasingly ignored.
The deepest gulf between the Blackstonian paradigm and its post-
Revolutionary incarnation pivoted on the narrow purpose Blackstone
imputed to marriage, an institution viewed from the late eighteenth century
onward as a good deal more than a conduit for the transmission of
wealth. As James Kent, the so-called American Blackstone, put it in his
own Commentaries in the 1820s, “We may justly place to the credit of the
institution of marriage a great share of the blessings which flow from the
refinement of manners, the education of children, the sense of justice, and
the cultivation of the liberal arts.”1 Marriage, as Kent suggested, was a
capacious institution, a source of both individual socialization and national
improvement, and as it came to rest on a foundation of romantic love, its
purpose began to include the emotional satisfaction of the marital partners.
By the 1830s, middle-class Americans were celebrating marriage as
the realization of an intimate and impassioned bond between two uniquely
matched individuals who shared their innermost thoughts and feelings.
Coverture, an organizing principle in the post-Revolutionary gender
system, was in conflict with the great expectations attached to marriage. A
man’s freedom to marry and become head of a household clearly defined his
manhood, but a wife’s dependency and subservience did not satisfactorily
define her womanhood. The purpose of marriage always had included the
procreation of lawful heirs, but thanks to a more intimate and egalitarian
vision, it now encompassed the happiness and well-being of the husband and
wife as well as the nurture and education of the next generation of citizens.
Jurists, essayists, poets, and novelists idealized marriage as a loving and
harmonious partnership that embodied core national values and required
the participation of wives and mothers no less than that of husbands and
fathers.
It is precisely because marriage embodied core national values and because
the happy and orderly union of man and wife represented the happy and
orderly union of the new nation that those forms of social organization
regarded as threats to marriage were discouraged as a matter of public
policy. This was true for Native American kinship systems, which accepted
premarital sex, matrilineal descent, polygamy, and divorce. As white settlers drove Indians from their ancestral lands in the course of westward expansion, the Bureau of Indian Affairs offered property and citizenship to
“heads of households” who were prepared to give up their tribal affiliations
and non-Christian marital arrangements.
Public officials could at least imagine assimilating Indians who embraced
a Christian version of monogamy into the national polity; they did not
extend that vision to African Americans. Although slaves often “married,”
their unions were devoid of recognition by state authorities because prospective
spouses were regarded as without the capacity to consent. A master at
any time could sell one partner away from the other and make a mockery of
the Christian vow, “’til death do us part.” Indeed, so at odds were slavery and
the institution of marriage that a master’s consent to a slave’s legal marriage
was deemed an act of manumission, an assumption that would make its way
1 James Kent, Commentaries on American Law, 4 vols., 11th ed. (Boston, 1867), 2:134.
into arguments in the Dred Scott case. Moreover, although long-standing
interracial unions existed, especially in the antebellum South, they did so
informally and in the face of statutory bans on interracial marriages designed
to keep the number of such unions small.
Changes in the legal and social construction of domestic relations after
the Revolution were modest. As love and nurture and the needs of children
assumed greater import, a modified conception of coverture that upheld the
husband’s responsibilities and respected the wife’s contributions satisfied
the needs of an emerging middle class. One radical consequence of severing
the bonds of empire, as we will see, was the legitimization of divorce. At the
same time, lifelong monogamy, a metaphor for a harmonious political union,
was celebrated as the wellspring of public morality and national happiness.
Coverture, which exerted enormous legal and discursive power, continued
to sustain the gender order while the legal disregard for slave and interracial
unions continued to sustain the racial order.
II. TYING AND UNTYING THE KNOT
What constituted a legitimate union and how and for what reasons could
it be dissolved were questions impinging on the private lives of many
couples who viewed marriage in more fluid terms than state authorities.
These vexing questions made their way into the presidential campaign of
1828 when supporters of the incumbent, John Quincy Adams, accused
his opponent, Andrew Jackson, of having lived with his wife, Rachel, in an
illicit relationship. The Jacksonians dismissed the accusation as a petty legal
misunderstanding that had been unearthed for purely partisan purposes. In
their version of the story, Andrew Jackson had married Rachel Donelson
Robards in Natchez in 1791 on the presumption that she had been divorced
from Lewis Robards by the Virginia legislature, only to discover that what
they both believed was a formal divorce decree was merely an authorization
for Robards to sue for a divorce in a civil court. Robards did not pursue
this option until 1793 in the newly admitted state of Kentucky, which had
previously fallen under the jurisdiction of Virginia. In 1794, after a final
decree had been issued and the Jacksons came to understand they were not
legally married, they participated in a second marriage ceremony. Now in
1828 their innocent mistake was being exploited by men so desperate to
prop up the candidacy of the unpopular president that they were willing
to collapse public/private boundaries and ride roughshod over the intimate
recesses of the Jacksons’ domestic life.
The Adamsites proffered a more sinister version of the so-called Robards
affair, which they documented with Robards’s Kentucky divorce decree.
According to the decree, the defendant, Rachel Robards, had deserted the
plaintiff, Lewis Robards, and was living in adultery with Andrew Jackson.
Substituting the treachery of seduction for the innocence of a courtship
undertaken in good faith, they accused Jackson not only of the legal lapse
of living with his lady in a state of adultery but also of the moral lapse of
being the paramour in the original divorce action. The stealing of another
man’s wife, a crime that violated the sexual rights of the first husband, was
an indication of Jackson’s inability to honor the most elemental of contracts.
Raising the prospect of a convicted adulteress and her paramour husband
living in the White House, the Adamsites equated a vote for Jackson with
a vote for sin.
As debate in the campaign turned increasingly on the legitimacy of
probing a candidate’s intimate life in order to assess his fitness for public
office, it also exposed the tensions between the private nature of marriage
and the role of state intervention. The irregularity of the Jacksons’ union
raised a number of questions. Was their 1791 crime that of marrying and
participating in bigamy or that of not marrying and living in sin? To what
extent and with what degree of precision could the state define the making
and breaking of the marriage bond? How could it enforce its definitions
across a legally diverse and geographically expanding national landscape?
And given the prevailing pattern of westward migration into sparsely settled
and loosely organized territories, just how important was the letter of the
law in legitimating a union like theirs?
The Jacksonian defense rested on the assumption that in the case of the
Jacksons’ union, an overly formalistic insistence on the letter of the law was
unjust. Underscoring the frontier setting in which the pathological jealousy
and emotional instability of Lewis Robards played out, Jackson supporters
defended their candidate on the basis of his adherence to the spirit of the
law if not the letter. Here was a man who in marrying a deserted and
endangered woman showed he had the courage to do the right thing. In a
pamphlet designed to demonstrate community approval for his “marriage,”
prominent neighbors and friends attested Rachel’s innocence in ending her
first marriage and the general’s chivalry in saving her from Robards. The
propriety of the Jacksons’ union, as one Tennessee neighbor put it, was “the
language of all the country.”
But it was the letter of the law that concerned the supporters of Adams,
who argued that if the Jacksons had married in Natchez in 1791, they
would have produced proof of that marriage and provided it to the world.
The Adamsite preoccupation with legal formalism was essential to their
rationale for exposing the affair in the first place, and in their view the
fault-based foundation employed by the law in adjudicating breaches of
the marriage contract made it the perfect arbiter of the rules for conjugal
morality. To permit marriage to end as a matter of individual inclination
or even community approval was to threaten the entire social structure.
However important the Adamsites’ reservations were, they were not
enough to defeat the very popular Andrew Jackson. If the majority of the voters
could tolerate the prospect of the convicted adulteress and her paramour
husband living in the White House, it is probably because they refused to
see the Jacksons in those terms. Legal records suggest that the irregularities
in the Jacksons’ matrimonial saga were not so rare. Legislative petitions
indicate that numerous men and women tried to put a swift and inexpensive
end to their unions by appealing to extra-legal community codes and
turning to the legislature with the signed approval of friends and neighbors.
Others simply walked away from their unions and began marriage
anew. Court records of spouses who divorced themselves and “remarried”
and subsequently ran afoul of the law probably constitute the tip of the very
large iceberg of self-divorce and pseudo-remarriage.
Public debate over informal marriages and extra-legal divorces reflected
the nagging contradictions between state intervention and contractual freedom,
but even legal formalists who favored the closest possible state regulation
of marriage understood that the rules for exiting marriage were
far more important than those for entering it. As a result, when the legal
system moved toward redefining marriage and defining divorce, the terms
on which these parallel trends developed could not have been more different.
Whereas American courts came to recognize a so-called common law
marriage, a consummated union to which the parties had agreed, they were
not about to recognize self-divorce. Common law marriage put the best
face on an existing arrangement, legitimated children from the union, and
brought the husband under the obligation of support. Self-divorce, or even
too-easy divorce, menaced the social order.
Common law marriage originated in Fenton v. Reed, an 1809 New York
decision validating a woman’s second marriage so as to permit her to collect
a Revolutionary war pension, although her first husband had been alive at
the time of her remarriage. Elizabeth Reed’s story was a familiar one. She
claimed her first husband had deserted her, and hearing rumors of his death,
she took a new partner. The decision, attributed to James Kent, held that
although the second marriage was invalid until the first husband died, after
his death no formal solemnization of the second marriage was required for
its authenticity. Bigamy, which is what the second marriage was, may have
been one of the least prosecuted crimes on American statute books until
the Gilded Age. The innovation called common law marriage, moreover,
which freed weddings from state control and even licensing, had little to
do with the English common law and did not go unchallenged. Ultimately,
however, it triumphed, and its triumph exemplified the judiciary’s commitment
to an instrumentalist approach to domestic relations in which
the law functioned as a tool for socially desirable innovation, rather than
as a repository of inherited customs and precedents. Employing a distinctly
contractarian ideology, courts and legislatures united to endorse a private
construction of matrimony in order to ensure that the marriage was valid for
those who wanted and needed it to be valid. In an effort to protect marriage
as a public institution, the law endorsed a private and voluntary version of
its legitimization.
Divorce was a different matter entirely. Resting as it did on the concept of
a serious breach in the marriage contract, it warranted a far more determined
use of state authority. Jurists could not advocate divorce by mutual consent
much less by unilateral decision, because the underlying justification for
rescinding an innocent spouse’s marriage promise hinged on the assumption
that the reciprocal promise had been broken by the guilty spouse. Fault
played a pivotal role in the legal construction of divorce. Even the omnibus
clauses in early divorce statutes, catchall phrases providing broad judicial
discretion in decreeing divorces, assumed a fault that was too unique or
elusive to be defined by statute, but that could be readily apprehended by
the judiciary.
The statutory implementation of fault divorce (there was no other kind
until well into the twentieth century) in the wake of the American Revolution
had been swift and widespread. Colonies whose divorces had been
overruled by the Privy Council in the political turmoil of the 1770s provided
for divorce in their new state statutes. Other states followed suit, and
by 1795 a disaffected spouse could end a marriage in a local circuit court
in the Northwest Territory. Grounds varied widely, and some states limited
decrees to the jurisdiction of the legislature. Nonetheless, by 1799 twelve
states in addition to the Northwest Territory had recognized the right of
divorce.
In instituting divorce through spare and simple statutes, eighteenth-century legislators seem to have embraced a solution without fully understanding the problem. Not only did they neglect to address some thorny
substantive and procedural issues, but they could not anticipate the number
of spouses who would come to rely on the divorce process. Fault, the
legal bedrock of divorce law, was difficult to prove and often contradictory
to litigants’ best interests. For those who wanted the terms of their
marital dissolutions to be as easy as possible, mutual consent was appealing
because it was swift and inexpensive and comported nicely with the
pursuit of happiness. It is not surprising that nineteenth-century commentators,
who were more experienced with the divorce process than their late
eighteenth-century counterparts, read a great deal more into divergent legal
grounds. The nineteenth-century advocates of a liberal divorce code argued
that narrow grounds strictly construed encouraged both lying in petitions
and extra-legal solutions. Their opponents countered that broad grounds
liberally construed subverted the biblical one-flesh doctrine and marriage
itself.
In retrospect it is evident that the decision to accept divorce in the first
place regardless of its legal particularities constituted a paradigmatic revolution
in marriage. The old common law fiction that the husband and
wife were one and the husband was the one could no longer exert the
same authority once a wife could repudiate her husband in a court of law.
Perhaps because it was assumed that divorce would be rare, its initial acceptance
proved less controversial than the working out of its particularities.
In any case, on the threshold of the nineteenth century the notion that
divorces could be decreed for egregious violations of the marriage contract
had acquired statutory legitimacy, and it had done so with remarkably little
opposition.
Divorce subsequently became the lightning rod for a wide-ranging debate
about marriage and morals that reverberated through the nineteenth century
and beyond. Jurisdictional diversity was a big part of the problem.
As litigants shopped for more hospitable jurisdictions, interstate conflicts
became inevitable. On the one hand, the stubborn localism of domestic
relations law in the face of jurisdictional contests reflected a deep distrust of
centralized authority over the family. On the other hand, the dizzying array
of grounds and procedures embodied a disturbing range of moral choices.
By mid-century, many states, especially those in the West and the area
now called the Midwest, recognized adultery, desertion, and cruelty as grounds, with cruelty and its shifting definitions remaining controversial. Most states at this juncture, moreover, including new states entering the Union,
provided for divorce in civil courts. Yet striking exceptions persisted. New
York, for example, recognized only adultery as a ground, Maryland limited
divorce to the jurisdiction of the legislature, and South Carolina refused to
recognize divorce at all. Legislative decrees, which ebbed under the weight
of mounting criticism and state constitutional prohibitions, did not disappear
entirely from states providing for divorce in the courts, and residence
requirements and their enforcement varied from state to state.
Legal disparities exposed American divorce as an incoherent amalgam
of precepts and precedents based on the frequently conflicting foundations
of the Judeo-Christian tradition and liberal contract theory. In a
staunchly Protestant nation, albeit of competing sects, divorce represented
the disturbing amplification and diversification of an action derived from
the English ecclesiastical courts. At issue was which of the many divorce
statutes reflected Protestant morality? The rules for ending marriage could
run anywhere from South Carolina’s decision to make no rules to Iowa’s
decision via an omnibus clause to abide by whatever rules the judiciary
deemed appropriate. By the time Joel Prentice Bishop’s 1852 treatise on
marriage and divorce appeared, the breadth of that spectrum was problematic.
As Bishop put it, at one extreme there was the view that marriage was
indissoluble for any cause; it was favored in modern times as “a religious
refinement unknown to the primitive church.” At the other extreme, there
was the view that marriage was a temporary partnership to be dissolved at
the will of the two partners; it was held not only “by savage people, but
some of the polished and refined.”2
Migratory divorce, an action in which a spouse traveled out of state to
secure a decree, demonstrated both the ease with which litigants could
manipulate the divorce process and the readiness of the judiciary to uphold
the sovereignty of local law. As a result, the divorce standards of a strict jurisdiction
like New York were endangered by the laxity of a liberal jurisdiction
like Vermont. The practice of migratory divorce, which emerged early in
the century between neighboring states, only intensified as transportation
improved. By the 1850s, Indiana, with its loose residence requirements and
broad grounds, became a target for the critics of migratory divorce. Once
railroad lines were united in a depot in Indianapolis, the clerk of the Marion
County Court claimed he received at least one letter a day inquiring if a
disappearing spouse had applied there for a decree. These roving spouses,
husbands more often than not, became emblems for the hypocrisy of the
divorce process and the immorality of its rules.
Migratory divorce, however, was nowhere near as important a check on
each state’s regulation of matrimony as the indifference or resistance of
resident husbands and wives. State efforts to control marriage and divorce
were not always successful in the face of a couple’s determination to act as
if they were free to govern their own marital fate. Some spouses agreed to
end their marriages in ways that exhibited little reverence for the principle
of fault; others participated in contractual separation agreements despite
the antipathy of the judiciary; and countless others simply walked away and
started married life anew without any reference to or interference from the
state. These widespread extra-legal practices confounded the tidy categories
in the law of marriage and divorce. Yet legal constructions of marriage and
divorce grew ever more important not only because they could help resolve
property and custody conflicts and delineate the married from the unmarried
but also because by mid-century they were emerging as compass points for
the moral course of the nation.
2 Joel Prentice Bishop, Commentaries on the Law of Marriage and Divorce and Evidence In
Matrimonial Suits (Boston, 1852), chap. 15, sec. 268.
III. THE MARRIED WOMEN’S PROPERTY ACTS
When Thomas Herttell introduced a married women’s property bill in the
New York legislature in 1837, he supported it with an impassioned speech.
In a year of financial panic marked by numerous insolvencies, one strand
of his argument revolved around the instability of the antebellum economy.
Long an advocate of debtor relief, Herttell addressed the trend toward
boom-and-bust economic cycles and the problem posed by an improvident
husband who wasted his wife’s patrimony on high-risk speculation. Thanks
to the husband’s total control of marital assets, a wife’s property, he averred,
could be lost at the gaming table or spent on alcohol while she was immobilized
by her contractual incapacity. The second strand of his argument,
an assault on the anachronisms and fictions of the common law in general
and on Anglo-American marital regimes in particular, was largely legal in
its thrust. He warned that the married woman’s trust, the equitable device
created to bypass some of the restrictions of coverture and to protect the
wife’s property from the husband’s creditors, was riddled with gaps and
ambiguities. In an effort to garner support for his bill, he changed its title
from an act to protect the rights and property of married women to an act
to amend the uses, trusts, and powers provisions in the New York Revised
Statutes.
Although debtor relief and trust reform undoubtedly met with some
legislative approval, the third strand of his argument, a boldly rights-conscious
diatribe against the wife’s dependent status at common law, put
him in radical territory. “Married women equally with unmarried males
and females,” he proclaimed in an appeal to the familiar triad of Anglo-
American rights, “possess the right of life, liberty, and PROPERTY and are
equally entitled to be protected in all three.”3 When Herttell asserted the
“inalienable right” of married women to hold and control their property and
insisted that any deprivation of that right was both a violation of the Bill of
Rights and a symptom of the unjust exclusion of women from the political
process, he was upending the gender rules of classical liberal theory. Liberal
theorists from John Locke to Adam Smith never regarded wives as being as free as
their husbands. On the contrary, they at once assumed and affirmed the
wife’s subordination and counted marriage together with the benefits of the
wife’s services among the rights of free men. Abolitionism, however, with
its appeals to the self-ownership of free men generated notions about the
self-ownership of married women that were antithetical to the principle of
3 Thomas Herttell, Argument in the House of Assembly of the State of New York the Session of
1837 in Support of the Bill to Restore to Married Women “The Right of Property,” as Guaranteed
by the Constitution of this State (New York, 1839), 22–23.
coverture. It was precisely this synergy between critiques of bondage and
critiques of marriage that made its way into Herttell’s remarks. Because the
wife at common law was constrained to function as a servant or slave to her
marital lord and master, he observed, she was herself a species of property.
Only her husband’s inability to sell her outright saved her from the status
of unqualified slavery.
That Herttell made his remarks in a state that would launch the women’s
rights movement in 1848, the same year it passed a married women’s property
statute, illustrates how the nascent drive for women’s rights converged
with the reform of marital property. His speech, printed in a pamphlet
financed by a bequest from his wife’s will, became one in a series of ten popular
pamphlets distributed by the women’s movement in the years before the
Civil War. But married women's property reform also represented narrowly
economic motives as exemplified in early Southern statutes. The Mississippi
statute of 1839, which preceded the first New York statute by nine
years, insulated the slaves a wife owned at marriage or acquired by gift or
inheritance from the reach of her husband’s creditors. Mississippi’s failure to
give the wife independent control over her human property meant that the
family remained a unified community of interests ruled by a male patriarch.
The desire to maintain the family as a male-headed community of interests
was not limited to the South or to common law jurisdictions. In civil law
jurisdictions like Louisiana, Texas, and California, which recognized marital
assets as a community of property owned by both spouses, the control and
management of the community typically went to the husband. The notion
that the interests of husbands and wives were not the same, or were even
antagonistic, alarmed legislators across the nation, who tended to equate
investing wives with legal and economic independence with introducing
discord into the marital union. Wives who were competitive rather than
cooperative were depicted as amazons in the marketplace who subverted
the sacred bond of matrimony. In the first phase of reform, then, most states
failed to give women explicit control over their property. The effect of these
early statutes, which were focused on the property a woman acquired by
gift or inheritance, was to transform the married woman’s separate equitable
estate into a separate legal estate. As a result, the statutes democratized an
option once reserved for the wealthy and legally sophisticated by rendering
it accessible, but they did not significantly alter coverture.
The second phase of reform encompassed a married woman’s earnings
and recognized the wife as a separate legal actor. The New York statute of
1860 extended the concept of a separate estate to include property from a
wife’s “trade, business, labor or services” and empowered her to “bargain,
sell, assign, and transfer” it. The Iowa statute of 1873 permitted a wife to
receive wages for her “personal labor” and to maintain a legal action for it in
her own name. Between 1869 and 1887 thirty-three states and the District
of Columbia passed similar statutes. In moving beyond inherited property
to include a wife’s individual earnings and in empowering the wife to sue
and be sued with regard to her separate property, the second phase of reform
clearly undermined the common law fiction of marital unity.
Here again judicial hegemony over the law of husband and wife was
evident, but in contrast to the earlier instrumentalism displayed in the
recognition of common law marriage, the adjudication of the earning acts
embodied a turn to formalism in which judges weakened or nullified a
married woman’s right to earnings by invoking old common law principles
as self-contained, inflexible, and even scientific. At issue was the definition
of the wife’s separate earnings, which typically came from labor performed
at home, such as taking in boarders, producing cash crops, raising chickens,
and selling eggs. The judiciary persistently classified such activities as
coming under the wife’s traditional obligation of service. In a suit for tort
damages brought two years after the Iowa earnings act, the court upheld a
husband’s right to all of his wife’s household labor. Because the customary
ways in which women earned money tended to be excluded from the reach
of the earnings acts, a wife’s labor at home on behalf of third parties fell
within her obligation to serve as her husband’s “helpmeet.” When a husband
takes boarders into his house or converts his house into a hospital for
the sick, ruled the New York Court of Appeals in 1876, the wife’s services
and earnings belong to the husband. Even a wife’s labor in a factory could
be construed as belonging to the husband in the absence of evidence the
work was performed on her separate account. Coverture, then, was challenged
but far from eradicated by the second wave of legislation; in fact
its legal authority remained formidable. As one member of the judiciary
put it when he excluded rent from a wife’s real estate from the category
of a separate estate, “The disabilities of a married woman are general and
exist in common law. The capabilities are created by statute, and are few in
number, and exceptional.”4
Because courts tended to treat the wife’s legal estate, like her equitable
one, as exceptional, they continued to place the wife under the husband’s traditional
power and protection. What were third parties – creditors, debtors,
retailers, and employers – to assume? That, in the absence of indications that
a married woman’s property fit into this exceptional category, she came
under the disabilities of coverture. There was also a quid pro quo behind
the husband’s continued authority. He enjoyed his marital rights by virtue
of his marital duties, and the duty to support remained his, regardless of the
amount of his wife’s earnings or assets. Because he was the legally designated
4 Nash v. Mitchell, 71 N.Y. 199 (1877), 203–4.
breadwinner and therefore responsible for his wife’s “necessaries,” he had a
right to her services, earnings, and “consortium” (affection, company, and
sexual favors). The breadwinner ethos grew ever more important in a market
economy in which home and work were separated, the wife’s household
labor was devaluated, and her economic dependence was palpable.
The market yardstick of value, which afforded little room for recognizing
the value of the wife’s household services, was reinforced and updated in
tort law. Wrongful death statutes, passed in the second half of the century,
reproduced the model of husbands working outside the home for wages
and wives remaining at home and economically dependent. Some states
barred recovery of damages by a husband for his wife’s wrongful death,
thereby inverting the customary gender asymmetry of the common law.
In states that permitted recovery by husbands, damages were limited since
establishing the value of domestic services was more difficult than establishing
the value of lost wages. Wifely dependency was the legal norm in
torts as well as in property, and the prevailing ground for recovery in this
nineteenth-century innovation in tort law was the wife’s loss of her husband’s
support and protection. It is noteworthy that this change in tort
law explicitly addressed and implicitly prescribed a wife’s dependence at
precisely the time wives were acquiring new forms of legal independence.
Coverture was transfigured in the second half of the nineteenth century,
but the authority of the “superior” and the dependency of the “inferior”
so prominent in the contours of Blackstone’s law of persons remained a
leitmotif in American marriage law. In a sanitized and sentimentalized
Victorian incarnation, coverture continued to define what a man should be
as a husband and what a woman should be as a wife. Yet one enduring
legacy of the drive for married women’s property rights was the conflicting
visions of marriage it unleashed. Although the drive began as an effort to
clarify debtor-creditor transactions, protect the family from insolvency, and
recognize the waged labor of wives, it evolved into a contest that spiraled
far beyond the provisions for marital property. And what made the contest
so acrimonious was that every participant identified the legal construction
of marriage as the foundation of the gender system.
Conservatives anxious to hang on to the traditional configuration of marriage
underscored the protection and “elevation” it afforded women and the
stability and prosperity it brought to the nation. Where conservatives saw
protection, women’s rights advocates saw subjection, which they regarded
as a symptom of male depravity and the source of women’s political exclusion.
Giving husbands property rights in both their wives’ assets and bodies,
they reasoned, made marriage the key institution through which men established
their authority over women. For utopian socialists, the problem with
traditional marriage also pivoted on the evil of men owning women, but
they viewed it not so much as a symptom of male depravity as a consequence
of the whole unjust system of private property.
Liberal women’s rights advocates like Elizabeth Cady Stanton, however,
believed that property rights, which were part of the problem, could be
part of the solution if they invested wives with the same self-ownership
and independence society had granted to free, white men. No matter that
self-ownership was in conflict with the protections afforded by coverture: it
was difficult for the law to compel a delinquent husband to provide them.
A woman with a good husband might thrive under his protection, but
thanks to the codes of an androcentric legal system, a woman with a bad
husband could find herself destitute. A wife’s well-being, in short, should
not depend on the benevolence of her husband.
Although this was an unsettling argument in the heyday of female domesticity
and the breadwinner ethos, it invoked the rights associated with the
modern liberal state. When women demanded property rights in the name
of those private islands of self-ownership that were the hallmark of liberal
individualism, they were not only rejecting the doctrine of marital unity,
they were exploring and exposing the way provisions in the marriage contract
excluded them from participation in the social contract. The radical
challenge provided by using the argot of classical liberal theory to subvert
the legitimacy of its own gender rules was not limited to women’s rights
pamphlets; it radiated into the mainstream of public discourse where it
coalesced with the ideology of abolitionism and began to erode the moral
authority of coverture.
IV. THE BEST INTERESTS OF THE CHILD
The contractualism at the root of the marriage bond was more muted in the
bond between parent and child. The ideal of self-ownership so evident in the
women’s rights movement could hardly be applied to children, who were
in fact unavoidably dependent. Yet changing views of children contributed
to the legal transformation of the family from a male-headed community of
interests to a cluster of competing individuals. Children achieved a measure
of legal individuality in a series of shifts that at once reflected and shaped
the transition in their status from mere appendages of a father’s will to
discrete beings with special needs. Mothers, if they were morally fit and
economically secure, were increasingly designated as the ones whom nature
had endowed to meet those special needs.
The widely publicized 1840 d’Hauteville suit – a bitter contest over the
custody of a two-year-old son – is a case in point. Characterizing the mother,
the judge declared, “her maternal affection is intensely strong, her moral
reputation is wholly unblemished; and . . . the circumstances of this case
render her custody the only one consistent with the present welfare of
her son.”5 Denial of Gonzalve d’Hauteville’s challenge to Ellen Sears
d’Hauteville’s custody of their only child was by no means the only resolution
available to the court. Given the wife’s refusal to return to her husband’s
ancestral home in Switzerland after giving birth to their son in Boston, the
ruling was incompatible with a father’s presumptive right to custody, as
well as the fault-based premise for custody in divorces and separations.
Despite the ruling, the rights and entitlements of fathers were theoretically
in force in 1840. A mother’s voluntary separation from her husband
without cause typically blocked her claim to custody, and fathers in most
jurisdictions retained the right to appoint a testamentary guardian other
than the mother. It is precisely because American family law in 1840 supported
the principle of paternal authority that William B. Reed, Gonzalve
d’Hauteville’s attorney, built his case around the sovereignty of the husband
as it was spelled out in Blackstone's Commentaries. Still, as Reed must have
sensed when he reviewed the fluid, evolving nature of American family law,
depending on the legal fiction that the husband and wife were one and the
husband was the one was no longer enough in a culture that valorized relations
based on affection and elevated the bonds of family to new emotional
heights. Appealing to the tender ties of parenthood, Reed imbued Gonzalve
d’Hauteville with a love no less vibrant or unselfish than that of the mother.
No one can say, he argued, “with whose affections a child is most closely
entwined, and whether the manly fibres of a father’s heart endure more or
less agony in his bereavement than do the tender chords which bind an
infant to a mother’s breast.”6
Ironically, in using the image of an infant at its mother’s breast in an
effort to equate fathers with mothers, Reed was employing one of the most
evocative tropes of the day and one that esteemed a mother’s “natural”
capacity for nurture at the expense of a father’s traditional authority. The
intensifying emphasis on a child’s innocence and vulnerability and the
Victorian conception of childhood as the critical stage in an individual’s
moral development contributed to the creation of new institutions, the
most important of which was the common school. Others included orphan
asylums, children’s aid societies, and various homes of refuge all devoted to
the cause of child welfare. The heightened focus on child nurture, which
placed mothers at the very center of familial relations, found its way into the
legal system. Although the father’s common law rights were still presumed,
as the d’Hauteville case with its judicial homage to motherhood indicates,
that presumption was nowhere near as strong at mid-century as it had once been.
5 Samuel Miller, Jr., Report of the d’Hauteville Case (Philadelphia, 1840), 293.
6 Miller, Report, 195.
Torn between applying the common law rights of the father and “the
best-interests-of-the-child” doctrine, the judiciary moved toward favoring
the mother in custody battles. On the assumption that children who were
young or sickly were in particular need of a mother’s care, maternal custody
also rested on a tenet that came to be called “the tender years doctrine.”
Judges tied custody to gender as well as to age so that boys beyond a certain
age might go to their fathers while girls of all ages tended to be placed
with their mothers. Believing in fundamental differences between mothers
and fathers, judges essentialized women as nurturers and, in so doing, were
predisposed to place children in their care.
Legislatures also participated in the trend toward maternalism. Some
states enacted statutes authorizing women to apply for a writ of habeas corpus
to adjudicate the placement of a child, a move that turned custody from
a common law right into a judicial decision. Notions of spousal equality
associated with a loving and companionate model of marriage informed the
statutory language used in the reform of custody. The Massachusetts legislature
pronounced the rights of parents to determine the care and custody of
their children “equal.” In 1860, largely as a result of sustained campaigns
by women’s rights advocates, the New York legislature declared a married
woman the joint guardian of her children, with the same powers, rights,
and duties regarding them as her husband.
Spousal equality and gender-specific roles were not mutually exclusive. In
the drive for maternal custody, women’s rights advocates mixed demands
for equality with essentialist assertions of difference in almost the same
breath. But as a decision rendered in the wake of the New York statute
equalizing custody illustrates, neither arguments for equality nor for difference
were effective when judges were determined to resist what they regarded
as the excessive democratization of the family. When Clark Brook applied
for a writ of habeas corpus for the return of his son from his separated wife,
it was granted because she had left him without his consent and he had
not mistreated her. In an appellate court ruling that relied on assumptions
in the law of persons and avoided the language of Victorian maternalism,
Justice William Allen insisted that the underlying quid pro quo in marriage
had not been abrogated by the statute. Because a husband was bound to
support his children, he enjoyed a right to their labor. If the new law had
truly authorized the wife’s custody, it also would have imposed on her the
responsibility of support. Allen read the law as giving the wife a custody
right she might exercise with her husband while she was living with him,
but not away from him or exclusive of him.
The statute, which Allen claimed did not destroy the husband’s traditional
marital rights at the option of the wife, was repealed in 1862. That
is not to say the courts reverted to paternal rights. On the contrary, the
trend in decisions moved inexorably toward maternal custody. Maternal
custody, however, was achieved not so much as a matter of maternal rights
but as a matter of judicial discretion, which paved the way for enlarging
state authority over the family. In the nineteenth century, courts replaced
the father’s absolute custody rights with their own discretionary evaluation
of the child’s welfare, thereby instituting a modern relationship between
the family and the state. The common law was routinely cited and then
frequently overruled in the name of “tender years” or “the best interests of
the child.” The ultimate authority over the family, however, was now the
judiciary.
One exception to the purely discretionary nature of maternal rights was
the changing law of bastardy, which gave custodial rights to the mother
and improved the degraded common law status of the illegitimate child. At
common law, as Blackstone noted, a bastard was fatherless as far as inheritance
was concerned. He could inherit nothing since he was viewed as the
son of nobody and was therefore called filius nullius or filius populi. To regard
him otherwise, as the civil law did by permitting a child to be legitimized
at any time, was to frustrate the main inducement for marriage: to have
legitimate children who would serve as the conduits for the perpetuation
of family wealth and identity. Those without property needed to marry and
have legitimate children in order to fix financial responsibility and ensure
that their offspring would not become public burdens.
American law departed dramatically from the common law provisions for
bastardy. Over the course of the nineteenth century courts and legislatures
alike designated the illegitimate child as a member of the mother’s family
and gave mothers the same custodial rights the common law had conferred
on married fathers. Criminal punishment for producing an out-of-wedlock
child disappeared, and although putative fathers were expected to support
the child, they lost any claim to custody. As a New Hampshire court
ruled in 1836, the father could not elect to take custody of his child instead
of paying for the child’s support, an option that had been available in early
America. Mothers of illegitimate children enjoyed a special legal status so
long as they remained unmarried to the father and could provide support
for their children. As a consequence of the legally recognized bond between
mother and child, by 1886 thirty-nine states and territories provided the
out-of-wedlock child with the right to share in a mother’s estate. Yet the
nineteenth-century American rejection of the common law stigma imputed
to bastardy had its limits; in many jurisdictions an illegitimate child could
not share in the estate of the mother’s kin or defeat the claims of legitimate
children. The judiciary, meanwhile, tried to legitimize as many children
as possible by recognizing common law marriages and even marriages that
were under an impediment. By 1900 more than forty states declared that
children of voided marriages or marriages consummated after their births
were legitimate.
The enhanced status of the mother of the illegitimate child, and indeed of
the child itself, could be undone by financial need. The close bond in the newly
legalized family unit of mother and child, like the corporate unity in the
traditional family, protected the family from state intervention only as
long as there was economic support. Humanitarian attitudes toward all
children – be they legitimate or illegitimate – could not prevent overseers
of the poor from removing a child from the family and placing it in or
apprenticing it to another family. This could occur at ages as young as four
or five. Two contradictory impulses were at work in the legal construction
of bastardy: one was the humanitarian and egalitarian desire embedded in
Enlightenment thinking and spelled out in accord with Thomas Jefferson’s
plan in a 1785 Virginia inheritance statute to make all children equal in
status; the other was the age-old concern for the taxpayer’s pocketbook. It
is not surprising that some elements of bastardy law reflected the anxiety
of local taxpayers or that bastardy hearings revolved around the putative
father’s obligation of support. Putative fathers were often subject to arrest
and property restraints until they agreed to provide support. And although
some reformers argued for eradicating all distinctions between legitimate
and illegitimate children, the fear of promiscuity and the threat it posed
to the institution of marriage blunted the full realization of that goal. By
the early twentieth century needy illegitimate children came increasingly
under the purview of welfare agencies and social workers at the expense of
the intimate bond between mother and child created in the Early Republic.
The other critical shift regarding children and their legitimacy was
the mid-century formalization of adoption. Adoption law, in contrast to
bastardy law, created a family devoid of blood ties. Adoption had taken
place prior to statutory recognition through informal arrangements and
private legislative acts. The Massachusetts adoption statute of 1851, however,
which became the model for many other states, provided for the transfer
of parental authority to a third party, protected the adoptee’s inheritance,
and conferred on adopters the same rights and responsibilities as biological
parents. While the aim of that statute was to make the child’s relationship
to its adoptive parents the same as that of a biological child, not all states
followed that precise pattern. Even in those that did, the judiciary often
made distinctions between natural and adopted children.
In decisions that echoed the judicial distinctions regarding the inheritance
rights of illegitimate children, judges frequently defeated the stated
intent of statutes to make adopted and biological children equal. Though
legislatures initiated formal adoption, it was the courts that monitored it
and shaped it. In circumstances where the adoptive child competed for an
inheritance with biological offspring, courts tended to favor the biological
offspring, making the adopted child’s status “special” rather than equal.
Adoption, after all, was unknown at common law and was therefore subject
to strict construction. And in the process of permitting artificial parents
to take the place of natural ones and of making the judiciary the arbiter
of parental fitness, adoption provided yet another pathway for the state to
intervene in the family. Of course, most intact and self-supporting families
avoided the scrutiny of the state. But in adoption, custody awards, and the
law of bastardy, the doctrine of “the best interests of the child” transformed
parenthood into a trusteeship that could be abrogated by the state through
judicial decision making.
V. RECONSTRUCTION AND THE FREEDMAN’S FAMILY
Despite the growing authority of the state in specific areas of domestic relations,
the paradigmatic legal unity of the family not only coalesced with
the celebration of the household as a harmonious sanctuary from the outside
world but also served, in fact, as a buffer against government interference.
Family unity, however, depended on the hierarchical ordering of its members.
It is noteworthy that, before the Civil War, marriage and slavery were
the two institutions that marked the household off from the state and identified
its inhabitants as either heads of households or dependents. Given all
the evocative analogies between slavery and marriage that dotted antebellum
culture along with the shared foundation of the two institutions in the
law of persons, it was difficult to consider slavery after the war without considering
marriage or to address race without addressing gender. Although
slavery was involuntary and marriage was contractual, both were domestic
relations, and the parallels that had been invoked by feminists and slaveholders
for their competing agendas re-emerged during Reconstruction. As
the Reconstruction amendments revolutionized the relation between the
states and the federal government, they turned the complex intertwining
of race and gender into a permanent feature of American constitutional discourse.
From Civil War pensions to the policies of the Freedmen’s Bureau,
moreover, the federal government began to demonstrate a growing presence
in the institution of marriage.
The debate over the Thirteenth Amendment exemplifies the new confluence
of gender and race at the constitutional level. When a Democrat in the
House protested the amendment’s failure to compensate slaveholders for
the loss of their slaves, he reminded his colleagues of the prerogatives they
enjoyed as husbands, fathers, and employers. A husband’s right of property
in the services of his wife, he insisted, is like a man’s right of property in the
services of his slave. In another appeal to patriarchal prerogatives, Senator
Lazarus Powell, Democrat of Kentucky, warned that the original wording
in the amendment making all “persons” equal before the law would impair
the powers held by male heads of households. Republicans also registered
their concern with the gender-neutral language in the amendment, which
Michigan senator Jacob Howard noted with alarm would make the wife as
free and equal as her husband. When Charles Sumner, the Senate’s staunchest
abolitionist, withdrew his support from the inclusive language in the
original draft, it signaled the Congressional commitment to the traditional
contours of marriage.
Congress wanted only to extend the marriage contract as it presently
existed to former slaves, a policy the wartime government had already put
into place for the first slaves to reach the Union lines. Able at last to make
labor contracts, freedmen and freedwomen were also able to make marriage
contracts, a long-denied civil right that constituted a sweeping change
in their status. In refusing to recognize the autonomy of the black family,
slavery had rendered it open to disruption, separation, and the sexual whims
of the master. As Harriet Beecher Stowe demonstrated to the world in Uncle
Tom’s Cabin, the separation of mother and child was one of slavery’s most
horrific transgressions. But it was fathers who were pivotal in the legal
transformation embodied in the right to marry, since always implicit in
the male slave’s degradation was his inability to control and protect the
members of his own family. Thus it was to freedmen as heads of households
that the Freedmen’s Bureau directed its reforms, including its original plan
to transform ex-slaves into property holders by giving them land.
In the summer of 1865, the Freedmen’s Bureau issued “Marriage Rules,”
which authorized procedures for both dissolving and legalizing the unions of
former slaves and declared an end to extra-legal unions. In the following year
Southern states passed statutes and in some cases constitutional amendments
that either declared the unions of former slaves legal or required their formal
registration; extra-legal cohabitation was typically declared a misdemeanor
punishable with fines. Legal marriage, however, was a radical departure from
the norms of the antebellum plantation. Given the enforced instability
of slave unions, the marital regimes devised by slaves often consisted of
informal marriage, self-divorce, and serial monogamy. Because marriage
was a civil right and a potential source of familial protection, many couples
rushed to formalize their unions immediately after the war; in 1866 in North
Carolina alone, where registration was mandated, more than 9,000 couples
in seventeen counties attested their readiness to tie the knot officially. But
defining which union was the legal one could be problematic, and disputes
surfaced in the courts in the form of inheritance, bigamy, and divorce suits.
Some freedpersons opted to resume prior unions rather than formalize their
current union, whereas others simply failed to comply with either the rules
or values of the new marital regimes. Those lower-class whites who, like
some Northern counterparts, believed that the partners in a union, and not the
state, were in charge of their marital arrangements failed to comply as well.
Providing former slaves with the right to marry carried different meanings
for different groups. For Reconstruction Republicans, as the agenda
pursued by the agents of the Freedmen’s Bureau indicates, it represented
the formation of male-headed nuclear families and was inextricably linked
to the party’s paramount goal of turning former slaves into wage-workers.
Accordingly the labor contracts drafted by the Bureau supported coverture
by awarding a wife’s wages to her husband even as it recognized the
freedman’s wife as a wage-worker. For freedmen, the right to marry was
a mark of manhood and a symbol of citizenship, and their authority over
the family unit carried the promise of insulating its members from outside
interference. The new integrity that formal marriage conferred on the family
became a legal tool for keeping children out of involuntary apprenticeships.
Asserting their rights as heads of households, freedmen regularly went to
court to block the implementation of apprenticeship provisions in Black
Codes. For former masters, who had once counted slaves as members of their
households, marriage was a way to assign economic responsibilities since
the state had assumed the authority they had once held as slaveholders but
not their obligations. Placing the unions of former slaves under the aegis of
the state also afforded ex-Confederates a pathway for consolidating white
power by instituting bans on interracial marriages.
As for freedwomen, who were urged to submit to the bonds of matrimony
as they were liberated from the bonds of slavery, the right to marry was a
mixed blessing. Those who gratefully accepted the privileges of white womanhood
gave up full-time work for full-time wifehood and motherhood. For
most, labor outside the household was an economic requirement and not a
choice. Wifely subservience, however, was a choice, and marital contestations
in county court records reveal that freedmen sometimes anticipated a
deference their wives were not prepared to give. By virtue of their experiences
as slaves, freedwomen were neither as acculturated to nor as accepting
of the uneven distribution of marital power as middle- and upper-class
white women. Yet to pursue a suit for domestic violence in legal regimes
that still rested on the assumption that the husband represented the wife,
they were compelled to cast themselves as helpless victims whose spouses
had overstepped the farthest limits of patriarchal power.
The most pernicious constraints emanating from state control over the
unions of freedpersons consisted in using marriage laws to uphold “racial
purity,” a policy that impinged on both sexes and prevailed in theory on
both sides of the color line. Its real effect was to reinscribe racial hierarchies.
Statutory prohibitions of “miscegenation,” a word coined in 1864 that came
to stand for any interracial sexual union, flew in the face of a contractual
conception of matrimony and its attendant protections. Interracial couples
battled anti-miscegenation laws by appealing to the Civil Rights Act of
1866 and the equal protection clause of the Fourteenth Amendment. Yet
apart from two short-lived exceptions, they failed in all fifteen of the suits
that reached the highest state appellate courts. Marriage, intoned the Supreme
Court of North Carolina in 1869, although initiated by a contract, was a
“relation” and an “institution” whose ground rules had never been left to
the discretion of the spouses. Inasmuch as whites and blacks alike faced the
very same prohibitions, the court continued, such laws did not favor one
race over the other. The court also defined marriage as a “social relation,”
thereby placing it beyond the ken of the rights enumerated in the Civil
Rights Act and recognizing that full social equality between the races had
never been a part of the Republican vision of Reconstruction.
Drawing on a national judicial trend to treat marriage as something of
a hybrid, Southern courts quelled challenges to anti-miscegenation laws
largely by defining marriage as a status. This was precisely the tack taken
by the Texas Court of Appeals in 1877 when Charles Frasher, a white man
wedded to a black woman, appealed his conviction on the grounds that such
statutes were abrogated by the Fourteenth and Fifteenth Amendments and
the 1866 Civil Rights Act. In defining marriage as a status, the court determined
that the regulation of marriage was properly left to the discretion
of the state of Texas. “[I]t therefore follows as the night follows day,” it
declared, “that this state may enforce such laws as she may deem best in
regard to the intermarriage of whites and Negroes in Texas, provided the
punishment for its violation is not cruel or unusual.”7 Similar bans, which
were supported by an increasingly pseudo-scientific body of racist literature
and were directed at intermarriage with Asians, appeared in Western
jurisdictions and proliferated. By 1916, twenty-eight states and territories
prohibited interracial marriage.
Marriage law also contributed to the debasement of African Americans
through its systematic adherence to gender hierarchy. Although construing
the family as a male-headed community of interests offered some protection
to its members, female dependency provided a handy reference point for the
disfranchisement of black men. Using the words “wives” and “women”
interchangeably, senators reluctant to enfranchise African American men
in the early days of Reconstruction invoked the constitutional status of white
women as the perfect example for distinguishing the rights of citizenship
from the political privilege of voting. Southern Redeemers, working state
by state, did the work of disfranchising African American men and restoring
7 Frasher v. State, 3 Tex. App. 263, 276–77 (1877).
white supremacy, but the move had been prefigured by senators underscoring
the circumscribed political status of women as wives.
Despite the triumph of states’ rights in the regulation of domestic relations,
one lasting effect of Reconstruction was the federal government’s
intervention in marriage. There were already precedents. In 1855 Congress
declared that a free, white woman of any nationality became a citizen automatically
on marrying a male American citizen, and the child of any male
American citizen was a citizen regardless of its birthplace. The Morrill Act
of 1862, aimed at Utah Mormons, established the power of the federal government
to regulate marriage in the territories. Reconstruction significantly
amplified federal intervention in marriage. It was the federal government
that took the lead in both offering marriage to freedpersons and distinguishing
legal marriage from extra-legal unions, redefined as adultery and
fornication. It was the federal government that reinforced the paradigm
of wives as dependents in its pensions for Civil War widows and instituted
governmental surveillance of the pensioners’ marital qualifications.
And it was the federal government’s aggressive promotion of a narrowly
traditional ideal of monogamy that set the stage for a full-scale assault on
Mormon polygamy.
VI. POLICING MONOGAMY AND REPRODUCTION
IN THE GILDED AGE
In the aftermath of the Civil War, a renewed commitment to the irrevocability
of the federal union was bound up in public discourse with a renewed
commitment to lifelong monogamy. As Abraham Lincoln had warned in a
much-quoted domestic trope, “a house divided against itself cannot stand.”
Divorce, then, with its distinctly contractual foundations, its broadly divergent
grounds, and its implicit acceptance of serial monogamy, came under
serious attack. Addressing a national divorce rate that was very low by
current standards but clearly on the rise after the war, and decrying the
seductions of secularism and modernity, conservative Protestants appended
entire worldviews to “the divorce question.”
The comments of Henry Loomis, a Connecticut clergyman, exemplify the
way moral critics deployed lifelong monogamy as the critical marker for a
Christian nation while equating divorce with national decay. Whereas true
Christians viewed marriage as a divine institution and the foundation of
civil society, Loomis observed, the “infidel or socialist” view of marriage was
based on the idea that marriage should continue only at the pleasure of the
partners. Given the historic ties between marriage and government, it was
understandable, he conceded, that the nation’s separation from England had
nourished the acceptance of divorce. But now responsible Christians of the
nineteenth century were reversing dangerous Enlightenment experiments,
and the “infidel theory of the state” so popular at the time of revolution
was giving way to a respect for divine authority. The infidels, the freethinkers,
and the free lovers, whom Loomis placed in direct opposition to
anti-divorce Christians, belonged to a meandering stream of American radicalism
that ran all the way from the Enlightenment anti-clericalism of a
Tom Paine through the utopian socialism of a Robert Owen to the homegrown
anarchism of a Stephen Pearl Andrews. Yet the demarcation he created
was too tidy by far since the infidel theory he condemned received its most
ardent expression in the voices and practices of unorthodox Christians.
Spiritualism’s rejection of marital tyranny, the Church of the Latter Day
Saints’ devotion to plural marriage, and the Oneida Perfectionists commitment
to “complex marriage” all challenged Loomis’s definition of Christian
marriage.
Loomis was joined in his anti-divorce sentiments by a host of local allies.
Critiques by New England clergymen, including Theodore Woolsey, the
president of Yale, became part of a larger campaign that evolved from an
effort to eradicate Connecticut’s omnibus clause into an organized legal
crusade to make divorce less available. The New England Divorce Reform
League, with Woolsey serving as president, became the leading edge of
a movement for a uniform national divorce code. Its executive secretary,
the Congregationalist minister Samuel Dike, took the League to national
prominence by mixing clergymen, lawyers, and social scientists on the
executive board. Dike then convinced Congress to fund a national survey
on marriage and divorce, which was compiled by Commissioner of Labor Carroll
D. Wright and remains a remarkable statistical guide for the years between
1867 and 1902.
Dike’s refusal to remarry a congregant whose divorce failed to meet his
own religious scruples led to the loss of his church and became a catalyst
for his reform activities. Denominational conventions often addressed the
vexing theological dilemma of remarriage and the apparent gulf between
secular law and the New Testament. Yet Christian precepts were central
to Anglo-American marital regimes as illustrated by the casual verbal substitution
of the biblical one-flesh doctrine for the legal fiction of marital
unity. References to Scripture dotted discussions of divorce in state legislatures,
and jurists and legislators alike assumed that the common law and
Christianity (in its Protestant incarnation) were properly united in domestic
relations and the union was in no way a violation of the disestablishment
clause of the First Amendment.
Those two assumptions, resting as they did on an exclusively monogamous
view of marriage, with some denominational variations regarding
its dissolution, help account for the government’s success in pursuing
polygamy in comparison to the relative failure of the drive to roll back
divorce. Moral critics persistently linked both divorce and polygamy to the
degradation of women and highlighted the ease and prominence of divorce
in Utah. But while some states removed omnibus clauses and tightened
residence requirements, most legislators were disinclined to retreat to an
adultery-only standard for divorce; the divorce rate continued to rise, and
a uniform, national divorce code never came to pass. Polygamy, by contrast,
loomed as a much greater deviation from traditional Christianity, and
Mormons soon discovered the extent to which conventional Protestantism
trumped both their own reading of Christian marriage and their reliance
on the protection of the First Amendment.
In the Gilded Age, eradicating polygamy became a defining feature of the
Republican Party and a political substitute for the party’s vaunted role in
saving the Union. Republicans who had labeled polygamy and slavery “the
twin relics of barbarism” before the war continued to compare plural wives
to bond slaves after the war. Relying on patterns developed in Reconstruction,
anti-polygamists demanded federal intervention in Utah. Enforcing
the Morrill Act, however, which made bigamy in federal territories a crime
punishable by imprisonment, had been foiled by Utah’s failure to register
marriages and by the recalcitrance of Mormon juries. After 1870, moreover,
when Utah enfranchised the women of the territory, comparing Mormon
women with bond slaves required a new kind of logic. When newly enfranchised
plural wives endorsed polygamy at a series of mass meetings, critics
suggested their complicity in their own enslavement.
The defeat of polygamy took place in a series of contests that placed the
federal government in an unprecedented position of authority over marriage
law. In an effort to enforce the Morrill Act, the Poland Act of 1874 empowered
federal courts in the Utah territory to try federal crimes and empanel
federal juries. As a result, a test case, Reynolds v. United States, emerged and
reached the Supreme Court in 1879. The Mormons, who avowed plural
marriage was a religious tenet that ordered their moral and social universe,
also based their defense against the Morrill Act on the firmly established
legal principle of local sovereignty over domestic relations. These arguments
were no match for the anti-polygamy fervor of the era. Chief Justice
Morrison Waite, writing for the court in Reynolds, designated polygamy too
abhorrent to be a religious tenet, declared it “subversive of good order,”
and denounced it in a racial slur as the preserve of Asiatic and African
people.
When polygamy persisted in the wake of the Reynolds decision, further
federal action followed. Congress passed the Edmunds Act in 1882 disenfranchising
polygamists, bigamists, and cohabitants and making it criminal
to cohabit with more than one woman. In 1887 the Edmunds-Tucker Act
disincorporated the Mormon Church, and in a strikingly indiscriminate
provision, it also disenfranchised the women of Utah regardless of their
religious affiliation or marital status. When the Mormons finally capitulated
by officially abandoning polygamy, they set the stage for Utah’s
admission to the Union. And on the long path to capitulation, the government’s
aggressive campaign to eradicate this offensive local difference
stood as a warning to other groups. Shortly after the decision in Reynolds,
a group from Hamilton College gathered in upstate New York to oppose
“complex marriage,” an experimental regime that controlled reproduction,
raised children communally, and prohibited exclusive pairings between men
and women. When the Oneida Perfectionists gave up complex marriage in
August of 1879, they noted in their newspaper that they could not remain
blind to the lesson in the Mormon conflict.
Given the torrent of words and actions directed at deviations from
monogamy, it is worth considering the common threat embodied in alternative
marital regimes as well as in the serial monogamy unleashed by
divorce. Part of the threat consisted in overturning the rules whereby men
ordered their sexual access to women. The campaigns against polygamy and
divorce typically extolled the Christian/common law model of monogamy
for protecting the chastity of women while they obscured how the chastity
of women represented male control of female sexuality. Campaigns against
birth control revolved around a similar concern with regulating female sexuality
and resulted in a growing governmental presence in reproduction.
The government’s intervention in reproduction took place in the context
of a dramatic demographic shift. Over the course of the nineteenth century
white female fertility declined from a high of 7.04 to 3.56 children per
family. In the absence of famine or catastrophic disease, we can only conclude
that couples were voluntarily limiting the size of their families. One method
when others had failed was abortion, which came under attack at mid-century
from the newly founded American Medical Association. A decade
later many states still had no provision making abortion a crime, and those
that did relied on the old “quickening” rule of the common law, which
permitted abortion so long as there was no discernible movement of the
fetus. By the 1880s and 1890s, in the midst of a crusade for “moral purity,”
abortion, including procedures performed before quickening, became a crime in
most states. Women who sought abortions were subject to criminal penalties
along with the persons who provided them. Other forms of birth control
generated federal intervention. Although separating sexual relations from
reproduction was undoubtedly the goal of many men as well as women, it
constituted a serious threat to the gender system by affording opportunities
to women for risk-free sexual relations outside of marriage. Indeed, few
advances held a greater potential for liberating women than reproductive
freedom, which may account for why the resources for achieving it were
defined as “obscene.”
Anthony Comstock, a Christian fundamentalist, led the crusade against
birth control. The 1873 federal bill bearing his name criminalized the use
of the mails to disseminate “obscene, lewd, or lascivious” materials, including
items for preventing conception and inducing abortion. Many states
followed the federal lead with their own detailed statutes. Remarkably,
Congress evinced no reservations about the scope or constitutionality of
its assault on obscenity, and federal courts generally followed suit. Seven
months after the bill’s enactment, a federal district court in New York City
upheld both the authority of Congress and the conviction of a physician
for mailing powders with abortifacient and contraceptive properties. Three
years later a federal court in Nevada used the law to penalize ambiguous
advertising for contraceptive remedies.
The Comstock laws, which constrained the free flow of birth control
information, were not strictly enforced, and they could not eradicate the
impulse toward reproductive freedom that gathered force toward the end
of the century. Yet although the quest for effective birth control acquired
some respectability by the third decade of the twentieth century, abortion
grew increasingly problematic. Abortions were still performed in significant
numbers and were never equated with the crime of infanticide, but
the women who sought them were no longer the middle-class wives of
the mid-nineteenth century but instead single and working-class women.
The criminalization of abortion not only deprived women of reproductive
privacy and narrowed their options; it represented a significant departure
from years of common law jurisprudence. The role of the federal government
in enforcing uniform marriage standards was also a departure from
the principle of local sovereignty that had reigned in the first half of the
century. The most revealing feature in the crusade for moral purity and
marital uniformity, however, was its devotion to pronatalism, which was
directed at a society bent on limiting family size. Women – white, nonimmigrant,
properly married women of Northern European origin – were
to serve American society by having more children. Here was maternalism
with a vengeance, and with a distinctly nativist cast.
In the end, the devotion to maternalism played an equivocal role in
reshaping American legal regimes. While Comstockery was placing new
curbs on women’s autonomy in the name of motherhood, motherhood was
eclipsing fatherhood in custody awards in the courts, and the courts were
exercising their authority at the expense of the male head of the household.
The legal system now regarded wives primarily as mothers whose well-being
was dependent on their husbands, and it regarded husbands primarily
as wage earners for whom the state might act as substitute in limited
and closely scrutinized circumstances. These updated legal constructions,
although a far cry from Blackstone’s extended patriarchal household, still
retained some of its elements. They would make their way into the twentieth
century to influence the welfare bureaucracies of the Progressive era and the
provisions for Social Security.
CONCLUSION: THE LONG VIEW
One framework for viewing the long arc of nineteenth-century family law
is to chart departures from the law of persons as it was outlined in Blackstone’s
Commentaries. Even though both the legal details and underlying
rationale in Blackstone’s blueprint for marriage and parenting were dated,
it continued to serve as an outline for the rudiments of domestic relations
law well into the nineteenth century. Blackstone, then, with his emphasis
on the legal fiction of marital unity and its consequence of male headship,
provides us with a convenient baseline from which to map out nineteenth-century
innovations. Viewed from this perspective, the most striking trend
in domestic relations law over the course of the nineteenth century was
its shift toward the individuation of family members at the expense of the
principle of male headship. A series of specific legal innovations chipped
away at the old common law rules until the unified, hierarchical household
of the Blackstonian model was a shadow of what it had been.
Nineteenth-century innovations took a variety of forms. One was clearly
the legitimization of divorce, which unfolded amidst a growing commitment
to romantic love and marital happiness. The recognition of divorce as
a legal remedy, albeit on limited terms, not only compromised the legal fiction
of marital unity that lay at the heart of the Christian/common law ideal
of marriage but it also paved the way for the acceptance of serial monogamy.
Another innovation took the form of giving wives the right to own property
independently. Despite narrow legislative goals, the married women’s
property acts endowed wives as owners and earners with a legal identity
apart from that of their spouses. In a culture that recast all family relations
in affective terms, invested parenting with great emotional weight, and celebrated
the innocence of childhood, legal innovations extended to parent-child
relations as well. By endorsing a private construction of marriage, the
juridical recognition of common law marriage resulted in insulating the
children born in irregular unions from the disabilities of bastardy. In stark
contrast to the English law of bastardy, nineteenth-century legal regimes
also created a new, female-headed family in which unwed mothers enjoyed
the same custodial rights as married fathers. As the notion of the child serving
as a conduit of the father’s will gave way to a concern for the child’s best
interests, mothers increasingly defeated fathers in custody contests, thereby
eroding the father’s presumptive right of custody. Similarly, by recognizing
non-biological families, the formalization of adoption put another dent in
the patriarchal foundations of Anglo-American family law.
It would be a mistake, however, to dismiss the limits of these reforms
or ignore concerted efforts to undermine them. Although they point collectively
to the democratization of the family, viewing them in an institutional
framework provides a very different picture. Judicial hegemony over
domestic relations reforms was as significant as the reforms themselves, and
legislators undoubtedly came to expect judges to clarify, temper, and even
reverse their more innovative forays. In discrete legal contests as opposed to
generalized statutes, it was judges who tended to define the wife’s separate
estate narrowly, who hewed more often than not to the wife’s traditional
obligation of service, and who replaced the father’s absolute right of custody
with their own discretionary evaluation. As a result, many elements
of coverture survived, and the judicial embrace of maternalism was always
qualified. When judges awarded custody to mothers, they were standing
in the traditional place of fathers and transforming themselves – not the
mothers – into the ultimate authority over the family. Indeed, in custody,
adoption, and the law of bastardy, the judiciary turned parenthood into
a trusteeship that could be abrogated by the state. The role of the federal
government in policing Civil War pensions, enforcing monogamy, and limiting
reproductive freedom was another telling institutional development.
It not only clouds the picture we have of women’s increasing autonomy in
the family but it also anticipates the ambit of the large welfare bureaucracies
of the twentieth century.
How, then, are we to integrate these conflicting pictures of American
family law? How are we to understand the tensions between the egalitarian
and humanitarian impulses behind the legal reordering of the family on the
one hand and the constraints, obfuscations, and reversals that accompanied it
on the other? One way is to move beyond legal and institutional particulars
to broader frameworks. If we place these tensions in a cultural framework,
for example, we can read them as the agonizing contradictions between
the destabilizing potential of romantic love and the regime of lifelong
monogamy in which it was embedded and which the law modified. If we
place them in a political framework, we can read them as the troublesome
strains between liberalism and its patriarchal components. Admittedly the
latter framework tells us more about what the law reveals than what it
achieved, but what it reveals is a powerful and shifting dynamic between
the legal construction of the family and the evolving gender system.
It is instructive to consider this dynamic in a specifically nineteenth-century
American context. Although liberalism had the potential to disrupt
all kinds of hierarchies, classical liberal theorists had assumed the wife’s
subordination and counted it among the rights of free men. Especially in
the heyday of abolitionism, however, it was increasingly difficult to limit the
rights of free men to men. To be sure, liberalism with its market yardstick
of value and its failure to attribute value to the wife’s household services
may have proffered little to wives in the way of concrete remedies, but it
always carried within its tenets a compelling challenge to their subordinate
status. The credo of self-ownership and its corollary of bodily integrity so
central to the crusade against slavery were threats to the gender order as
well as the racial order and were understood as such by judges, legislators,
and moralists. The anxieties unleashed by bringing the self-contracting,
rights-bearing individual of liberalism to bear on the gender system by way
of family law only intensified over the course of the century, culminating
in novel restrictions on abortion and birth control that would make their
way into the twentieth century.
Yet these were surely not the only legacy of legal change. The torrent
of Gilded Age programs to police monogamy and sexuality was as much
a manifestation of how the family had been transformed as an effort to
restore it to traditional guidelines. And because the legal reordering of the
family provided nineteenth-century women’s rights advocates with a perfect
field on which to deploy liberal political theory to subvert its own gender
rules, it served as a catalyst for rethinking assumptions about marriage and
parenting and for exploring and exposing their connections to the gender
system. This too was a legacy that would make its way into the twentieth
century and beyond.
9
slavery, anti-slavery, and the coming
of the civil war
ariela gross
Enslaved African Americans who escaped to freedom wrote bitterly of the
role of law in maintaining the institution of slavery. Harriet Jacobs emphasized
the law’s refusal to act on behalf of slaves. The enslaved woman or girl
had “no shadow of law to protect her from insult, from violence, or even from
death.” Frederick Douglass focused on the way law did act, turning human
beings into property: “By the laws of the country from whence I came, I
was deprived of myself – of my own body, soul, and spirit . . . ” Whether
through its action or inaction, slaves recognized the immense power of law
in their lives.1
Law undergirded an economic system in which human beings were
bought, sold, and mortgaged and a political system in which two sections
of the United States coexisted profitably, one a slave society and one not.
As we know, this coexistence did not last, and it is tempting to read back
into the antebellum period an instability in the legal edifice supporting
slavery that made its collapse inevitable. Yet, as both Douglass and Jacobs
realized, the law worked remarkably well for a long period to subordinate
human beings one to another, though not without considerable effort in
the face of contradiction, internal conflict, and external challenge. Southern
slaves and Northern abolitionists, in very different ways, posed a threat to
the law of slavery, and it took work to overcome those threats. Ultimately,
however, it was a bloody civil war, and not a legal process, that resolved the
contradictions of human property.
Students of Southern history once supposed that law was largely irrelevant
to African American culture, and to Southern culture in general. Most cultural
historians of the nineteenth-century South have assumed that rituals
1 Harriet Jacobs, Incidents in the Life of a Slave Girl (Cambridge, MA, 1987), 27; Frederick
Douglass, "I Am Here to Spread Light on American Slavery: An Address Delivered in
Cork, Ireland, on 14 October 1845,” The Frederick Douglass Speeches, 1841–1846 (New
Haven, CT, 1999).
of honor for whites and plantation discipline for blacks replaced law as the
mechanisms to resolve conflict and punish wrongdoers. Thus, histories of
white Southern culture emphasized duels, lynching, and master-slave relations.
Literary sources, letters, and personal papers all painted a picture of a
society governed primarily by what contemporary legal scholars would call
“extra-legal norms.” Studies of slave culture suggested that law had little
influence on slaves’ lives, because for most slaves, the master was the law.
And so the legal history of slavery focused on the extraordinary situation –
the fugitive to the North, the slave who killed her master – not slavery’s
everyday life.
But no longer. First, law was in reality pervasive in slavery – in the social
construction of race, in the regulation of daily life, in the workings of the
slave market, and in the culture of slaves, slaveholders, and non-slaveholding
whites. Second, the great paradoxes of slavery and freedom in the antebellum
republic were all framed precisely in terms of claims to legal rights: the
right to property and the right to liberty. Slaves occupied a unique position
in American society – as both human and property. In constitutional terms,
slavery could be viewed simultaneously in terms of both liberty and property
rights. Abolitionists emphasized the liberty of all Americans; slaveholders
emphasized the property rights of all white Americans, including the right
to own slaves. It is a distinctive feature of slavery in the American South –
slavery embedded in a system of political liberalism – that its defense was
full of the language of property rights. It was the legal-political language of
property, indeed, that rendered slavery and liberalism compatible. Nor were
the property rights arguments of slaveholders simply defensive; they were
also used aggressively and expansively. Not only did they justify holding
slaves in the South, they justified carrying them into the new territories to
the West and North.
The language of rights was the only language most Southerners had
available to define slavery. Thomas Reade Cobb’s Treatise on the Law of Negro
Slavery defined slavery in pure Lockean terms, as rights denied: “Of the three
great absolute rights guaranteed to every citizen by the common law – the
right of personal security, the right of personal liberty, and the right of
private property, the slave, in a state of pure or absolute slavery, is totally
deprived.”2 Through the denial of legal rights, the slave was put outside
society.
Thus, we can see that law worked on two levels during the antebellum
era: below the radar, law facilitated the routine functioning of the
slave system and mediated the tensions among slaves, slaveholders, and
2 Thomas Reade Cobb, An Inquiry into The Law of Negro Slavery in the United States of America
(1858), §86, 83.
non-slaveholders. Above the surface, law was the object of contest between
Southern pro-slavery and Northern anti-slavery forces over the future of
slavery in the Union. Through a succession of constitutional “crises” involving
slaves who fled to free states and migrants who brought slaves to new
territories, competing views of the legality and constitutionality of slavery
increasingly came into direct conflict in legal as well as political arenas.
As slaves who resisted their masters or ran away pushed difficult issues of
human agency into the courtroom, they also pushed the anomalous constitutional
status of slavery into the forefront of political debate, adding to
growing Northern fears of an ascendant “Slave Power” conquering not only
political institutions but also the Constitution itself.
Increasingly central on both of these levels of legal activity was the
ideology of race. The power of race in the law was highlighted in the
Supreme Court’s affirmation, in the Dred Scott decision, that even free blacks
had no claim to rights or citizenship, but it had been building for years.
By the 1820s, slavery had become the South’s “peculiar institution.” It
had been successfully regionalized by Northern abolition despite pockets
of continuing enslavement that contravened official law, like the slavery
Dred Scott experienced on Army bases in the Northwest Territories. The
regionalization of slavery brought the issue of “comity” between free and
slave states to the fore, highlighting the political issues involved in every
legal determination about the status of slaves brought to free jurisdictions.
Race held the potential to explain and justify the line between free and
unfree; in the slave states it mobilized non-slaveholding whites behind the
institution of slavery, and in the free states it created a counterweight to
abolitionist compassion for the enslaved. On the local level, Southern jurists’
increasing preoccupation with justifying slavery in their jurisprudence led
not only to legislative crackdowns on free blacks and on
many of slaves’ “customary” rights but also to a more self-conscious effort
to make law “paternalist” and thereby to prove that slavery was the best
possible condition for poor, childlike “negroes.” Race was central to this new
justificatory legal enterprise. Law became ever more the forum for telling
stories about black character and, through it, white character.
The essential character of Southern antebellum society and its laws has
been debated endlessly. Was it a pre-capitalist paternalist socioeconomic
system inserted into a bourgeois capitalist world or a market society of
profit-minded individuals pursuing individual gain? Was law an instrument
of slaveholder hegemony, a facilitator of capitalist markets, an object
of contest among many makers, an arena for battles over honor? Ultimately,
these attempts at global characterization of either “the South” or “Southern
law” are less useful to an understanding of the way legal institutions operated
both as cultural forms and as technologies of power than close attention
to the more mundane, daily ways that slaves and masters, slaveholders
and non-slaveholding whites, buyers and sellers of slaves framed and waged
their encounters with law. We can agree with Walter Johnson: “Neither
structural contradiction nor hypocritical capitalism fully describes the
obscene synthesis of humanity and interest, of person and thing, that underlay
so much of Southern jurisprudence, the market in slaves, the daily
discipline of slavery, and the proslavery argument.”
I. THE EVERYDAY LAW OF SLAVERY
At the level of the day to day, in local trials, whites worked out their
relationships with slaves and with one another through slaves. White men
rarely faced criminal prosecution for striking slaves, but they quite often
found themselves in court for civil suits regarding property damage to the
slave of another. At trials, non-slaveholding whites had the chance to exercise
power as jurors and as witnesses, telling stories about the character and
mastery of defendants who were far more likely to be wealthy planters. Slaves
had no officially sanctioned opportunity to exercise agency, but they too
both consciously and unconsciously influenced outcomes in court, despite
the dangers inherent in outright efforts at manipulation. Lawyers, finally,
played the role of transmitters of culture as they traveled from town to
town. They made their careers in the legal practice of the slave market and
invested the fruits of their careers in the slave market. In all these ways, the
institutions of slavery, law, and the market grew intertwined.
The growing power of race in Southern society shaped all legal confrontations;
courts had the power to make racial determinations, and the
stories told about racial character in the courtroom helped “make race.”
Despite the overdetermined quality of white Southerners’ efforts to make
the boundaries of race and slavery congruent, the indeterminacy of legal
standards made some legal outcomes contestable. Courts, as arenas for shaping
identities, lent some power to slaves.
Who Can Be a Slave? The Law of Race
By the early nineteenth century, it was well-settled law in every state that
only a person of some African descent could be enslaved. One’s appearance as
a “negro” raised a legal presumption of one’s enslavement, but this presumption
could be rebutted by evidence of manumission, whiteness, or another
claim to freedom. Most states passed statutes setting rules for the determination
of “negro” or, more often, “mulatto” status, usually in terms of
fractions of African "blood." Before the Civil War, most states also stipulated
either one-fourth or one-eighth African “blood” as the definition of “negro.”
Yet even statutory definitions such as these could not resolve disputes
about the racial identity (and hence, vulnerability to enslavement) of many
individuals. Often, they just pushed the dispute back a generation or two as
courtroom inquiry turned from the racial identity of the individual at issue
to her grandmother. Still, the question remained: how could one know race?
In practice, two ways of “knowing” race became increasingly important
in courtroom battles over racial identity in the first half of the nineteenth
century, one a discourse of race as “science” and the other of race as “performance.”
During the 1850s, as the question of race became more and more
hotly contested, courts began to consider “scientific” knowledge of a person’s
“blood” as well as the ways she revealed her blood through her acts. The
mid-nineteenth century thus saw the development of a scientific discourse of
race that located the essence of racial difference in physiological characteristics,
such as the size of the cranium and the shape of the foot, and attempted
to link physiological with moral and intellectual difference. Yet the most
striking aspect of “race” in trials of racial identity was not so much its biologization
but its performative and legal aspects. Proving one’s whiteness
meant performing white womanhood or manhood, whether doing so before
the court or through courtroom narratives about past conduct and behavior.
While the essence of white identity might have been white “blood,” because
blood could not be transparently known, the evidence that mattered most
was evidence about the way people acted out their true nature.
Enslaved women suing for their freedom performed white womanhood by
showing their beauty and whiteness in court and by demonstrating purity
and moral goodness to their neighbors. White womanhood was ideally
characterized by a state of legal disability, requiring protection by honorable
gentlemen. In nineteenth-century legal settings, women of ambiguous
racial identity were able to call on the protection of the state if they could
convince a court that they fit this ideal of white womanhood. For example,
in the “celebrated” freedom suit of Sally Miller, her lawyer sought to link
white Southerners’ confidence in the intangible but unmistakable qualities
of white womanhood to identifiable acts of self-presentation and behavior
his client performed:
“[T]he moral traits of the Quartronne, the moral features of the African are far
more difficult to be erased, and are far more easily traced, than are the distinctions
and differences of physical conformation,” he informed the jury. “The Quartronne
is idle, reckless and extravagant, this woman is industrious, careful and prudent –
the Quartronne is fond of dress, of finery and display – this woman is neat in her
person, simple in her array, and with no ornament upon her, not even a ring on her
fingers.”3
3 Transcript of Trial, Miller v. Belmonti, No. 5623 (1845), Supreme Court Records, Earl K.
Long Library, Special Collections & Archives, Univ. of New Orleans, La. “Quatronne”
means person of one-fourth African ancestry, as in “quadroon.”
The jury accepted the argument, and the Louisiana Supreme Court
affirmed Sally Miller’s freedom. Her case was covered heavily in local newspapers,
and her trial narrative was repeated in novels and autobiographies
by abolitionist ex-slaves, William Wells Brown and William Craft, as a
dramatic representation of the power relations inherent in slavery, so little
caring of the “sacred rights of the weak” that it could question even a fair,
white maiden.
Men, on the other hand, performed white manhood by acting like gentlemen
and by exercising legal and political rights: sitting on a jury, mustering
into the militia, voting, and testifying in court. At trial, witnesses translated
legal rules based on ancestry and “blood” into wide-ranging descriptions of
individuals’ appearances, reputation, and in particular a variety of explicit
forms of racial performance: dancing, attending parties, associating with
white people or black people, and performing civic acts. There was a certain
circularity to these legal determinations of racial identity. As South
Carolina’s Judge William Harper explained, “A slave cannot be a white
man.” But this was not all that it seemed, for he also stated that a “man
of worth, honesty, industry and respectability, should have the rank of a
white man,” even though a “vagabond of the same degree of blood” would
not. In other words, “A slave cannot be a white man” suggested not only
that status depended on racial identity but also that status was part of the
essence of racial identity. Degraded status signified “negro blood.” Conversely,
behaving honestly, industriously, and respectably and exercising
political privileges signified whiteness.4
Manumission and Free Blacks
As more and more people lived on the “middle ground” between slavery
and freedom, black and white, they made it at once more difficult and more
urgent for courts to attempt to draw those boundaries sharply and to equate
race with free or unfree status completely.
By the 1830s, nothing had come to seem more anomalous to many
white Southerners than a free person of African descent. Yet there was a
substantial population of “free people of color” in the South, partly as a result
of relatively lax manumission policies in the eighteenth and early nineteenth
century. Legislatures hurried to remedy the problem, as free blacks were
increasingly seen to be, with a plethora of laws governing manumission.
Before Southerners felt themselves under siege by abolitionists, they had
allowed manumission quite freely, usually combined with some plans for
colonization. But by the 1820s serious colonization plans had died out in
4 State v. Cantey, 20 S.C.L. 614, 616 (1835).
the South. In a typical Southern slave code of the latter decades of slavery,
slaves could only be freed if they left the state within ninety days and if the
manumitter followed other complicated rules. The rights of creditors were
protected, and a substantial bond had to be posted for the care of the old or
infirm freed slave.
Southern states also tightened restrictions on free blacks beginning in the
1830s and accelerating in the 1840s and 1850s. In part this was a reaction to
the Denmark Vesey (1822) and Nat Turner (1831) insurrections, for Vesey
was free, and Turner was a foreman, a near-free slave. But it was also part
of the reaction, beginning in the 1830s, to anti-slavery sentiment in the
North. In the late eighteenth century, most slaveholders spoke of slavery as
a necessary evil – the Thomas Jefferson position. They were racists, but they
did not pretend that blacks loved slavery; rather, they took the position that
given current circumstances, slavery was the best that could be done. Blacks
could not survive as free people in the United States – perhaps colonization
would be a very long-range solution. By the 1830s, however, Southerners
had developed a defense of slavery that pronounced it a positive good. For
the most part, it was a racially based defense. According to Cobb and other
pro-slavery apologists, blacks were inferior mentally and morally so that
“a state of bondage, so far from doing violence to the law of his nature,
develops and perfects it; and that, in that state, he enjoys the greatest
amount of happiness, and arrives at the greatest degree of perfection of
which his nature is capable.”5
As Southerners articulated the positive-good defense of slavery more
often in terms of race, they increasingly emphasized a dual image of the
black person: under the “domesticating” influence of a white master, the
slave was a child, a happy Sambo, as described by Cobb, but outside of this
influence, he was a savage beast. As they strove to convince themselves and
Northerners that slaves were happy Sambos, they more frequently portrayed
free blacks as savages. With this emphasis on race, Southerners felt the need
to draw the color line more clearly than ever. This placed the South’s free
people of color in an increasingly precarious position.
It is worth remembering that there were two quite distinct groups of
free people of color. In the Upper South, where slavery was essentially
dying out by the Civil War, and also in Maryland and Delaware, free black
populations were largely the descendants of slaves manumitted during the
Revolutionary era. As a group they were mainly rural, more numerous,
and closer to slaves in color and economic condition than free blacks in
the Lower South, who were light-skinned refugees from the San Domingo
revolution, creole residents of Louisiana, and women and children freed as
5 Cobb, Inquiry into the Law of Negro Slavery, 51.
a result of sexual relationships. Free blacks in the Lower South tended to be
mixed racially; concentrated in New Orleans, Charleston, and a few other
cities; and better off economically; some of them were large slaveholders
themselves. The Upper South was more hostile to free blacks because they
were more of an economic threat; in the Lower South, the cities recognized
gradations of color and caste more readily.
Along with increased restrictions on manumission, the most important
new limitations on the rights of free people of color were constraints on
their freedom of movement. Free blacks were required to register with the
state and to carry their freedom papers with them wherever they went. They
were frequently stopped by slave patrols who mistook them for slaves and
asked for their passes. If their papers were not in order they could be taken
to jail or even cast into slavery. Mississippi required that, to remain in the
state, free people of color be adopted by a white guardian who could vouch
for their character. Increasingly, criminal statutes were framed in terms of
race rather than status, so that differential penalties applied to free people of
color as well as slaves, including banishment and reenslavement. In most of
the new state constitutions adopted during the 1830s, free people of color
were barred from testifying in court against a white person, voting, serving
in one of the professions, or obtaining higher education. About the only
rights that remained to them were property rights. Some managed to hold
on to their property, including slaves. But by the eve of the Civil War, white
Southerners had made every effort to make the line between slave and free
congruent with the line between black and white. Free people of color and
people of mixed race, both slave and free, confounded those efforts. It is no
surprise that they were the target of so many legal regulations.
Slave Codes: “A Bill of Rights Turned Upside Down”
On paper, many aspects of slaves’ lives were governed by slave codes. In
practice, slaves were often able to carve out areas of customary rights contrary
to the laws on the books. How, then, can we interpret the significance of the
codes’ detailed restrictions on every aspect of slave life? One way to read the
statutes passed by Southern legislatures to regulate slavery, James Oakes
has suggested, is as “Bill[s] of Rights [turned] upside down . . . a litany of
rights denied.” Slaveholders defined slavery in the terms they used to define
freedom. Slaves had no right of movement, no right of contract, no right
to bear witness in court, no right to own property.
The codes can also be read as timelines of every moment slaves resisted
often enough to trigger a crackdown. The very specificity of the laws in
Southern slave codes hints at this reading. Slaves were hiring out their own
time and moving freely about towns frequently enough to merit a law; slaves
were selling spirituous liquors, holding dances, and gaming frequently
enough to merit a law. County court records in Natchez, Mississippi, reveal
that the most frequent criminal prosecutions of blacks or whites were for
selling spirituous liquors to a negro, selling spirituous liquors without
a license, and gaming. It is often possible to track insurrectionary scares
simply by reference to the legislative enactments of a particular region.
For example, after Nat Turner’s revolt, South Carolina passed laws against
burning stacks of rice, corn, or grain; setting fire to barrels of pitch, tar,
turpentine, or rosin; and other very specific prohibitions.
The slave codes reveal the hopes and fears of slaveholders. Particularly
after the Vesey and Turner revolts, whites feared the power of black preachers,
particularly free black preachers, to move slaves to rebellion. Many states
passed laws dealing overtly with slave conspiracies, punishable by death.
Other statutes prohibited slaves from gathering for religious meetings or
dances and prohibited slaves or free people of color from preaching.
State courts established enforcement mechanisms that made these legislative
prohibitions real. Slave patrols, appointed by county courts or militia
captains, were supposed to “visit the negro houses . . . and may inflict a
punishment . . . on all slaves they may find off their owner’s plantations,
without a proper permit or pass . . . ” Slave patrols were also supposed to
“suppress all unlawful collections of slaves,” catch runaways, and punish
slaves for other infractions. Eighteenth-century slave patrols had tended
to involve a wide cross-section of the white community, but by the 1820s
higher status whites in some areas appeared to think the work beneath
them and relied instead on their overseers. In general, however, local white
elites stayed active in patrolling. Control of the Southern labor force was
too important for them to delegate to others, and slave patrols were useful
adjuncts to slaveholders’ authority. Similarly, while many masters chose to
punish their slaves on their own farms or leave punishment to their overseers,
some local governments provided whipping houses where slaves could
be sent for the customary thirty-nine lashes. Runaway jails housed escaped
slaves who had been recaptured.6
Marriage and Family
The slave codes illuminate another important aspect of slavery: control over
the slave’s sexuality and family life. Slaves could not legally marry. Nor
could a black slave marry or have sexual relations with a white female.
The codes did not mention relations between white males and black slaves;
slave status followed the mother and not the father. Despite the laws, whites
6 Edward Cantwell, The Practice at Law in North Carolina (Raleigh, NC, 1860).
routinely recognized slave marriages – often even in courtroom testimony
or in judicial opinions. Yet when it came to testifying against one another
in court or charging manslaughter rather than murder in the case of a
man who had caught his wife in bed with another man, judges refused to
recognize slaves’ marriage. In his treatise on Slaves as Persons, Cobb justified
the non-recognition of slave marriage in racial terms, advancing the myth
that slaves were lascivious and their “passions and affections seldom very
strong,” so that their bonds of marriage and of parenthood were weak, and
they “suffer[ed] little by separation from” their children.7
In fact, family was a source of autonomy and retention of African culture
for enslaved people. Some of the best historical work on slavery has brought
to life the ways that slaves retained their own values despite slavery by
uncovering the survival of practices of exogamy – that is, not marrying first
cousins. White Southerners married their first cousins, but black slaves did
not and persisted in the practice. Efforts to maintain African culture are
also in evidence in naming patterns that sustained African names alongside
owners’ imposition of day-names and classical names, such as Pompey and
Caesar. Native-born populations of slaves appear to have had more success
in self-naming – keeping kin names, especially those of fathers, in a system
that legally denied fatherhood – than the first generation. This suggests
that family was a source of strength in slave communities. It was also a
source of resistance and a means of communication. Slaves ran away to get
back to families and conducted “abroad” marriages with spouses on other
farms, creating a larger community of African Americans.
The importance of family made it at the same time a source of vulnerability:
family breakup was a powerful threat that enhanced slaveholders’
control. It was a threat backed by experience – one-fourth to one-third of
slave families were separated by sale. Family was also a powerful incentive
not to run away, especially for slave women. Enslaved women who ran with
their children could not get far; far more common was truancy, staying out
for several days and then returning.
Unmarried or married, enslaved women lived with the fear of sexual
assault. Sexual assault on an enslaved woman was not a crime. While Cobb
suggested that “for the honor of the statute-book,” the rape of a female
slave should be criminalized, such a statute was passed in Georgia only in
1861 and was never enforced. Cobb reassured his readers that the crime was
“almost unknown,” because of the lasciviousness of black women.8 In one
Missouri case in the 1850s, the slave Celia murdered the master who had
been raping her since she was a young teenager. Her lawyer brought a claim
7 Cobb, Inquiry into the Law of Negro Slavery, 39.
8 Cobb, Inquiry into the Law of Negro Slavery, §107, 100.
of self-defense, using a Missouri statute that gave “a woman” the right to
use deadly force to defend her honor. But the court in that case found that
an enslaved woman was not a “woman” within the meaning of the statute;
the law did not recognize Celia as having any honor to defend.
Slave law and family law also intersected in the law of property and
inheritance. The most basic property question regarding slavery, of course,
was the status of the slaves themselves as human property – how would
that status be inherited? By the nineteenth century, it was well-settled law
that slave status passed on from mother to child, guaranteeing that the
offspring of masters’ sexual relationships with their slaves would become
the property of the masters. In transfers as well, the master owned the
“increase” of his human property: “When a female slave is given [by devise]
to one, and her future increase to another, such disposition is valid, because
it is permitted to a man to exercise control over the increase . . . of his
property. . . . ”9 Furthermore, as one Kentucky court put it, “the father of a
slave is unknown to our law. . . . ”10
By refusing to recognize slaves’ marriages or honor their family ties,
Southern courts and legislatures inscribed the dishonor of slaves into law.
It should be no surprise that, in the immediate aftermath of emancipation,
many freed African Americans saw marriage rights as central to their claims
of citizenship. A black corporal in the Union Army explained to a group of
ex-slaves, “The marriage covenant is at the foundation of all our rights. In
slavery we could not have legalised marriage: now we have it . . . and we shall
be established as a people.”11 By identifying marriage as the foundation of
citizenship, the speaker dramatized the way slavery’s denial of family ties
had served to put slaves outside society and the polity.
In the Criminal Courts
Slaves who fought back against the injustices of their lives – especially
against masters who raped them, beat their children, or separated them
from their families – ended up in the criminal courts of Southern counties.
In the famous case of State v. Mann, Lydia ran away from her hirer, John
Mann, who shot her in the back as she fled. The question in the case was the
right of the slave to life – to be safe from cruel treatment. This was the one
right Cobb had said the law allowed the slave. Yet, Judge Thomas Ruffin,
9 Fulton v. Shaw, 25 Va. 597, 599 (1827). 10 Frazier v. Spear, 5 Ky. 385, 386 (1811).
11 Letter from J. R. Johnson to Col. S. P. Lee, 1 June 1866, Unregistered Letters Received,
ser. 3853, Alexandra VA Supt., RG 105, National Archives, reprinted in Ira Berlin et
al., eds., Freedom: A Documentary History of Emancipation, 1861–1867, Series II; The Black
Military Experience (Cambridge, MA, 1982), 672.
in a stark statement of the nature of slavery, held that courts would not
interfere with the owner’s authority over the slave: “We cannot allow the
right of the master to be brought into discussion in the Courts of justice.”12
Discipline was to be left to owners – or, as Mann was, hirers – and trust
placed in their private interest and benevolence.
Four years later, in State v. Will, the same North Carolina court overturned
Ruffin’s decision. In this case, Will, like Lydia, resisted his master
and walked away from a whipping. Like Lydia, Will was shot in the back.
But Will fought back, stabbing his owner three times with a knife. Will was
put on trial for murder, but the presiding judge, William Gaston, decided
that he was guilty of the lesser crime of felonious homicide. In doing so, he
upheld the principle that there were limits to the master’s authority over a
slave and that a slave had the right to resist the master who overstepped the
limits. Gaston wrote that “the master has not the right to slay his slave, and
I hold it to be equally certain that the slave has a right to defend himself
against the unlawful attempt of his master to deprive him of his life.”13
Oakes comments, “It is pointless to ask whether Ruffin or Gaston correctly
captured the true essence of slavery.” The two cases “reveal the divergent
trajectories intrinsic to the law of slavery – the one flowing from the total
subordination of the slave to the master, the other from the master’s subordination
to the state.”
Ordinarily, when a white person was put on trial for abusing or killing
a slave, the grand jury would simply refuse to issue an indictment or the
jury would turn in a verdict of not guilty. Some doctors gave abhorrent
testimony offering alternative theories as to the cause of death when a slave
had been whipped to death – that she might have had a heart attack or
a sudden illness and that her vicious character and angry passion would
predispose her to such a seizure. But an owner could win damages from
a hirer, overseer, or other person who abused his slave in a civil case for
trespass. In these cases, juries were much more willing to find that cruelty
had taken place in order to compensate the slaveholder.
Civil cases could be a big deterrent, but not to a master for mistreatment
of his own slave. Neighbors of Augustus W. Walker testified that they had
seen him “whip in a cruel manner his slaves and particularly a young girl
11 years old, whom he whipped or caused to be whipped at three different
times the same day, eighty lashes each time and furthermore they said
Walker overworked his negroes.” Walker also locked his slaves in a dungeon
and frequently inflicted “as many as one hundred licks to one boy at a
time” with a “strap or palette.” He made his slaves work from three-thirty
12 State v. Mann, 13 N.C. (2 Dev.) 263, 267 (1829).
13 State v. Will, 18 N.C. 121, 165 (1835).
in the morning until nine or ten at night, without meal breaks or Sundays
off. In a criminal prosecution for “harsh, cruel & inhuman treatment
towards his slaves,”Walker was acquitted. The judge explained the flexible
standard for punishment of slaves: “the master can chastise; the slave is
entirely subject to his will; the punishment must necessarily depend on the
circumstances . . . if the case is a grave one, the chastisement will probably
be severe, if the slave is of a robust constitution, the chastisement may be
increased . . . ” In an accompanying civil case, in which Walker sued one
Joseph Cucullu for selling him ten slaves “afflicted with serious maladies,
diseases, and defects of the body.” Cucullu argued that any problems with
the slaves could be attributed to Walker’s harsh treatment. However, the
Louisiana court found forWalker in the civil case as well, above all because
he did not “strike . . . at random with passion or anger,” but had a system
for plantation management and discipline. The most important thing was
that a master should have a regular system of “rules” that he “imposes on
him[self].”14
Criminal prosecutions of slaves like Will exhibit a trend toward greater
procedural guarantees for slaves. The greatest sources of unfairness slaves faced were
white juries and the exclusion of slave testimony against a white person.
Unfortunately, slave testimony was allowed against a black person, and it
was not uncommon for slaves to be convicted on the basis of the testimony
of other slaves. Yet slaves received real defenses, often by prominent lawyers,
and their appeals and writs of habeas corpus were heard all the way up the
state court systems. Procedural guarantees were grudgingly conceded by
men who feared their consequences, but saw them as necessary to slavery in
a liberal system. The conflicts between Lydia and Mann, Will and Baxter,
Ruffin and Gaston, exemplified the problem of slave resistance in such a
society. When slaves resisted, they forced the law to deal with them as
people.
Slavery and Commerce
The courthouse was one of two institutions central to Southern culture.
The other was the slave market. Civil trials involving slaves were routine
events that brought townsfolk and planters together to fight over their
human property and, in the process, to hash out their understandings of
racial character. Through rituals invested with all the trappings of state
authority, both white and black Southerners again and again made the
journey from one institution to the other, slave market to courthouse.
14 Walker v. Cucullu, No. 326 (1866), Louisiana Supreme Court Records, Earl K. Long
Library, Special Collections & Archives, Univ. of New Orleans, La.
The slave markets that provided so many lawyers with their livelihoods –
both as litigators and as slaveholding planters – did a vigorous business in
the antebellum Deep South. Although importation of foreign slaves ended
in 1808 as a result of constitutional prohibition, throughout the antebellum
period the states of the Deep South continued to import slaves from the
Upper South in ever greater numbers. Slave traders brought slaves from
Virginia, Kentucky, and Tennessee to sell at the markets in Charleston,
Natchez, and New Orleans. Overall, more than a quarter of a million slaves
came into the Deep South from the Upper South each decade from the 1830s
on. Local sales also accounted for a substantial part of the trade, probably
more than half. Individual slaveholders sold slaves to one another directly
or used local traders as intermediaries. And slaves were sold by the sheriff at
public auction when a slaveholder or his estate became insolvent. In South
Carolina, one state for which solid numbers are available, insolvency sales
amounted to one-third of all slave sales.
Southern states periodically banned domestic importation, as Mississippi
did, for example, from 1837 to 1846. Bans appear to have been prompted by
both economic and security considerations: sectional tensions between older,
established areas that had no need of more slaves and newer areas; temporary
economic panics; and reactions to well-known slave insurrections. The bans,
however, were always overturned and in any case made little impression
on the trade. Mississippi was the first state to develop another form of
regulation in 1831, again in reaction to the Turner rebellion in Virginia;
it required imported slaves to register a “certificate of character” from the
exporting state, guaranteeing that the slave was not a runaway or thief. This
requirement was also quite simple to circumvent, as one trader explained:
all one had to do was “to get two freeholders to go along and look at your
negroes. You then tell them the name of each negro – the freeholders then
say that they know the negroes and give the certificates accordingly.”
Prices for slaves rose throughout the antebellum period, with the exception
of the panic years of the late 1830s and early 1840s. “Prime male
field hands” in the New Orleans market sold for about $700 in 1846; their
price had more than doubled by 1860 to upward of $1,700. To own slaves
was to own appreciating assets, important as capital as much as for the value of
their labor. Slaveholders were an economic class whose slave property was
their key asset; they moved around frequently, investing little in towns or
infrastructure. Even the high level of land speculation in Mississippi and
Alabama suggests that slaveholders were not particularly attached to their
land. Slaves were their most important form of capital.
Slaves were also the cornerstone of the Southern credit economy, for they
were highly desirable as collateral for loans. Credit sales of slaves ranged from
a high of 37 percent of all slave sales (1856) to a low of 14 percent (1859),
averaged 20 percent, and rarely had terms longer than twelve months; land
mortgages lasted two to five years. Thus, slaves were the ideal collateral for
debts. A complex web of notes traded on slaves existed, though it could, and
often did, fall through in years of financial panic and high land speculation.
Other segments of the Southern economy also depended on slaves.
Hiring, or leasing, provided an important way for both individuals and corporate
entities, especially towns and cities, to obtain labor without making
the major capital investment in slaves. Slave hiring may have involved as
much as 15 percent of the total slave population. Hiring relationships also
took place among private parties. Slaves, in fact, were fragmented property,
with so many interest-holders in any particular slave that there was no such
thing as a simple, unitary master-slave relationship for most slaves and most
masters.
Market transactions, credit relations, and hires all led to disputes that had
the potential to land the parties in court. In cases of hire, some owners sued
hirers for mistreating a slave. More often, these cases resembled warranty
suits in that hirers sued owners when the leased slave turned out to be
“unsound,” died, or ran away. In either situation, the trial revolved around
the question of who should assume responsibility for the condition and
character of the slave.
Most sales anticipated litigation at least indirectly by including an
express warranty by the seller that a slave was “sound in body and mind and
slave for life.” Form bills of sale used by slave traders generally included
spaces for the sex, name, and age of the slave and for the warranty, but left
the language blank to allow variation. Some bills of sale explicitly excluded
certain aspects of that particular slave’s condition or character from warranty.
When slave buyers were dissatisfied with their purchases, they tried
to recover for the problems directly. Usually this meant confronting the
seller with a demand that he take back the slave and return the purchaser’s
money. Slave traders were more likely to settle such cases out of court
than were private individuals. In their private writings, planters wrote of
their frustration with the legal system. Benjamin L. C. Wailes, a prominent
doctor and planter of Natchez, became embroiled in litigation when the life
estate-holder of his plantation Fonsylvania sold and mortgaged a number
of slaves without permission. After an unsuccessful suit for eight slaves sold
through Miles and Adams, New Orleans commission merchants, he wrote
in his diary: “Note. Never engage in a law suit if to be avoided or have
anything to do with lawyers without a written agreement as to terms and
compensation.”15
15 Benjamin L. C. Wailes, Diary, Sept. 2, 1859, available at Duke University Archives.
Buyers, sellers, owners, and hirers of slaves most often brought their disputes
to the circuit courts of their county. They went to court primarily
to win monetary damages. Their suits dominated the dockets of circuit
courts and other courts of first resort at the county level. In Adams County,
Mississippi, about half of the trials in circuit court involved slaves, mostly
civil disputes among white men regarding the disposition of their human
property. Of these civil disputes, a majority were suits for breach of warranty
– for example, 66 percent of the appealed cases in the Deep South
and 52 percent of the trials in Adams County. Suits based on express warranties
could be pled as “breach of covenant” or as “assumpsit,” both actions
based in contract. In Louisiana, suits of this type were especially common,
because the civil law codified consumer protections under the category of
“redhibitory actions.” One could obtain legal relief for the purchase of a
slave who was proven to have one of a series of enumerated “redhibitory”
vices or diseases, including addiction to running away and theft.16 Although
professional traders preferred cash sales or very “short” credit (notes payable
in six months or one year), a significant number of buyers in local sales paid
at least part of the slave’s price with notes, some of them with much longer
terms. In those cases, breach of warranty might be a defense to a creditor’s
lawsuit to collect the unpaid debt. Over the course of the antebellum period,
litigation increased in the circuit courts because of the growing population
and economy, but slave-related litigation grew even more quickly,
indicating the rising economic centrality of slaves.
Commercial law appeared to be the arena in which the law most expected
to treat slaves as property – in disputes over mundane sales transactions.
When slave buyers felt their newly acquired human property to be “defective”
physically or morally, they sued the seller for breach of warranty –
just as they would over a horse or a piece of machinery. In these and
other commercial disputes, the parties brought into question and gave legal
meaning to the “character” and resistant behavior of the enslaved, who persisted
in acting as people. Take as an example Johnson v. Wideman (1839), a
South Carolina case of breach of warranty, in which the buyer (Wideman)
defended his note against the seller by claiming that the slave Charles
had a bad character. According to Wideman, Charles was everything that
struck terror into a slaveholder’s heart: he owned a dog (against the law);
he was married (unrecognized by law); he tried to defend his wife’s honor
against white men; he not only acted as though he were equal to a white
man, he said he wished he was a white man; he threatened white men with
16 “Of the Vices of Things Sold,” La. Civ. Code, Bk. III, Tit. 7, Chap. 6, Sec. 3, arts.
2496–2505 (1824).
violence; he refused to work unless he wished to; and he did not respond to
whipping.17
The plaintiff-seller’s witnesses told a different story. According to them,
Charles was a drunkard and an insolent negro only when he lived with
Wiley Berry, a “drinking, horse-racing” man himself (from whom Johnson
bought Charles). As one witness explained, “He had heard of [Charles’s]
drinking. He had borne the character of an insolent negro: but not in
the time he belonged to the Johnsons.” Others testified that Charles was
humble and worked well, that when Johnson owned him, “he was not so
indolent as when he belonged to Berry.” Berry had exposed him to spirits
and had whipped him frequently. Johnson’s case rested on the contention
that Charles was a good slave when managed well, and the only evidence
of his insolence came from his behavior under Berry and under Wideman
himself.
Judge John Belton O’Neall, Chief Justice of the South Carolina Court of
Errors and Appeals, who presided over the trial on circuit, explained that he
had instructed the jury as follows: “Generally, I said, the policy of allowing
such a defence might be very well questioned. For, most commonly such
habits were easy of correction by prudent masters, and it was only with
the imprudent that they were allowed to injure the slave. Like master, like
man was, I told them, too often the case, in drunkenness, impudence, and
idleness.” O’Neall’s “like master, like man” theory of slaves’ character led
him to find for the seller in this case.
Thus, even a court that wanted to exclude moral qualities from implied
warranty, as did South Carolina’s High Court of Errors and Appeals, still
heard cases where the moral qualities of a slave were put on trial. In Johnson
v. Wideman we see the range of behaviors and qualities permissible in a
skilled slave. For example, when Charles confronted his first master, Wiley
Berry, about Berry’s behavior with his wife, he convinced Henry Johnson
that he was in the right in this dispute with Berry. This case also offers a
strong judicial exposition of a common theory of slave vice: “like master, like
man.” Johnson’s argument, largely accepted by the trial judge and Justice
O’Neall, was that Charles’s misbehavior could be attributed to the freedom
Berry gave him and the bad example Berry set. This theory removed agency
from the slave, portraying the slave as the extension of his master’s will.
By painting slaves as essentially malleable in character, courts could lay
the responsibility on masters to mold the slave’s behavior. Thus, sellers
emphasized malleability and exploited the fear of slaves’ deceitfulness to
do so. Slaveholders constantly feared that slaves were feigning illness or
otherwise trying to manipulate their masters; a good master was one who
could see through this deceit and make a slave work.
17 Johnson v. Wideman, 24 S.C. L. 325 (Rice 1839).
Southern courts confronted the agency of slaves in other kinds of litigation
arising out of commercial relationships as well, most commonly actions for
trespass and other actions we would categorize today as “torts.” Owners
brought lawsuits against hirers, overseers, or other whites who had abused
their slaves, or sued to recover the hire price for slaves who had fallen ill or run
away during the lease term.
All of the explanations of slave character and behavior outlined above –
as functions of slave management, as immutable vice, as habit or disease –
operated in some way to remove agency from enslaved people. Reports of
slaves who took action, such as running away on their own impulse and for
their own rational reasons, fit uneasily into these accounts. Yet because slaves
did behave as moral agents, reports of their resistance persistently cropped
up in court. At times, witnesses provided evidence of slaves acting as moral
agents; on other occasions, the nature of the case required acknowledgment
of slaves’ moral agency.
Occasionally the courts explicitly recognized slaves’ human motivations
as the cause of their “vices.” More often, these stories were recorded in the
trial transcripts, but disappeared from the appellate opinions. Just as judges
were reluctant to recognize slaves’ skills and abilities, they feared giving
legal recognition to slaves as moral agents with volition, except when doing
so suited very specific arguments or liability rules. Recognizing slave agency
threatened the property regime both because it undermined an ideology
based on white masters’ control and because it violated the tenets of racial
ideology that undergirded Southern plantation slavery in its last decades.
Judges outside of Louisiana recognized slave agency most directly in
tort cases, in which a slaveholder sued another for damage to a slave when
under the other’s control. Most commonly, the defendant in such a case
was an industrial hirer or a common carrier, usually a ferry boat. Common
carriers were generally held responsible for damages to property on board,
which they insured. In Trapier v. Avant (1827), in which Trapier’s slaves
had drowned crossing in Avant’s ferry, the trial judge tackled the question
of “whether negroes, being the property damaged, they should form an
exception to the general rule of liability in the carrier.” He determined
that slaves should not be an exception. “Negroes have volition, and may do
wrong; they also have reason and instinct to take care of themselves. As a
general rule, human beings are the safest cargo, because they do take care
of themselves.” According to the judge, the humanity of the slaves did not
present enough of a problem to alter the general property rule. “Did this
quality, humanity, cause their death? certainly not – what was the cause?
The upsetting of the boat. who is liable fore the upsetting of the boat? The
ferriman; there is an end of the question.” The dissenting judge, however,
pointed out the problem created by slaves’ human agency: if the slaves had
run away or thrown themselves overboard before the ferryman had a chance
to reach them, then holding Avant responsible would amount to converting
his contract into a guarantee of the slaves’ “good morals and good sense.”18
In effect, not recognizing slaves as agents with free will meant holding all
supervisors of slaves strictly liable for their character and behavior; recognizing
slaves as agents, conversely, meant that supervisors were not required
to “use coercion” to compel slaves’ behavior. The first option created the
equivalent of a warranty of moral qualities in the tort context, with all of
its attendant difficulties. The second option threatened anarchy.
In the commercial, criminal, and family law contexts, courts wrestled
with the dilemmas posed by human property. Lawyers and judges confronted
slave resistance by promoting stories about the origins and development
of slave character and behavior that removed rational agency from
slaves. In this way, the law created an image of blackness as an absence of
will, what Patricia Williams has called “antiwill.”
Because the conflicts so often devolved into a debate over mutability
or immutability of character, the focus inevitably shifted from slaves to
masters. Mastery and the character of masters came into question directly
under the dictum of “like master, like man,” but indirectly as well in every
decision about a slave’s character that reflected in some way on her master’s
control, will, or honor. Northern abolitionists always said that the worst
thing about slavery was how it depraved white men’s character. Slaveholders
defending slavery tried in various ways to disprove this accusation and even
to show that white men improved their character through governing. By
the final decades before the Civil War, most Southern slaveholders were
keenly aware of the relationship between their role as masters and their
character. The courtroom was one arena in which slaveholders and other
white Southerners worked out their hopes and fears for themselves and
their future.
II. SLAVERY, ANTI-SLAVERY, AND THE CONSTITUTION
Just as slavery was fundamental to the culture and economy of the South,
slavery was pivotal to the compromises and conflicts of national politics
throughout the early nineteenth century, and it was the central issue in
the administration of a federal legal system. The constitutional compromise
reached in 1787 did not hold. Increasingly, runaway slaves pressed
the legal system to confront the constitutional basis of slavery just as territorial
expansion forced the political system to reckon with the conflict
between slave labor and free labor. Pro-slavery and anti-slavery constitutional
theories clashed as their advocates used the legal system to forward
their political goals. The irreconcilability of their visions resulted in the
ultimate constitutional crisis, civil war.
18 Trapier v. Avant, Box 21, 1827, S.C. Sup. Ct. Records, South Carolina Department of Archives and History.
Anti-slavery constitutionalism faced an uphill battle in the American
legal and political arena. From the controversy over anti-slavery petitions
in Congress in the 1830s through the debates over fugitive slaves in legislatures
and courts, radical abolitionist positions on the Constitution were
increasingly marginalized. The contest over slavery became ever more the
struggle of Northern whites to head off the “Slave Power’s” threat to their
own freedoms.
The Abolitionist Movement
The era between the American Revolution and the 1830s was the first great
period of the abolitionist movement. The first white abolitionists were a
group of Quaker lawyers in Pennsylvania who formed the Pennsylvania
Abolition Society in 1775. These anti-slavery advocates were elite white
men who worked within the political and legal system to achieve the gradual
abolition of slavery. They used a variety of tactics, including petitioning
state legislatures and Congress regarding specific issues, such as the domestic
slave trade and slavery’s westward expansion, and litigating cases of
kidnapped free blacks or runaway slaves.
Although the lawyers who defended fugitives tried to work within existing
law, rarely making broad arguments about the constitutionality of slavery,
their legal strategies did lay the groundwork for a broader attack on the
institution. Through such litigation, as well as campaigns for the rights of
free blacks in the North, anti-slavery lawyers developed the legal and constitutional
arguments that became the basis for abolitionism after 1830.
The Pennsylvania Abolition Society lawyers hoped that a buildup of judicial
victories, not landmark cases, would eventually result in the national
obstruction of slavery. Numerous complaints from African Americans concerned
about kidnapping drove the Society’s legal strategy, which initially
targeted loopholes and technicalities in Pennsylvania’s own Gradual Abolition
Act in order to free slaves within and outside the state. A number of
legal mechanisms were available to protect black people within the state’s
borders. The most important writ in the anti-slavery arsenal was the “great
writ” of habeas corpus. The writ de homine replegiando was also used to
win the release of captured fugitives and to gain jury trials for them. These
writs required the recipient to “deliver the body [of a detainee] before” a
legal official. The writ de homine replegiando was even more useful than
habeas corpus, however, because it required the fugitive to be released from
custody until the resolution of the legal process. Abolitionist lawyers used
these writs to fight for the freedom of individual slaves, case by case.
By contrast, black abolitionists developed strategies that sharply diverged
from the legal activism of the early white abolitionists. Black anti-slavery
activists used the early media, including pamphlets and newspapers, to
appeal directly to the public, rather than merely lobbying and petitioning
legislators. They also relied on social organizations such as churches and
benevolent societies to disseminate information and build popular support.
To further these activities, the American Society for the Free Persons of
Color was formed in 1830, holding its first meeting in Philadelphia to
discuss national tactics for combating racial prejudice and slavery.
By directly confronting the underlying racism of the colonization movement
and demanding an end to slavery as well as rights for free blacks,
black abolitionists spurred the advent of immediatism. White abolitionists
in Massachusetts, especially William Lloyd Garrison and Amos Phelps,
joined together with black activists to advocate “immediate” abolition and
integration. Abolitionism stormed onto the national scene in the 1830s
with the birth of a new national organization, the American Anti-Slavery
Society. Two calls to action heralded the rise of militant anti-slavery: David
Walker’s 1829 Appeal to the Colored Citizens of the World and the first issue
ofWilliam Lloyd Garrison’s Liberator, on January 1, 1831.Walker’s appeal
exhorted African Americans to take up arms if necessary to fight slavery.
In the inaugural issue of the Liberator, Garrison proclaimed, “I will not
equivocate – I will not excuse – I will not retreat a single inch – AND I
WILL BE HEARD.”
The Liberator targeted all schemes for gradual emancipation, especially
colonization. As criticisms of colonization’s hypocrisy became more prevalent
in the 1830s, many abandoned the movement and devoted themselves
to immediatism: not only Garrison but Arthur and Lewis Tappan, Sarah
and Angelina Grimke, Salmon P. Chase, Gerrit Smith, Theodore Dwight
Weld, and many others. Black abolitionists had called for immediate abolition
before the 1830s, but it was the trends among white abolitionist
leaders in that decade that made immediatism a force in national politics.
The new wave of abolitionists fought for an end to segregated schools and
other institutions within Northern states – winning important victories in
Massachusetts – and began calling for mass action against slavery in the
South. They drew in blacks and whites, women and men, establishing
for the first time an integrated movement. This new strategy of mass
action revolutionized the legal work and legislative petitioning of early
abolitionists. While abolitionists continued to represent fugitive slaves and
to petition legislatures, they refused to obey “political roadblocks or legal
limitations” as their predecessors had. Instead they “used the people to
circumvent the obstacles to abolition.” Huge crowds of citizens who showed
up at a trial might successfully keep a fugitive slave from being retried or
“upset the cool course of the law [by] making an ‘audience’ for the judge
and lawyers to contend with.”19 The American Anti-Slavery Society grew
quickly in the 1830s, establishing 1,600 auxiliary branches by 1837 and
collecting more than 400,000 signatures during the following year on antislavery
petitions to Congress.
Southerners took the groundswell of 1830s abolitionism seriously. In
response to the flood of anti-slavery petitions arriving on Congress’s steps,
they mounted fierce legal and extra-legal action of their own.
A mob in Charleston, South Carolina, seized mail sacks containing American
Anti-Slavery Society literature and burned them. John C. Calhoun
endorsed a bill to prohibit the mailing of any publication “touching on the
subject of slavery” to anyone in a slave state. These efforts to squelch free
speech regarding slavery culminated in the “gag rule” controversy, in which
Calhoun introduced numerous resolutions attempting to force the Senate’s
refusal of anti-slavery petitions.
Yet only a few years later, in 1840, the American Anti-Slavery Society
split into factions, the political abolitionists forming the Liberty Party
to directly effect their anti-slavery aims through political means and the
Garrisonians continuing to insist that change could best be effected through
public opinion. “Let us aim to abolitionize the consciences and hearts of
the people, and we may trust them at the ballot-box or anywhere,” declared
Garrison.20 During the 1840s, three anti-slavery groups emerged from the
schism within the abolitionist movement, each with a different constitutional
theory.
Pro-Slavery and Anti-Slavery Constitutional Theories
Of all of the constitutional theories of anti-slavery, the one that had the
most in common with Southern perspectives on the Constitution was that
of the ultra-radical William Lloyd Garrison. Southerners made the sound
constitutional argument that the compact would never have been made if
it did not recognize and support slavery; that the freedom of whites had
been based on the enslavement of blacks, and that the Constitution protected
property rights in slaves. Garrison declared the Constitution to be “a
covenant with death, an agreement with hell” precisely for the reason that
it did sanction slavery. Garrisonians, including Wendell Phillips, believed
that slavery could not be overthrown from within the legal and constitutional
order; extra-legal means would be required. Beginning in the 1840s,
Garrison moved from his anti-political perfectionism to a constitutional
program of disunion through secession by the free states and individual
repudiation of allegiance to the Union.
19 Richard S. Newman, The Transformation of American Abolitionism: Fighting Slavery in the Early Republic (Chapel Hill, NC, 2002), 144–45.
20 The Liberator, March 13, 1840.
Garrison’s remained a minority perspective among abolitionists, but it
was in some ways the most prescient view. Political and legal action within
the constitutional system continued to be a dead end for abolitionists, who
were continually put on the defensive by ever more aggressive and overreaching
pro-slavery political forces wielding dubious theories of “nullification”
– that the Constitution was a compact between states, which could
“nullify” or withdraw from the compact whenever they chose.
The political appeal of the Southern rights argument to Southern nonslaveholders
depended on several linked ideas, some of which also had
resonance in the North, notably the notion of white man’s democracy –
that having a black “mudsill” class made possible greater equality among
whites. Other Southern arguments, however, confronted the North and
West with what looked like implacably expansionist claims, based in part
on fear of what the South would be like without slavery – the threat that
without the ability to expand its socioeconomic system into the territories,
the South would be doomed to second-class citizenship and inequality in
a Union dominated by an alliance of Northern and Western states. Under
these conditions, Northerners for their part grew fearful that an expansionist
octopus-like “Slave Power” would overwhelm and consume the free-labor
North.
Within anti-slavery politics, radical constitutional abolitionists such as
Frederick Douglass and Lysander Spooner began to argue after 1840 that,
rather than endorse slavery, the Constitution in fact made slavery illegitimate
everywhere, in the South as well as in the territories. Theirs was
a minority position that relied on a textual reading of the Constitution,
arguing that the document nowhere explicitly sanctioned slavery and that
the “WRITTEN Constitution” should not be “interpreted in the light of
a SECRET and UNWRITTEN understanding of its framers.” The radicals
argued that the federal government should abolish slavery in the states
because it violated the Fifth Amendment due process guarantee, the Article
IV guarantee of republican government, and other clauses of the Constitution.
Spooner and Douglass also made originalist arguments about the
founders’ intentions to have slavery gradually wither away. They claimed
that the slavery clauses of the Constitution had been written in such a
way as to offer no direct support to the institution, even while satisfying
its supporters in the short term. According to this view, the Constitution
had become perverted by acquiescence in pro-slavery custom, but its antislavery
character could be redeemed by federal action: “The Constitution
is one thing, its administration is another. . . . If, in the whole range of the
Constitution, you can find no warrant for slavery, then we may properly
claim it for liberty.” Finally, the radicals relied on a natural law interpretation
of the Constitution, insisting that it had to be read side by side with
the Declaration of Independence and given the meaning that best expressed
the ideals of the Declaration.21
The most popular anti-slavery position, held by moderate abolitionists
like Salmon P. Chase, posited that the federal government lacked power
over slavery, whether to abolish it where it existed or to establish it anew
anywhere. Drawing on Lord Mansfield’s famous decision in Somerset’s Case
(1772), they argued that slavery was established only by positive law and
only existed in those places (the South) where it had been so created. The
political theory that went along with this constitutional theory was that of
“divorcement,” the idea that slavery was dependent on support by the federal
government and would wither away if separated from it. By 1845, divorce
had given way to Free Soil, which in effect fully applied Somerset to American
circumstance. This was the idea embodied in the Wilmot Proviso of 1846;
it eventually became the Republican Party platform and the argument of
Lincoln in his debates with Stephen Douglas. It was opposed by Douglas,
whose theme of “popular sovereignty” held that each new state could decide
for itself whether to be slave or free. The Compromise of 1850 and the
Kansas-Nebraska Act of 1854 embodied popular sovereignty’s emphasis on
state-by-state decision making, leading to a terrible civil war in the territory
of Kansas between rival pro-slavery and anti-slavery governments, each with
its own constitution.
All of these constitutional theories came into direct conflict in a series of
legal confrontations involving two sets of issues: the fate of fugitive slaves
in free states and territories and the future of the territories themselves.
The first set of controversies, regarding fugitive slaves, came to a head
largely in state legislatures and courts, as Northern legislatures sought to
protect fugitives and both Northern and Southern courts wrestled with the
interpretation of those statutes and of the Fugitive Slave Laws passed by
Congress to implement the Constitution’s Fugitive Slave Clause. The second
set of dilemmas, regarding the status of slavery in the Western territories,
played out in Congress and in presidential politics in a series of short- (and
shorter) lived compromises. The two sets of controversies culminated and
merged in the dramatic and infamous Supreme Court case of Dred Scott v.
Sandford (1857), which represented the ultimate constitutionalization of
political conflict – a case that the Supreme Court meant to resolve the
conflict conclusively, but instead helped pave the way for war.
21 Frederick Douglass, “The Dred Scott Decision: Speech at New York, on the Occasion of the Anniversary of the American Abolition Society,” reprinted in Paul Finkelman, ed., Dred Scott v. Sandford: A Brief History with Documents (New York, 1997), 177, 181.
Personal Liberty Laws and the Rights of Fugitives in the North
Many slaves ran away, some with help from whites and free blacks; the
so-called Underground Railroad had an estimated 3,200 active workers.
It is estimated that 130,000 refugees (out of 4 million slaves) escaped the
slave South between 1815 and 1860. By the 1850s, substantial numbers of
Northerners had been in open violation of federal law by hiding runaways for
a night. By running away, slaves pushed political conflict to the surface by
forcing courts and legislatures to reckon with the constitutional problems
posed by slaves on free soil. Later, during the war, slave runaways would
again help force the issue by making their own emancipation militarily
indispensable.
Southern slaves in the North – whether visiting with their masters or
escaping on their own – raised a difficult issue of comity for the courts to
resolve. Even so-called sojourning slaves could be considered free when they
stepped onto free soil. The question of whether the Northern state should
respect their slave status or whether the Southern state should bow to that
rule became a heated issue throughout the states.
The state courts reached different answers to the question. The best precedent
from the abolitionist standpoint was a Massachusetts case decided by
Chief Justice Lemuel Shaw in 1836, Commonwealth v. Aves. Citing Somerset’s
Case, Shaw wrote that slavery was “contrary to natural right and to laws
designed for the security of personal liberty.” Therefore, any “sojourning”
slave who set foot on Massachusetts soil became free; fugitives were the only
exception. But Aves represented the peak of anti-slavery interpretation of
comity. By the end of the 1830s, any agreement in the North about the
obligations of free states to return slaves to Southern owners had dissipated.
States had given divergent answers on the questions of whether legislation
was necessary to secure the rights of masters and whether states could or
should provide jury trials to alleged slaves.22
22 35 Mass. 193 (1836).
From the 1830s until 1850, many Northeastern states tried to protect
Northern free blacks from kidnapping by slave catchers and to provide some
legal protections for escaped slaves who faced recapture in the North. In most
of New England, New York, New Jersey, and Pennsylvania, legislatures
passed personal liberty laws to limit the recovery of fugitive slaves from
within their boundaries by forbidding the participation of state authorities
or the use of state property in the capture of a fugitive. Other laws gave
alleged runaway slaves procedural protections in court and created various
obstacles to recovery by owners.
Some state statutes, such as that of Massachusetts, tied anti-kidnapping
provisions to the writ of habeas corpus. One such law was the Pennsylvania
personal liberty law, which gave rise to the famous Supreme Court case,
Prigg v. Pennsylvania (1842). Prigg was a test case arranged by Pennsylvania
and Maryland to determine the constitutionality of Pennsylvania’s personal
liberty law. For the Court, Justice Joseph Story held the Fugitive Slave Act
of 1793 to be constitutional and therefore concluded that a Pennsylvania
law prohibiting local collaboration with slave reclaimers was unconstitutional.
He read the Constitution with the assumption that the fugitive
slave clause had been necessary to the compromise that secured the Union
and the passage of the Constitution. Therefore, “seizure and recaption” of
fugitive slaves was a basic constitutional right, and states could not pass laws
interfering with that right. But Prigg left open important questions, some
of which Story purported to answer only in dicta: Could states enact laws to
obstruct recapture or provide superior due process to captured slaves? Did
Prigg enshrine in American law, as Story later claimed, the Somerset principle
that slavery was only municipal law? Justice Story’s opinion argued that
the power to pass legislation implementing the fugitive slave clause resided
exclusively in Congress. Congress proceeded so to act in 1850, as part of
the Compromise of 1850. For his part, Chief Justice Taney – concurring
in Prigg – argued that the states, while they could not legislate to hinder
recaption, could always enact measures to aid the rights of slaveholders to
recapture fugitives.
Abolitionists were furious over the outcome in Prigg. Garrison wrote:
“This is the last turn of the screw before it breaks, the additional ounce
that breaks the camel’s back!”23 Yet many anti-slavery advocates used the
essentially pro-slavery Prigg decision for their own purposes in the 1840s,
picking up Story’s hint that it could be read, or at least mis-read, to bolster
the Somerset position, and insisting that free states must do nothing to
advance slavery.
Northern states passed a new series of personal liberty laws in part out
of increased concern for the kidnapping of free blacks given the lack of
procedural protections in the 1850 federal Fugitive Slave Act, but also
out of a growing defiance against the “Slave Power.” For example, a new
Pennsylvania Personal Liberty Law of 1847 made it a crime to remove a
free black person from the state “with the intention of reducing him to
slavery” and prohibited state officials from aiding recaption.
23 The Liberator, March 11, 1842.
It reaffirmed
the right of habeas corpus for alleged fugitives and penalized claimants who
seized alleged fugitives in a “riotous, violent, tumultuous and unreasonable
manner.”24 The Supreme Court overturned these laws in the consolidated
cases of Ableman v. Booth and United States v. Booth in 1859, in an opinion by
Chief Justice Taney upholding the constitutionality of the 1850 Act and holding
that a state could not invalidate a federal law.
Increasingly, slaveholding states specified that slavery followed a slave to
free jurisdictions, whereas free states made the distinction between temporary
sojourns, during which a slave retained slave status, and transportation
to a free state or territory with the intent to remain, in which case the slave
was emancipated. However, under the 1850 Fugitive Slave Law, blacks in
any state, whether free or not, were in danger of being accused of fleeing
from bondage. The law empowered court officials to issue warrants allowing
alleged runaways to be turned over to any claimant with convincing
evidence that the prisoner was a slave, without a trial. The law greatly
enhanced slaveholders’ power to recover their property anywhere in the
country by annulling attempts by states to protect fugitives from recapture.
Furthermore, the law allowed marshals to summon “bystanders” to
help them, commanded “all good citizens” to “assist in the prompt and
efficient execution of this law,” and provided officials with an extra reward
for determining the accused to be a fugitive.25 Gangs of bounty hunters
began kidnapping African Americans to sell southward. Captured blacks’
opportunities to defend themselves were severely eroded. As many as 3,000
free blacks, fearing enslavement, headed for Canada by the end of 1850. No
longer could one be certain that free states were truly free; it now seemed
to many Northerners as though the tentacles of the “Slave Power” reached
to the Canadian border.
Comity – recognition of the validity of the laws of one state by the
sovereign power of another – had seemed for a time to be a stable compromise
between the rights of property and of liberty. Joseph Story wrote in 1834
that comity was rooted in “a sort of moral necessity to do justice, in order
that justice may be done to us in return.” Similarly, Cobb believed comity
was necessary to “promote justice between individuals and to produce a
friendly intercourse between the sovereignties to which they belong.”26 But
that accommodation dissolved under the pressure of sectional conflict.
24 Pennsylvania Session Laws, 1847, 206–08, “An Act to Prevent Kidnapping . . . and to repeal certain slave laws.”
25 9 U.S. Statutes at Large 462–65 (1850), at 463.
26 Joseph Story, Commentaries on the Conflict of Laws (Boston, 1846), 39–45; Cobb, 174.
Both Southern and Northern courts became increasingly aggressive. In Lemmon v.
People (1860), for example, New York's highest court freed a number of
slaves who were merely in transit from Virginia to Texas on a coastal vessel
and had docked briefly in New York City’s harbor to refuel. Similarly, the
Missouri Supreme Court, in Scott v. Emerson (1852), explained, “Times are
not as they were when the former decisions on this subject were made.
Since then not only individuals but States have been possessed of a dark and
fell spirit in relation to slavery. . . . Under such circumstances it does not
behoove the State of Missouri to show the least countenance to any measure
which might gratify this spirit.”27 Missouri’s refusal to apply principles of
comity to the slave Dred Scott was ratified by the U.S. Supreme Court five
years later.
Territorial Expansion
Just as the problem of fugitives increasingly brought sectional tension to
the surface, so did the seemingly inevitable march of territorial expansion.
Westward expansion of the United States raised the political question
of whether slave or free states would dominate the Union. The Missouri
Compromise of 1820 had decreed one new free state for each new slave
state; Southerners worried about the balance of power in Congress between
slave and free states. The succeeding decades saw a sequence of “compromises”
struck, each lasting a shorter time than the previous one.
The Missouri Compromise admitted Maine as a free state and Missouri
as a slave state and drew a line at latitude 36°30' – all new states formed
north of the line would be free, and all south would be slave. This was
the most stable compromise of the antebellum period. It was upset by the
annexation of Texas in 1845. Just three months after the start of the Mexican-
American War, Congressman David Wilmot proposed an amendment to a
military appropriations bill, which became known as the Wilmot Proviso.
It would have barred slavery in all of the territories acquired from Mexico.
Although the Proviso failed to pass, it marked the beginning of the Free
Soil movement. Free Soilers wanted to check Southern power and keep
slavery out of new territories to protect the “rights of white freemen” to
live “without the disgrace which association with negro slavery brings on
white labor.” The Free Soil Party formed in 1848 to fight for free labor in
the territories. Although the new party failed to carry a single state in the
1848 election, it did quite well in the North.
In the 1850s, “settlements” of the slavery question came fast and furious
– each one settling nothing.
27 Scott v. Emerson, 15 Mo. 576, 586 (1852).
The Compromise of 1850 resulted in the
admission of California to the Union as a free state, while the other parts of
the Mexican Territory came in under “popular sovereignty”; the slave trade
was abolished in the District of Columbia; and the new, more stringent
Fugitive Slave Law was passed. Under the 1850 law, suspected fugitives
were denied the right to trial by jury and the right to testify in their own
behalf.
In 1854, Senator Stephen Douglas introduced a bill to organize the
Kansas and Nebraska territories on the basis of popular sovereignty, officially
repealing the Missouri Compromise. Douglas hoped that the Kansas-
Nebraska Act would focus the Democratic Party on internal expansion and
railroad building. Instead, the passage of the act split the Democratic Party
along sectional lines and led to the formation of the Republican Party,
which was a coalition of Northern Whigs, dissident Democrats, and Free-
Soilers who first came together in Michigan and Wisconsin. The Republicans
emphasized a platform of free soil and free labor for white men.
In 1856, violence broke out in Kansas: the “sack of Lawrence” by
pro-slavery forces was followed by the civil war that became known as
“Bleeding Kansas” and John Brown's massacre of slaveholders at Pottawatomie.
Preston Brooks’ near-fatal caning of abolitionist Senator Charles
Sumner on the floor of the Senate coincided with the Lawrence attack. All
these events convinced free-soil Northerners that the “Slave Power” had
grown impossibly aggressive. Likewise, Southerners began to believe that
abolitionists’ tentacles were everywhere.
It was in this overheated atmosphere that the Supreme Court decided the
Dred Scott case in 1857. Chief Justice Roger Taney apparently hoped that
his opinion might settle these roiling constitutional controversies. Instead,
he probably hastened the resort to armed conflict.
The Dred Scott Case
Dred Scott v. Sandford addressed a question of comity that was similar to but
not the same as that raised by Prigg v. Pennsylvania. In Dred Scott, the issue
was not the fate of a fugitive who had fled to a free state, but rather that of a sojourner in a free
territory. Territorial expansion raised the new question of whether slaves who
moved into new territories should be presumed slave or free. Chief Justice
Roger Taney’s infamous decision in Dred Scott v. Sandford represented only
the second time to that point that the Supreme Court had overturned an
act of Congress, and it was seen by many at the time as the first shot fired
in the Civil War. It was in reaction to the Dred Scott decision immediately
following the Kansas-Nebraska Act that Abraham Lincoln declared, “A
house divided against itself cannot stand.”
The case’s long legal odyssey began when Dred Scott’s owner, John Emerson,
took Scott out of Missouri, a slave state, to Illinois, a free state, and
then to Minnesota Territory, a free territory. Emerson was an Army physician
successively transferred to different stations. Scott’s daughter was born
somewhere on the Mississippi River north of Missouri, in either a free state
or territory. Scott and his daughter returned to Missouri with Emerson,
who died, leaving his wife a life interest in his slaves. Scott then sued for
his freedom; he won in lower court in Missouri on comity grounds, supported
by earlier Missouri precedent that a master voluntarily taking a slave
for permanent residence in a free jurisdiction liberated the slave. However,
in 1851, the Supreme Court decided Strader v. Graham (in an opinion by
Taney), ratifying a turnaround in conflict-of-laws doctrine, whereby courts
were to prefer the policy of the forum state – a holding first applied in
Northern courts as anti-slavery doctrine, but one that Southern courts could
use too.
When the Dred Scott case arrived at the Missouri Supreme Court, the
Court applied Missouri law and found Scott to be a slave, noting that
“[t]imes are not as they were when the former decisions on this subject
were made.” Sectional conflict had made comity impossible. Dred Scott
found a new master, John Sanford (brother of the widow Emerson) and, in a
collusive suit, sued for freedom from his new master in another state through
a diversity suit in federal court. The federal district court found that Scott’s
status should be determined by Missouri law, which had already upheld
his status as a slave, and he therefore remained a slave. Dred Scott appealed
to the U.S. Supreme Court in December 1854, and the case was argued in
February 1856. Interestingly, no abolitionist lawyers argued Scott’s case. His
attorney, Montgomery Blair, was a Free-Soiler concerned with the spread
of slavery into the territories. George T. Curtis, who joined Blair for the
December 1856 reargument of the case, was a political conservative opposed
to anti-slavery but fearful that the Taney Court might overturn the Missouri
Compromise and exacerbate sectional conflict.
The case presented two important questions. First, was Scott a citizen
for purposes of diversity jurisdiction? Second, was Scott free because he
had been taken into a free state and free territory? A third question, which
could probably have been avoided, was whether Congress had the power
to prohibit slavery in the territories. In other words, was the Missouri
Compromise constitutional? In an era in which the Supreme Court usually
strove for unanimity, there was little agreement on the Court on any one
of these questions. The Court issued nine separate opinions in the case,
including numerous overlapping concurrences and dissents, and many have
argued that Taney’s well-known opinion spoke for a majority of one. The
opinions of Justice Daniel and Justice Campbell were, if such a thing is possible,
even more extreme than Taney’s. Nevertheless, Taney’s high-handed
effort to “settle” the sectional conflict on Southern terms certainly had a
far-reaching influence.
The most infamous part of Taney’s opinion was the first section, in which
he held that Scott was not a citizen, because neither slaves nor free blacks
could claim the privileges and immunities of citizenship. To reach this conclusion,
Taney made an originalist argument that blacks were “not included,
and were not intended to be included, under the word ‘citizens’ in the
Constitution. . . . On the contrary, they were at [the time of the framing
of the Constitution] considered a subordinate and inferior class of beings,
who had been subjugated by the dominant race.” In fact, blacks were “so far
inferior that they had no rights which the white man was bound to respect.”
Even if some states, like Massachusetts, had bestowed rights on them, their
state citizenship did not confer U.S. citizenship on them.
Taney might have stopped there, finding that Dred Scott had no right
to sue in federal court and sending him back to Missouri court. Justice
Nelson’s concurrence argued more conservatively that slavery was a state
question that should be (and had been) decided by the state of Missouri.
But Taney was determined to answer the final question in the case, namely
whether Congress could make a territory free by federal law. Taney held
that the Missouri Compromise was unconstitutional and that the federal
government lacked power over slavery except to protect property rights in
slaves. He claimed that Article IV Sec. 3 of the Constitution, authorizing
Congress to legislate for the territories, applied only to the public lands
as they stood in 1789. According to this logic, the Northwest Ordinance
was constitutional, but Congress had no power to legislate for the territories
once people were able to legislate for themselves, reaffirming the
“popular sovereignty” principle of the Kansas-Nebraska Act. A blistering,
sixty-nine page dissent by Justice Benjamin Curtis attacked each and
every one of Taney’s premises. Curtis painstakingly recreated the history
of free blacks in the late eighteenth century, showing that in a number of
states, free blacks had been voters and citizens at the time of the founding.
Curtis also argued forcefully that Congress had the right to regulate
slavery.
Taney had hoped that his decision would lay to rest the political debate
over slavery. He was not the only one to harbor this hope. In his inaugural
address delivered just two days before the announcement of the decision,
Democratic President-elect James Buchanan observed pointedly that the
issue of slavery in the territories was “a judicial question, which legitimately
belongs to the Supreme Court of the United States,” to whose decision he
would “cheerfully submit.”28 Many observers saw this agreement between
Taney and Buchanan as more than happenstance – in fact, as a conspiracy.
28 James Buchanan, Inaugural Address, March 4, 1857, in James D. Richardson, ed., A
Compilation of the Messages and Papers of the Presidents (New York, 1897), 6:2962.
In his opening campaign speech to the Illinois Republican convention in
1858, Lincoln pointed to the fact that the Dred Scott decision was
held up . . . till after the presidential election . . . Why the outgoing President’s felicitation
on the indorsement? Why the delay of a reargument? Why the incoming
President’s advance exhortation in favor of the decision? . . .We can not absolutely
know that all of these exact adaptations are the result of preconcert. But when we see
a lot of framed timbers, different portions of which we know have been gotten out
at different times and places and by different workmen – Stephen, Franklin, Roger
and James, for instance – and when we see these timbers joined together . . . in such
a case, we find it impossible to not believe that Stephen and Franklin and Roger
and James all understood one another from the beginning, and all worked upon a
common plan or draft . . . 29
Of course, the decision could not have had less of the effect Taney hoped
for it. Frederick Douglass declared that his “hopes were never brighter than
now,” after the decision came down, because he believed it would incite
the North to take a firmer stand against slavery. Dred Scott almost certainly
contributed to the election of Abraham Lincoln in 1860 and the onset of
the Civil War the following year.
Dred Scott was never overruled by the Supreme Court, although the Thirteenth
and Fourteenth Amendments, ratified in 1865 and 1868,
ended slavery and guaranteed civil rights for African American citizens.
Justice Frankfurter was once quoted as saying that the Supreme Court
never mentioned Dred Scott, in the same way that family members never
spoke of a kinsman who had been sent to the gallows for a heinous crime.
CONCLUSION
On the eve of the Civil War, slavery was a system that functioned quite
smoothly on a day-to-day level. Law helped the institution function –
enforcing contracts, allocating the cost of accidents, even administering
sales. Slaves who fought back against their masters could sometimes influence
the outcome of legal proceedings, and their self-willed action posed
certain dilemmas for judges who sought to treat them solely as human property.
But the legal system developed doctrines and courtroom “scripts” that
helped erase evidence of slaves’ agency and reduce the dissonance between
what the ideology of white supremacy dictated that relations between slaves
and masters ought to be and what had actually transpired among slaves,
slaveholders, and non-slaveholders to bring them into the courtroom.
29 Abraham Lincoln, Illinois State Journal, June 18, 1858, reprinted in Paul M. Angle,
Created Equal? The Complete Lincoln-Douglas Debates of 1858 (Chicago, 1958), 1–9.
Ultimately, it was politics that destroyed slavery. Slaves helped push
sectional conflict over slavery to the surface by running away. Fugitive
slaves forced the legal system to confront the issue of comity as well as
the problem of territorial expansion. And because, in the United States, all
major political conflict is constitutionalized, although slavery did not lead
to a crisis in law, it did create a crisis for the Constitution. The Civil War
was the constitutional crisis that could have ended the brief experiment
of the United States. Instead, it led to a second American Revolution, a
revolution as yet unfinished.
10
the civil war and reconstruction
laura f. edwards
The Civil War and Reconstruction utterly transformed American society.
Historians argue over the nature and extent of the changes wrought during
the period, but there is little disagreement over the importance of the
period as such: if nothing else, the sheer volume of scholarship establishes
that point. Textbooks and college-level survey courses usually break with
the Civil War and Reconstruction, which provide either the ending for the
first half or the beginning of the second half. Books debating the causes of
the war and its implications line the library shelves and are fully represented
in virtually every historical subfield: party politics, ideology, religion, the
economy, slavery, race and ethnicity, the status of women, class, the West,
the South, nationalism and state formation, as well as law and the
Constitution. Other historical issues dating from the American Revolution
to the present are linked to this period as well – historians look back to
the nation’s founding for the war’s roots and then trace its effects into the
present.
Rather than focusing on the war years or their immediate aftermath, legal
historians have tended to concentrate on matters linked to it, before and
after. Particular emphasis has been given to the perceived limits of the U.S.
Constitution in defusing the issues that led up to war and to the changes
that occurred in federal law afterward, although a considerable body of work
examines the legal implications of policy changes in the Union during the
war as well. The first group of professional historians to consider these
issues had been raised in the bitter aftermath of the war, and their work
reflected that background. This group – known as the Dunning school,
after its intellectual mentor, William A. Dunning, a professor at Columbia
University – deemed Reconstruction an unmitigated failure. Although the
work of Dunning school historians ranged widely in focus, they singled
out legal changes at the federal level – specifically, the Thirteenth, Fourteenth,
and Fifteenth Amendments – for particular opprobrium. Open apologists
for white supremacy, these historians argued that the amendments
constituted an illegal usurpation of state authority and led the country to
the brink of chaos: by imposing the will of a radical minority and granting
rights to African American men who were incapable of exercising them, the
amendments destroyed the South and jeopardized the nation's future. Inflammatory
today because of its open racism, Dunning School scholarship actually
reflected a reconciliation among whites, North and South, at the beginning
of the twentieth century. It assumed a consensus on racial issues in all sections
of the country. The war and, particularly, its aftermath could thus be
characterized as an avoidable aberration, the work of radical elements in the
North who captured the national stage and forced their wild schemes on an
unsuspecting populace.
The Dunning school has had a remarkably long purchase on the scholarship
of the period, including legal history. The aftershocks of World War II,
when the scope of the Holocaust was revealed, dealt a final blow to its overtly
racist props. But its themes continued to define the basic questions about
legal change: Was the Civil War inevitable, within the existing constitutional
framework? To what extent did postwar policies alter fundamentally
the legal order of the nation?
Later historians writing in the shadow of the civil rights movement
addressed those questions by focusing on Reconstruction’s promise of full
civil and political equality to African Americans. One strand of the scholarship
has emphasized the failures. A combination of judicial foot-dragging
and political maneuvering turned back the clock nearly to where it had
been before the war. Not only were white Southerners allowed to regain
control, they were also allowed – even encouraged – to ignore new federal
law and to create a new racial system that closely resembled slavery. To make
matters worse, federal courts then turned to the Fourteenth Amendment
to buttress the position of corporations at the expense of labor, creating
new inequalities from the very laws that were intended to promote greater
equality.
Where some historians have seen a glass half empty, others have seen
it half full. Federal policy, particularly the Fourteenth Amendment, was
a “second American revolution” that provided the constitutional basis to
fulfill at last the promises of the first. Progress came slowly, culminating
only in the mid-twentieth century with the civil rights movement. But
those changes never would have been realized at all had it not been for the
policies of the Reconstruction era.
The tendency to see Reconstruction as an era that promised great legal
change has spilled over into the scholarship on the Civil War. Recent histories
have treated the war as if it were inevitable, a fight that had to be
waged to clear the way for what came next. They characterize the conflict as
the collision of two distinct social orders, each with different conceptions of
individual rights, the role of law, and the reach of the state. Only one could
survive. One body of scholarship has focused on the dynamics leading up the
war, with an eye toward explaining why conflicts reached the point where
the existing institutional order could no longer contain them. This work
has pointed to inherent weaknesses attributable to the Constitution, particularly
the lack of authority at the federal level, which short-circuited the
development of a strong, effective nation-state. Those weaknesses not only
contributed to the outbreak of the war but they also presaged problems
that the reconstructed nation would need to address afterward. Another
body of scholarship has looked to the war years more directly as a precursor
to Reconstruction, examining wartime policies within the Union and the
Confederacy to contextualize postwar policies and reactions to them. This
work has tended to emphasize change rather than continuity by showing
how the war, itself, took the nation in new legal directions.
Most work undertaken within the field of legal history has focused on
the national level, exploring mandarin policy debates and then tracing the
effects through the states and, from there, to people’s lives. This scholarship
treats causation as a process that works from the top down, with the most
momentous changes emanating from the three branches of the national
government. Lately, though, a body of work has emerged that not only
expands the field of vision to include law at the state and local levels but also
locates the sources of change outside the federal government. Not all of this
work falls within the field of legal history, at least as traditionally conceived:
it is to be found in women’s, African American, labor, and Southern history
and is inspired by the approaches used in social, cultural, and economic
history. Nevertheless, this body of scholarship both engages questions that
have been central to legal history and highlights the legal component of
issues not usually considered in that field. As this work shows, the war
opened up a series of debates about the location of legal authority and the
daily operation of law. It also reveals that legal change flowed from below as
well as above, directed by the actions of ordinary people in all sections of the
country who confronted questions about law in the course of the war and
its aftermath. The theoretical implications of federal law filtered through
the courts, but the practical application occurred in local areas, both North
and South. That dynamic drew ordinary Americans further into conflicts
about the operation of law, its scope, and its ends.
Here, I unite the traditional work of legal history with the new approaches
that contemplate law from the perspective of social, cultural, and economic
history. I develop one central argument: the Civil War forced the nation
to confront slavery. The implications of that confrontation reached beyond
the status of former slaves to transform law and legal institutions in ways
that affected all the nation’s citizens. I begin with the Civil War itself,
focusing on changes during the war years that took the nation in new legal
directions afterward. In both the Union and Confederacy, many wartime
policies addressed immediate concerns and were not intended as reforms
to law or the legal system. Yet, whether explicitly intended to change law
or not, wartime policies laid the groundwork for profound changes in the
legal order afterward.
The second section turns to Reconstruction, but takes the analysis beyond
the brief formal span of that period and into the last decades of the nineteenth
century. Here I trace the legal difficulties presented by emancipation,
which necessitated the integration of a formerly enslaved population into
the legal order. I look beyond federal cases resulting from the Reconstruction
amendments and other national legislation for signs of what Reconstruction
meant at the state and local levels. An exclusive focus on the federal
level can understate the extent of change in this period by characterizing
the problem as one of establishing and implementing the civil and political
equality of African Americans within the existing legal order. As difficult as
that was, the issues become more problematic when considered in the context
of states and localities. Events at these levels reveal that the extension
of rights to African Americans required structural change in the logic and
institutions of law. The implications reached out in unpredictable directions,
involving issues seemingly unconnected to the war and people whose
legal status was not directly affected by Reconstruction policies.
I. THE CIVIL WAR
From the outset, questions about law, particularly the location of legal
authority, were central to the Civil War. Secessionists, of course, asserted
state sovereignty over most legal issues. At the outbreak of the war, debates
over states’ rights had become inextricably tied to sectional differences
connected to slavery. Those claiming to represent “the South” advocated
an extreme states’ rights position, whereas their counterparts from “the
North” predicted the end of the Union should such a view prevail. Yet
the polarized rhetoric overstated the differences between the two sections.
It also oversimplified the underlying issues by conflating questions about
governing structures with disagreements over the content of the resulting
decisions. Those issues centered on federalism – the relative balance of legal
authority between states and the federal government. Federalism had not
always divided the nation into opposing geographic sections. At the time of
the nation’s founding, for instance, Southern slave holders were among those
advocating a stronger federal government. In 1832, during the Nullification
Crisis, most Southern political leaders still rejected the extreme states’ rights
position of South Carolina radicals. Even in subsequent decades, as states’
rights became a lightning rod for sectional differences, the rhetoric did not
accurately describe federalism’s practical dynamics, to which the balance
of state and federal authority was as much a means as an end. Political
leaders shifted back and forth, depending on the particular issue. Stances
on the Fugitive Slave Act (1850) and the U.S. Supreme Court’s decision
in Dred Scott (1857) are representative. Many Northerners opposed both
as illegitimate encroachments on states’ established purview over the legal
status of those who lived within their borders. Many Southerners supported
them as necessary means to uphold property rights, as established in their
own states and threatened by other states’ laws. However heated, debates
remained possible as long as both sides accepted the legitimacy of the
existing legal order. The breaking point came when Southern political
leaders rejected that order. Political leaders remaining in the Union did
not, nor did they seek fundamental change in it.
Yet, change is what resulted, on both sides of the conflict. Recent research
has focused on the Union, particularly the dramatic increase in federal
control over issues that previously had rested with states, local areas, and
individuals. Scholars have shown how the evolution of policy in the Union
during the Civil War laid the groundwork for the dramatic legal changes of
Reconstruction. Their analyses also tend to echo the terms of contemporary
debate, namely that centralization would remove law from the people. Yet,
tracing the implications beyond the federal arena suggests additional layers
to the issue. In daily life, the results of increased federal authority were
more ambiguous, altering people’s relationship to law in unforeseen ways.
In those areas occupied by the Union Army, for instance, federal presence
actually had democratizing tendencies. In other policy realms traditionally
considered as attempts to increase opportunities for ordinary Americans –
such as the transcontinental railroad and the opening of Western lands –
federal policies had very different effects.
Historians have not considered Confederate policies to have had the same
kind of long-term impact on the nation’s legal order as those of the Union.
That conclusion is understandable, in the sense that Confederate laws were
fleeting products of a short-lived political experiment. Even so, their implications
were still lasting, because states’ rights led in two, contradictory
directions that left deep trenches in Southern soil. Conducting a war to
establish states’ rights required a centralized, federal government. By the
end of the war, the Confederate national government actually had assumed
far more authority than the U.S. government ever had, at least on paper. In
practice, however, the continued commitment to states’ rights undercut the
central government’s legitimacy and tied it up in controversy. The upheaval
of war, which was fought primarily on Confederate soil, further undermined
the legitimacy of government at all levels. It was not just the war, moreover,
that produced conflicts over law. Different people had long defined law
in their own terms: the dislocation of wartime provided opportunities for
those differences to emerge. The result was a radical decentralization of legal
authority that went far beyond what states’ rights advocates ever imagined
or desired. The end of the war may have led to the collapse of both the
Confederate government and the legal order that it tried to create. But the
conflicts generated by that government and its policies defined the postwar
years.
The Union
In mobilizing to defend the existing legal order, those in the Union ended
up changing it. As often has been the case in U.S. history, war went hand in
hand with an increase in federal authority. Abraham Lincoln began using
the open-ended nature of presidential war powers almost immediately in
his efforts to hold the border states of Maryland, Kentucky, and Missouri in
the Union. He suspended civil rights, threatened martial law to force votes
against secession, and then forestalled further conflict through military
occupation. Lincoln continued to make liberal use of presidential powers
throughout the war. In 1862, he announced that anyone who resisted the
draft, discouraged others from enlisting, or was deemed disloyal to the
Union war effort would be subject to martial law. That meant dissenters
would be tried in military courts rather than in state or local courts, where
juries might be more sympathetic. He also suspended constitutional guarantees,
such as the writ of habeas corpus, thereby closing off the means
by which those arrested through executive order could contest imprisonment.
Executive authority expanded in other directions as well, particularly
through the draft and the War Department. While the draft represented a
major encroachment by the federal government on the rights of its citizens,
the War Department became a model for modern bureaucracy, as it developed
an elaborate structure to manage those drafted into the military and
to oversee occupied areas.
Congressional Republicans followed suit, extending the federal government’s
reach to wage war more effectively. Funding the army’s operations
required the complete overhaul of the nation’s financial structure and the
centralization of authority over it. Congress also enhanced federal power
by expanding the scope of the judiciary. Concerns about dissent led to the
Habeas Corpus Act of 1863, which enhanced the power of federal officials
vis-à-vis the states and expanded the jurisdiction of federal courts.
That same year, Congress also created the Court of Claims to settle claims
against the U.S. government, which formerly had been settled in Congress.
Claims multiplied exponentially as a result of the war.
Not all wartime initiatives, however, were connected to the war. Many
were part of the Republican Party’s political agenda, which advocated more
federal involvement in the nation’s economy. What Republicans hoped to
accomplish was an extension of the existing economic and legal order to
encompass the generality of the population. Their goal was evident in the
party’s slogan, “free soil, free labor, free men,” evoking a polity based on
independent producers along the lines of the Jeffersonian ideal. In that
ideal, male farmers and artisans owned the means of production (land or
the tools of their trade), which allowed them to direct their own labor
and that of their families. Male economic independence then grounded
the legal order, because it entitled men to rights: access to the legal system
through full civil rights as well as the ability to alter and create law through
political rights. Economic independence thus secured the entire nation’s
future by ensuring a responsible, engaged citizenry, who were equal before
the law.
Like most ideals, this one was more consistent in theory than in practice.
Despite the rhetorical emphasis on equality, inequality was integral to it.
By the 1850s, most adult white men could vote and claim the full array
of civil rights on the basis of their age, race, and sex. But for others, age,
race, and sex resulted in inequalities. The legal status of male, independent
producers, for instance, assumed the subordination of all domestic dependents
– wives, children, and slaves – to a male head of household and the
denial of rights to them. Free African Americans were included in theory
but not in practice. The free black population had increased in the decades
following the Revolution, with abolition in Northern states, the prohibition
of slavery in many Western territories, and individual emancipations
in the South. State and local governments had responded by replacing the
disabilities of slavery with restrictions framed in terms of race. Even for
free white men, the ideal of economic independence and legal equality
had never fully described reality. For many, economic independence had
been difficult to achieve. Their situation deteriorated as capitalist economic
change intensified in the antebellum period, for those changes eroded the
link between economic independence and legal rights as state legislatures
uncoupled claims to rights from the ownership of productive property.
Numerous legal restrictions still attached to those without visible means
of support and even those who performed menial labor.
The theoretical link between economic independence and legal rights
nonetheless persisted. If anything, its symbolic significance acquired more
importance over time, as the Republican Party’s popularity suggests. The
notion of a republic of independent producers resonated widely and powerfully,
even among those who did not enjoy its promises in their daily
lives. Placing those promises at the center of its platform, the Republican
Party hoped to use federal power to create more independent producers and
promote their interests.
Secession gave Republicans a decisive majority in Congress and the
opportunity to act on this agenda, which they did, even as the war raged
around them. With the 1862 Homestead Act, they opened up settlement of
Western lands that had been tied up in sectional controversy. The act made
land more readily available to individual farmers than previously. It also
prohibited slavery, which Republicans deemed incompatible with the interests
of independent producers. To encourage farmers’ success, Congressional
Republicans provided for the development and dissemination of new agricultural
methods through the Land-Grant College Act and a new federal
agency, the Department of Agriculture. Then they tied all these individual
farms together in a national economic network with the Pacific Railroad
Act, which subsidized construction of a transcontinental railroad. To bolster
manufacturing, Congressional Republicans passed protective tariffs. Financial
reforms that helped fund the war effort figured prominently as well.
Many of those changes, including the creation of a unified national currency
and a national banking system, did for finance what the railroad did for transport,
facilitating capital transfers and economic exchanges across the nation’s vast
expanses.
At their most idealistic, Republicans hoped that these economic programs
would enhance individual rights, particularly those of free white
male household heads. Yet, with the exception of Western settlers, few ordinary
farmers, artisans, and laborers benefited from Republican economic
programs. Republican initiatives instead fueled a competitive, national
economy that swallowed up small, independent producers. Railroad corporations
gained the most directly, pocketing millions of acres of public
land and other federal incentives. Those who did own their farms or shops
were no longer the kind of independent producers posited in the ideal.
Cornelius Vanderbilt, for instance, hardly fit the category although he
owned his own railroad “shop.” But, then, neither did farmers who presided
over large, mechanized enterprises, sold most of what they produced, and
bought most of what they consumed.
The Republican economic future was one of wage labor, not independent
producers. That created unforeseen contradictions, because the Republican
legal order was still based on independent producers, not wage work. Wage
laborers were included among the “free men” of Republican rhetoric, in the
sense that they owned their own labor, could sell it at will, and could enjoy
whatever they earned in doing so. If they were adult, white, and male, they
also could claim full civil and political rights, at least in theory. But in practice,
they were legally subordinate to their employers, who enjoyed rights
as independent producers that wage workers did not. Property rights gave
employers extensive authority over their factories. Those rights extended
over laborers while they were on the job, where they could do little to
alter working conditions on property that was not their own. In this context,
the legal equality that wage workers theoretically enjoyed as citizens
could actually compound their subordination: in law, Vanderbilt and his
employees were equal, which precluded legal intervention on employees' behalf;
while as a property owner, Vanderbilt could do whatever he wished with
his property, which likewise foreclosed intervention on their behalf.
Many Republicans’ reluctance to expand federal authority beyond its traditional
bounds compounded these problems. They were comfortable using
federal power to promote economic growth, the principle of equality before
the law, and the Union. But they were unwilling to use it to address the
inequalities that resulted in practice, whether economic or legal. Doing so,
they argued, pushed centralization too far and threatened individual liberty.
That stance shaped popular perceptions of the federal government during
the Civil War. Despite Republican intentions to distribute existing
economic opportunities and legal rights more broadly, at least among the
free white male population, most ordinary Northerners actually experienced
federal authority through the draft, taxes, and military service. Those
encounters were not always positive, even for those who supported the war
effort: the federal government did not give; it took – resources and lives.
It offered little in return, other than rhetorical promises of freedom and
equality. That situation only reinforced existing suspicions of centralized
authority and limited possibilities for its future use.
Slaves and the Future Legal Order of the Union
The Republican Party’s reluctance to use federal authority to rectify inequalities
among individuals carried over into slavery. Although most Republicans
opposed slavery, not all advocated its abolition in those areas where
it already existed. Only a small minority favored the extension of full civil
and political rights to former slaves – and many of those were free blacks
who identified with the Republican Party but could not vote because of
racial restrictions on suffrage in their states. Many Republicans considered
any intervention in slavery to be a dangerous projection of federal authority
onto the states and a fundamental violation of individual property rights.
That the federal government might go further, mandating the legal status
of free blacks or anyone else, was not on the political horizon in 1860.
Echoing the Republican Party’s platform, Abraham Lincoln opposed only
the extension of slavery in Western territories at the time of his election.
Otherwise he promised to leave the regulation of slavery, where it already
existed, to the states.
From the war’s outset, free blacks in the North tried to turn the war
for Union into one for the abolition of slavery and the legal equality of
all free people, regardless of race. So did slaves in the Confederacy. Even
before the states of the upper South seceded, slaves along the South Carolina
coast began fleeing to U.S. naval vessels. By August 1861, several thousand
were camping with General Benjamin Butler’s army at Fortress Monroe,
Virginia. Permanent U.S. posts in North Carolina, South Carolina, and
Georgia early in the war made it possible for more African Americans to
seize their freedom. Wherever Union troops went, slaves found them and
followed them. They did so at great risk. Runaways faced execution if recaptured
by Confederates and an uncertain future if they remained in Union
camps.
African Americans’ actions slowly pushed the military to intervene in
slavery. At first, commanders did not really understand that escaped slaves
expected freedom once they made it to Union lines. Many considered slaves
either too stunted by the institution of slavery or too inferior racially to
understand the concept of freedom, let alone to act so decisively to obtain it.
Nor did Union officers know what to do with these refugees, since their own
commander-in-chief, President Abraham Lincoln, still insisted that nothing
would be done to interfere with slavery. In fact, the Fugitive Slave Law
mandated that all runaways be returned. Existing law and Republican policy
statements, however, did not anticipate the situation facing the Union
armies. With thousands of African Americans crowding into Union camps
and following federal troops, military officials had no choice but to adapt.
Union commanders also saw the strategic benefits of harboring the enemy’s
slaves and were quick to appreciate the value of a ready labor supply for
themselves. The specific policies, though, were uneven and a