
PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, AFRICAN HISTORY (© Oxford University Press USA, 2018. All Rights Reserved). Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

date: 16 December 2018

Digital Approaches to the History of the Atlantic Slave Trade

Summary and Keywords

The robust, sustained interest in the history of the transatlantic slave trade has been a defining feature of the intersection of African studies and digital scholarship since the advent of humanities computing in the 1960s. The pioneering work of the Trans-Atlantic Slave Trade Database, first made widely available on CD-ROM in 1999, is one of several major projects to use digital tools in the research and analysis of the Atlantic trade from the sixteenth through the mid-nineteenth century. Over the past two decades, computing technologies have also been applied to the exploration of African bondage outside the maritime Atlantic frame. In the 2010s, the online successor to the original Slave Trade Database compact disc joined many other projects in and outside the academy that deploy digital tools in the reconstruction of the large-scale structural history of the trade as well as microhistorical understandings of individual lives, the biography of notables, and family ancestry.

Keywords: Atlantic world, databases, digital humanities, genealogy, slavery, Trans-Atlantic Slave Trade Database, transatlantic slave trade

Slave Trade Studies in the Age of Big Data

Two prominent characteristics of computing technologies fuel Atlantic slave studies in the digital age—the first at the level of systems, servers, and software, the second at the level of end users. First, slave studies thrive on the continual improvements in the capacity, accessibility, and affordability of a digital infrastructure that can accommodate the massive scale and scope of the historical archive of black and indigenous bondage. From the punched card to G Suite and from thumb drives to cloud computing, multimedia hardware and information processing applications power the storage, conservation, retrieval, duplication, and analysis of billions of data points about slaves and slavery. The digital archive of slavery has grown to be just as significant as the manuscript, oral, and archeological archives. Second, slave studies draw from the digital user’s seemingly inexhaustible commitment to reassembling the life arcs of Africans and descendants whose lives were fragmented by enslavement. With the dogged patience of the lone genealogist and the cooperative ethos of the research collaborative, end users of computing technologies inside the academy and beyond have embraced tools that facilitate machine-assisted explorations of slave life and experience. Together, these dynamic forces animate interest in a wide range of approaches to enslavement and slaveholding, abolitionism and freedom, and Africa in the world. There may be a certain degree of ironic justice that the dehumanizing powers of Big Data and the stultifying tedium of the spreadsheet have made possible a humanistic recovery of one of the greatest crimes against humanity.

Early Computing and the African Slave Trade

Institutionalized antiblack bias at major research institutions in North America and Europe and under-resourcing at black-majority colleges and universities slowed the adoption of technology in slave trade studies in the first two decades of postwar academic computing. A casual disregard for the slave past and the global dispersal of uncatalogued archival resources also hampered the adoption of nascent machine-based methods. A paradigm shift took root in the 1960s, when US historian Herbert S. Klein catalyzed the spread of an international network of scholars engaged in the assembly of machine-readable datasets about the transatlantic slave trade. Klein’s own research in Spanish, US, and Brazilian sources complemented the work of others who were working with British, French, and Dutch materials. Simultaneously, Philip D. Curtin led the charge on advancing a combination of deep archival research and quantitative method to correct a long tradition of purely speculative estimates of the scope, volume, and significance of the transatlantic trade. It was no coincidence that the opening chapter of Curtin’s highly influential monograph The Atlantic Slave Trade (1969) took “The Slave Trade and the Numbers Game” head on.

The 1970s were a fertile time for an empirically driven study of the transatlantic trade. The number of theses and dissertations, academic conferences, and special editions of peer-reviewed journals on the topic steadily increased. The international and comparative scope of research by Klein, Curtin, and others both informed and were informed by a number of innovative fields of scholarship including African history, black studies, and the study of world economic systems. Whereas data-driven analysis experienced uneven uptake across the academy, researchers in the history of the slave trade developed and debated an empiricist’s approach to the archive, with special attention to the economic dimensions of the trade and its destruction. Rendered in tabular and graphical forms, the numerical information that fed statistical analyses of the trade later served as the foundation for building bigger machine-readable datasets of increasing complexity.

In parallel developments in the field of social and economic history, the study of the slave plantation and slave production activated interests in statistical analysis of slaveholding, the slave family, and slave demography. Computer-aided mathematical modeling was used to examine the economics of slave labor and the transatlantic trade. Historians of slavery drew from the methods of computer-assisted social science research that were making steady inroads in the fields of early modern studies and cliometrics, among others. Advances in computing sciences came to some corners of historical research, including the study of slavery. As early as 1972, Computers and the Humanities, a pioneering journal in humanities computing launched in 1966, included select publications on the subject of the slave trade in its annual bibliography. David Herlihy, a medievalist at Harvard University, included Fogel and Engerman’s controversial Time on the Cross (1974) in a 1978 survey of computer-assisted historical method that appeared in the flagship publication of the Institute of Electrical and Electronics Engineers Computer Society.1

In North American and British higher education, an expanding aggregation of quantified data gathered from public records, private archives, and print material developed alongside increased access to mainframes and related technologies of data processing.2 The resulting computer-assisted studies traced the broad geographic and temporal outlines of the transatlantic trade between Africa and the Americas, the demography of the Middle Passage (i.e., total numbers of trafficked slaves as well as sex ratios, age cohorts, mortality, and ethnic origins), and the profitability and economic risks of the Odious Trade. While the frame was not exclusively British, statistical records of English traders garnered sustained attention.3

Although early data-driven analysis of the Middle Passage was generally inattentive to the experiences of common individuals or slave families before or after a transatlantic voyage, it shared with early social-history adopters of statistical software (notably Statistical Package for the Social Sciences, or SPSS, first released in 1968) a strong interest in creating and manipulating raw data about people that could tell stories of historical change from the bottom up. In subsequent years, the statistical evidence and analysis would play an important role in the biographical turn.

In a pattern of knowledge creation and diffusion that continues to the present, this early combination of archival research, encoding, and computer-assisted analysis informed traditional print scholarship, such as Klein’s The Middle Passage: Comparative Studies in the Atlantic Slave Trade (1978), and conferred a corresponding professional standing and career advancement. Statistical analysis of quantified information drawn from the archives catalyzed the growth of additional datasets that refined and expanded the work pioneered in the late 1960s. Yet even as the study of the transatlantic trade and slave societies elevated quantifiable evidence over fanciful speculation or the disingenuous resignation of “never knowing,” the protocols of analog history continued to frame standards of field research, funding, and publication. Tenure and promotion criteria remained (and remain) resistant to treating the dataset per se as comparable to peer-reviewed publication.

In this early period, encoding priorities hewed closely to the interests of individual researchers and their graduate students, a handful of funding agencies (i.e., the Social Science Research Council, American Council of Learned Societies, National Science Foundation, and the Carnegie Corporation of New York), and the computing capacities of home institutions. The variables included in the early datasets were limited in number and were primarily demographic, financial, or maritime. Curtin, for example, encoded a modest sixteen variables for the 2,313-record dataset built from British sources on slave voyages between Africa and the Americas from 1817 to 1843; Klein had just ten variables for a 436-record dataset on the trade to Rio de Janeiro between 1825 and 1830, sourced from newspapers. Standardization in fields and metadata across projects and archives was elusive. Machine-readable datasets traveled with difficulty outside small networks of specialists. Cross-project compatibility of programming and software was not a given. Despite the now-obvious limitations of data manipulation and automated calculations, hard-copy runs of quantified material could be more easily accessible and useful, especially for researchers not versed in statistical software or those in resource-limited settings. The “History” webpage of Slave Voyages (successor to the Trans-Atlantic Slave Trade Database) aptly summarizes the period: “Scholars of the slave trade spent the first quarter century of the computer era working largely in isolation, each using one source only as well as a separate format, though the [Philip] Curtin, [Jean] Mettas, and [David] Richardson collections were early exceptions to this pattern.”4
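The shape of such early, limited-variable encoding can be suggested with a short sketch. The field names and values below are hypothetical illustrations, not entries from the Curtin or Klein codebooks described above:

```python
# Hypothetical sketch of an early, limited-variable voyage record.
# Field names and values are invented for illustration; they are not
# drawn from the actual Curtin or Klein codebooks.
voyage = {
    "ship_name": "Esperanza",        # invented vessel
    "year_departed": 1827,
    "port_embarkation": "Bonny",
    "port_disembarkation": "Havana",
    "slaves_embarked": 412,
    "slaves_disembarked": 371,
    "deaths_at_sea": 41,
}

# Even ten to sixteen such fields supported basic aggregate questions,
# such as shipboard mortality on a single crossing:
mortality_rate = voyage["deaths_at_sea"] / voyage["slaves_embarked"]
print(f"Middle Passage mortality: {mortality_rate:.1%}")
```

A handful of such records, punched onto cards or stored on tape, was enough to move the field from speculation to calculation, even before standardized codebooks existed.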

Finally, it bears noting that this early period was marked by a dynamic of scholarly community formations characterized by limited racial and gender diversity and a concentration of resources in major research institutions. Even as a field now known as digital humanities matured and even as the academy was changed by expanded access for women and scholars of color, inclusion was (and remains) uneven in digital slave studies.

The Digital Forward Leap: The Trans-Atlantic Slave Trade Database

Capacity and access experienced significant growth in the 1990s, culminating in the release of The Trans-Atlantic Slave Trade: A Database on CD-ROM. The disc’s 1999 publication by Cambridge University Press was preceded by major conferences at Harvard University and the College of William and Mary, both of which generated considerable buzz among Atlanticists. A collaborative effort led by David Eltis (whose award-winning The Rise of African Slavery in the Americas (2000) drew heavily from the research that went into the database) alongside Richardson, Klein, and Stephen Behrendt, the interactive CD and its underlying datasets (formatted for export and user customization in SPSS) compiled and standardized records of more than 27,000 transatlantic slave voyages between 1595 and 1866. At the time, this was estimated to cover approximately 70 percent of all transatlantic voyages departing Africa from 1600 forward. The disc and accompanying user manuals presented more than ten thousand voyages that had not been included in a pilot version of the dataset that grew out of seed funding from the National Endowment for the Humanities (an independent government agency that began funding database development in the early 1980s) awarded to the W.E.B. Du Bois Institute for Afro-American Research at Harvard University.

Programmed for compatibility with Microsoft Windows, the 1999 CD-ROM empowered the end user to follow his or her own curiosity and research agenda via customizable searches across one or more variables assigned to individual voyages. Search results were enhanced by interactive maps on the geography of the shifting trade and the spatial trajectories of individual voyages. The dataset provided significant information about the business of the trade and the key features of individual voyages (each assigned a unique numerical identifying code, or “VoyageID”) as documented in the archival record, including ports of departure and arrival, departure and arrival dates, crew and cargo size, deaths at sea, and onboard insurrections. In addition to the flexibility of searching, the end user was provided with underlying source citations, drawing the user, archive, and individual data point into close conversation.5
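The kind of customizable, multi-variable search the disc supported can be approximated in a few lines. The records, field names, and values here are invented for illustration and are not actual database entries:

```python
# Illustrative sketch of multi-variable filtering over voyage records,
# in the spirit of the CD-ROM's customizable searches. All records,
# field names, and values are hypothetical.
voyages = [
    {"VoyageID": 1001, "port_departure": "Liverpool",
     "port_arrival": "Kingston", "year": 1792, "insurrection": False},
    {"VoyageID": 1002, "port_departure": "Nantes",
     "port_arrival": "Cap-Français", "year": 1788, "insurrection": True},
    {"VoyageID": 1003, "port_departure": "Liverpool",
     "port_arrival": "Barbados", "year": 1801, "insurrection": False},
]

def search(records, **criteria):
    """Return the records matching every supplied variable=value pair."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

liverpool = search(voyages, port_departure="Liverpool")
print([v["VoyageID"] for v in liverpool])  # [1001, 1003]
```

Each result carries its VoyageID, so a user could move from a filtered list back to the underlying source citations for any single voyage.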

The 1999 CD-ROM brought new inflections to the old “numbers game,” as the CD release stimulated new scholarly and popular appraisals of the numerical scale of the transatlantic trade. Though acknowledging a “scholarly consensus” figure of 11.4 million embarkations in Africa (subsequently revised upward to slightly more than 12.5 million), the database editors nonetheless did not advance any definitive claims on the total volume of the Atlantic trade based on direct evidence. They instead relied upon the archival record and statistical projections to estimate the number of slaves embarked in Africa, slaves disembarked in the Americas, and those who died during the Middle Passage, among other variables. Specialized users who carefully worked through the documentation about estimates and their limitations were equipped to look far beyond the numerical mysticisms that so irritated Curtin in 1969. However, early reviews from academic faculty who taught with the database found that students could be overwhelmed by the figures and the clunky interface, and many struggled to grasp scale and scope. An innumeracy of population statistics could leave the unguided user awed, but not necessarily better informed, about the magnitude of the historic trade. The database provided better numbers, but the “game” was not necessarily any easier.

Yet even if The Trans-Atlantic Slave Trade Database presented a measured (sometimes obtusely technical) approach to estimating the ship-by-ship, region-by-region, decade-by-decade scale of the African trade to the Americas, the deep research behind a multisource dataset and its distribution to hundreds of higher education and public libraries captured the imagination of academic and popular audiences. Scholarly uptake was rapid and enthusiastic, led by a special edition of the William and Mary Quarterly poetically prefaced by Henry Louis Gates, Jr. Press coverage embraced the multiple possibilities for K–12 education. A free-of-charge, downloadable teacher’s manual accelerated classroom adoption.

Within a decade of the CD’s release, Eltis and his collaborators at Emory University, where the project has received continued support from the Hutchins Center for African and African American Research at Harvard, the NEH, and the Emory Center for Digital Scholarship, invested significant time and resources to build the academic, public, and technical faces of a website that permits complex, downloadable searches across multiple variables. (The 2016 online version has 96 variables; the corresponding downloadable dataset has 278.) The website makes available tutorials, technical documentation including a codebook, lesson plans, and a handful of essays, including “Estimates of the Size and Direction of Transatlantic Slave Trade” by Eltis and Paul F. LaChance. The online version makes available a modest number of digitized manuscripts of slave ship registries (each listed by its VoyageID) drawn from the British National Archives Slave Trade Department series (chiefly the Foreign Office 84 record group) as well as a smattering of visual images of enslaved Africans.

The ongoing research project led by Eltis has grown in a digital ecology of online browsing, internet-based search and discovery, crowdsourcing, and social media. The team has continually refined procedures for end users to contribute new material and to suggest corrections of existing data. As of 2016, more than fifty scholars had contributed additions and corrections to the 1999 dataset. Under the guidance of lead investigators and an editorial board, Slave Voyages has grown to be a dynamic, iterative, and collaborative digital resource.

An unquestioned field changer that has achieved the unusual distinction of a scholarly reference that also enjoys sustained engagement with popular audiences, Slave Voyages has still been the object of complaints about design flaws and inadequate user documentation. Especially early on, the Slave Trade Database was the object of some skepticism over inaccuracies and the gaps in geographic and chronological coverage. A heavy reliance upon the archival records of enslavers and their enablers has troubled end users searching for the voice of the enslaved in such a powerful research tool. Even enthusiasts acknowledge a certain disquiet with the database’s bloodlessness—in the double sense that the life blood of enslaved peoples and the bloody work of bondage are drained away by bytes and variables. In a 2015 blog post, University of California Santa Cruz historian Gregory O’Malley remarked on the enduring strain of unease toward data-driven approaches that obscure “the humanity of the captives and the violence of their exploitation.”6

Eltis’s research team has devoted considerable resources to addressing known shortcomings in the data, its presentation, and concerns of scholarly empathy. Academic conferences, media interviews, and peer-reviewed publications have advanced robust responses to questions of utility. Since the publication of the Atlas of the Trans-Atlantic Slave Trade in 2010, Eltis has been especially prolific in presenting the public case for the project, appearing in online outlets such as The Conversation as well as traditional media including C-SPAN and National Public Radio’s Talk of the Nation.

Whereas data geeks have been drawn to the possibilities of data visualization—take, for example, the stir caused by Andrew Kahn and Jamelle Bouie’s “The Atlantic Slave Trade in Two Minutes” time-lapse animation that appeared in Slate in June 2015—the scholarly community continues to draw from the database to ask new questions about the slave family, naming practices, ethnolinguistic identity, and African participation in the trade and its decline. Methodological innovations are continuous, with multiple scholars demonstrating how the database can be read alongside nonquantitative sources, illuminating both. A social-media savvy group of historians loosely organized around hashtags such as #twitterhistorians, #DigHist, and #slaveryarchive have been especially keen to adapt the dataset to the modern classroom. In 2016, Arizona State University faculty member John Rosinbum shared with the American Historical Association lesson plans that called upon students to use Voyages alongside the Slate visualization that had garnered tens of thousands of Facebook and Twitter engagements.

As of 2016, the dataset included 35,994 voyages, with greatly expanded coverage of the Luso-Brazilian Atlantic. Through close archival research, chronological coverage is much wider than in the 1999 version. Although the site has yet to be optimized for mobile devices and touchscreens, and web accessibility features are lacking, the online version has been fully freed from prior limitations of Windows-based desktop hardware. Research that makes direct use of the site, as well as research modeled on a multisource, multilingual data architecture of standardized variables and unique identifiers (e.g., new datasets on the Indian Ocean traffic; newspaper ads about slave runaways; the body marking and modifications of enslaved Africans; the experiences of illegally enslaved Africans liberated by bilateral mixed commissions or admiralty courts between 1807 and the 1860s; and the movement, dislocation, and resettlement of enslaved Africans outside the transatlantic slave voyage), advances at a brisk pace. Another major update is slated for release in 2019, to include the intra-American movement of enslaved persons. Although sustainability remains one of the project’s greatest challenges—urgent measures had to be taken in 2015 to update obsolete code—Slave Voyages remains the gold standard for the field of digital slave studies.

Genealogies, Origins, and the Biographical Turn

The year following the publication of the original Slave Trade Database in CD-ROM, Louisiana State University Press released Databases for the Study of Afro-Louisiana History and Genealogy, 1699–1860, also on optical disc. Later renamed Afro-Louisiana History and Genealogy and republished on an open-source online library and archive platform hosted at the University of North Carolina at Chapel Hill, the large dataset was the fruit of Rutgers University professor Gwendolyn Midlo Hall’s monumental archival recovery of the registries of black lives in eighteenth- and nineteenth-century Louisiana.

Whereas Slave Voyages sprawls across the maritime Atlantic, Afro-Louisiana History and Genealogy dives deep into the interior of an American slave society organized around the port of New Orleans and the Lower Mississippi. Of the approximately 100,000 slave records included in a dataset of 114 fields, about 58 percent come from the records of Orleans Parish, in urban New Orleans. The remainder are from rural Louisiana. Alongside the record of slave ship arrivals to Louisiana (both from African ports and via transshipment from the Caribbean and the eastern United States), the database covers slave sales, estate inventories, probate records, runaway advertisements, mortgages and liens against slave property, death certificates, marriage licenses, criminal and judicial proceedings, reports on slave resistance, and certificates of manumission. With the named person as the primary organizing datapoint, the dataset ranges across life events and intimate relationships that went far beyond the transatlantic trade. Making legible the lives of named individuals, and life incidents from birth to death, Afro-Louisiana History and Genealogy appealed equally to academics interested in the human saga of bondage from Africa to the Americas and to genealogists on the trail of a family past in slave Louisiana.

In 2009, the world’s largest for-profit genealogy company republished portions of Afro-Louisiana History and Genealogy in its “Louisiana, Slave Records, 1719–1820” and “Louisiana, Freed Slave Records, 1719–1820” search interfaces. The migration of a dataset developed by university academics to a commercial genealogical site reflects the ease with which arcane archival sources, once digitized, can be scaled upward and packaged as a product for noncommercial and market-based user demand. It also reflects new standards of intellectual property and data stewardship quite distinct from the protocols for projects housed in university libraries. The company’s site meets the hunger of end users—professional genealogists, academics, litigants, and the merely curious—exploring the African-American family legacies of enslavement, including the recovery of “lost” connections to the transatlantic trade and the Continent. Alongside the expansive online records of FamilySearch, a genealogy site operated by the Church of Jesus Christ of Latter-day Saints, and moderately priced genetic testing services (e.g., 23andMe, AncestryDNA, and African Ancestry, among others), online genealogical research into the slave past has become exceptionally accessible. Television programs including Genealogy Roadshow popularize a digital method for slave ancestral research, pushing viewers to turn to digitized collections such as censuses and tax records to find enslaved ancestors. Although the commercialization of genealogical data is proving to be rife with bioethical and legal liabilities, the digital has become a constitutive component of a remarkable lowering of barriers to entry and the democratization of research tools about slavery that were once reserved for specialists.

The scholarly publications informed by Afro-Louisiana History and Genealogy, including Midlo Hall’s prizewinning Africans in Colonial Louisiana (1992) and Slavery and African Ethnicities in the Americas (2007), have revealed many things, but principal among them was the recovery and subsequent analysis of registries of African origins among enslaved Africans in Spanish, French, and early American Louisiana. Perhaps the most significant field for the dataset aside from name is African birthplace (“BIRTHPL”). Among 38,019 slave records with a listed place of birth, a full 64 percent indicate Africa. Of that figure, 37 percent include specific ethnonyms (“nations”). Roughly the same percentage points to African coastal origins. Afro-Louisiana History and Genealogy, then, becomes a remarkably rich resource for the recovery of African lives before, during, and after the Middle Passage, with special insights into the geographies of forced migrations.
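The reported shares can be restated as simple arithmetic. The figures come from the text; the derived record counts below are rounded illustrations, not values published by the project:

```python
# Restating the BIRTHPL shares reported in the text as arithmetic.
# The derived counts are rounded illustrations.
records_with_birthplace = 38_019
share_africa = 0.64       # 64 percent of records indicate Africa
share_ethnonym = 0.37     # of those, 37 percent carry a specific "nation"

african_born = round(records_with_birthplace * share_africa)
with_ethnonym = round(african_born * share_ethnonym)
print(african_born, with_ethnonym)  # roughly 24,332 and 9,003 records
```

Even at this coarse level, the arithmetic shows why the BIRTHPL field is so valuable: it yields on the order of nine thousand records that attach a specific African “nation” to a named individual.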

A similar interest in identifying the ethnolinguistic, cultural, and geopolitical origins of Africans swept into the trade is a fundamental objective of African Origins, an offshoot of Slave Voyages that draws from the nominal registries containing names and “nations” of approximately 91,000 enslaved Africans liberated before mixed commissions and British vice-admiralty courts in Sierra Leone and Havana. While recognizing that the assignment of Christian or classical given names to Liberated Africans makes the recovery of African heritage and birthplace difficult, the project deploys a modified form of crowdsourcing to identify regional and ethnic naming patterns in nominal lists containing African names. The correlations of name, suspected birthplace, and conditions of enslavement and liberation advance a more complete recovery of an individual’s life, lending important clues in the end users’ search for African ancestry and lineage.

A compelling urge to reconstruct the stories of African lives animates the use of digital resources that have direct applications in genealogy. These resources are also informing the academic work of writing Atlantic biographies. For example, Slave Voyages publishes a biographical sketch of Dobo, a ten-year-old enslaved boy who was rescued by a British cruiser in 1826 aboard the Spanish slaver Fingal (VoyageID 558) and subsequently liberated before the Mixed Commission in Havana. Using a combination of sources including the African Names dataset (which has its own set of unique identifiers that can be correlated to VoyageID), Oscar Grandio Moraguez demonstrates that Dobo (given the Christian name Gabino) originated from the interior of Sierra Leone in a region occupied by an ethnolinguistic subgroup of Mel speakers known as Gola. Although the sources do not permit a full recovery of the conditions under which the young Dobo was enslaved, Grandio Moraguez is able to advance an argument about the geographies of enslavement that goes well beyond the generic port of embarkation, Cape Mount, located at some distance from the Gola hinterlands. In a brilliant use of sources supplemental to the Transatlantic Slave Trade and Origins datasets, Grandio Moraguez follows Dobo/Gabino’s life in Cuba, from his years as a young indentured emancipado to the granting of unconditional freedom in 1841. The story of this everyday African ends poignantly, with Dobo’s death in a miserable military prison in Cádiz, Spain, while en route to Ceuta. The reader learns that Dobo’s wife was left behind in Cuba. The essay is a model of the combined use of digital datasets and traditional archival materials to write narratives of enslavement, family, transatlantic crossing, and individual experience that are the driving forces of the “biographical turn” in slave studies.

That epistemic turn of historical method and meaning—and its attendant goal to situate individuals in the multiple and contingent contexts of slaving—drives several more projects in digital slave studies. Chief among them are Freedom Narratives (formerly the “Life Stories of West Africans in Diaspora” section of York University’s SHADD Collection) and the Oxford University Press reference works of African, African-American, and Caribbean and Afro-Latin American biography sponsored by Harvard’s Hutchins Center for African and African American Research. Digital resources provide the raw elements of biography, the infrastructure for biographical publishing, and the scholarly community of biographical studies.

Toward Linked Open Data

As noted previously, field research and encoding in the early days of digital work in the history of the slave trade were conducted in isolation. The research method may have been collaborative and team-based, as research teams relied upon models of sponsored research and institutional resources unlike those of the traditional lone humanities scholar. Nonetheless, individual research teams still operated independently, with different protocols, technologies, and languages. Machine-readable datasets and statistical software permitted the sharing and repurposing of raw data, but scholarly and popular audiences without adequate training and support in statistics were marginal to the conversation. The lines of communication between academics and genealogists were weak. Isolation began to soften in the early 1990s, and diminished significantly after 1999 with the publication and commercialization of a multisource, multiuser dataset that ran on an operating system compatible with personal computers. The Transatlantic Slave Trade Database has been especially influential in establishing source-based empirical evidence alongside standardized field variables, such as VoyageID, that structure encoding protocols for ongoing and new projects.

Yet, isolation continues to be a challenge for digital work on the history of enslavement. A rapid expansion in research activities in civil, ecclesiastical, government, genealogical, and private collections, much of it resulting in the autonomous development of datasets in productivity suites by Microsoft, Google, and Apple, has outpaced data standards. Across projects, field naming and metadata conventions have been especially idiosyncratic. As encoding remains uneven, the same individual can appear in more than one dataset, but the absence of protocols for datafields and orthography leaves the individual in digital isolation (or, more accurately, in unrecognized digital duplication). The challenges of cross-project search and analysis are amplified in statistical calculations for variables such as ethnicity, race, color, and occupation that have been encoded with widely varying practices ranging from strict standardization to a fidelity to the original source. The encoding of place names has presented unique challenges, as geolocational names found in official reference sources like the Geographical Names Information System and GEOnet Names Server (both having multiple applications in geoinformatics) may depart dramatically from the archival original. The flattening of multilingual original sources into English translations and the variety of languages used in field-variable titles have added new elements of uneasy intelligibility from one project to another. Whereas a tension between controlled versus natural vocabulary has been a part of slave trade research since the early encoding of machine-readable datasets, the struggle between clean and messy data is now systemic.
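One common remedy for idiosyncratic field naming is a crosswalk that maps each project-local field to a shared core field. The projects, field names, and values below are hypothetical, invented only to illustrate the mechanism:

```python
# Hypothetical illustration of the field-naming problem: three projects
# encode the same concept under different names and languages, so a
# crosswalk maps each (project, local field) pair to one core field.
CROSSWALK = {
    ("project_a", "BIRTHPL"):   "place_of_birth",
    ("project_b", "naissance"): "place_of_birth",
    ("project_c", "origin"):    "place_of_birth",
    ("project_a", "SEXE"):      "sex",
    ("project_b", "sexo"):      "sex",
}

def to_core(project, record):
    """Re-key a project-local record into the shared core schema,
    passing through any field the crosswalk does not cover."""
    return {CROSSWALK.get((project, k), k): v for k, v in record.items()}

print(to_core("project_b", {"naissance": "Congo", "sexo": "M"}))
# {'place_of_birth': 'Congo', 'sex': 'M'}
```

A crosswalk of this kind only standardizes field names; harmonizing the values inside those fields (ethnonyms, place spellings, racial vocabularies) remains the harder, interpretive half of the problem described above.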

The 2010s have been marked by sustained and increasingly successful attempts to overcome the isolation and messiness of data and method. The history of the slave trade has experienced a concomitant push toward the adoption of open-source software and the lowering of restrictions on data usage for noncommercial purposes, notably via Creative Commons licensing. Slave Biographies (founded 2011) and Enslaved (founded 2018), two projects housed at MATRIX: Center for Digital Humanities and Social Sciences at Michigan State University, have led an intentional push beyond digital isolation. Funded by the National Endowment for the Humanities and the Andrew W. Mellon Foundation, respectively, both projects are organized around the goal of creating online, open-source, open-data, free-of-charge search and discovery resources that allow the researcher to (1) identify enslaved individuals who have been encoded in one or more databases; (2) perform statistical calculations and visualizations of metadata about enslaved individuals and cohorts (again, across one or more datasets); and (3) build the elements of biographical narratives structured around the people, places, events, and relationships of Atlantic slavery. Slave Biographies, originally built from Midlo Hall’s Louisiana dataset alongside a dataset of slave inventories from late colonial Maranhão, Brazil, developed by Walter Hawthorne, has also built the prototype for an archival repository of peer-reviewed datasets.

A chief goal for both projects has been to define a core set of metadata fields on enslaved peoples and to crosswalk those fields between datasets for federated searching under Linked Open Data (LOD) protocols. Using persistent uniform resource identifiers (URIs) centrally assigned to each discrete person, life event, or place found in a partner project database, the LOD-based approach and its associated syntax facilitate searching and browsing across multiple partner projects while permitting each partner to maintain its own data under independently developed encoding protocols and data stewardship. In its proof-of-concept stage, predictive disambiguation is one of the great challenges of LOD: can algorithms accurately predict whether one Pedro Congo in the Liberated Africans dataset is likely to be the same person as the Pedro Congo in Slave Voyages/Origins, or any of the hundreds of Pedros and thousands of Congos found in the Slave Societies Digital Archive? At the conclusion of Enslaved’s first phase, in mid-2019, the field will have a working LOD model that should mitigate the persistence of digital isolation.
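The mechanics of centrally minted URIs and field crosswalking can be sketched schematically. This is an illustration only, under invented assumptions; the namespace, partner labels, and field names below are hypothetical and do not represent the actual Enslaved.org schema:

```python
import uuid

# Invented crosswalks from each partner's local column names to a shared core
CROSSWALKS = {
    "partner_a": {"Name": "name", "Embarkation": "origin"},
    "partner_b": {"captive_name": "name", "ship": "vessel"},
}

def mint_uri(entity_type: str) -> str:
    """Centrally assign a persistent identifier to one discrete
    person, life event, or place (namespace is illustrative)."""
    return f"https://example.org/{entity_type}/{uuid.uuid4()}"

def to_core(partner: str, record: dict) -> dict:
    """Translate a partner record into the shared core vocabulary,
    leaving the partner's own dataset untouched at its source."""
    walk = CROSSWALKS[partner]
    core = {walk[field]: value for field, value in record.items() if field in walk}
    core["uri"] = mint_uri("person")
    return core

linked = to_core("partner_a", {"Name": "Pedro Congo", "Embarkation": "Cabinda"})
```

The design choice that matters here is that the crosswalk is applied at query time, on a copy of the record: each partner retains stewardship of its own data and encoding protocols, while the shared URI makes the same person addressable across projects.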

Notes on the Digital Method of Research

The revolution in personal computers and consumer electronics has had a major impact on the research methodologies deployed in the history of the slave trade. Gwendolyn Midlo Hall, whose field research began in the early 1980s with notarial records in Pointe Coupee Parish, outside Baton Rouge, was an early practitioner of personal computing in the archive and of data wrangling in situ. The user guide to the 2000 CD-ROM on slavery in Louisiana describes a process of direct transcription of original archival data into dBase, a database management system, on the researcher’s laptop. Midlo Hall and her researchers were among the many academics who brought the personal computer into the archive, breaking from an older tradition of off-site encoding of handwritten transcriptions of archival originals.

Ecclesiastical and Secular Sources for Slave Societies (ESSSS, renamed the Slave Societies Digital Archive in 2017), a project launched by Jane Landers in collaboration with Paul Lovejoy and Mariza de Carvalho Soares, took digital cameras into Cuban and Brazilian archives between 2003 and 2006 to photograph ecclesiastical registries of more than 750,000 individuals. A series of grants from the British Library’s Endangered Archives Programme funded additional work to preserve in digital format more slave records in Brazil and Cuba, as well as in Colombia. Later funded work includes the digitization of ecclesiastical and secular sources from Spanish Florida, Angola, Benin, and Cape Verde. Although the urgency of preserving documents threatened by vermin, natural disaster, or human carelessness has taken precedence over encoding, the Slave Societies Digital Archive has embraced the practice of using digital devices alongside the personal computer to transcribe and encode manuscript originals.

The digital field methods used by the Midlo Hall and Slave Societies Digital Archive teams illustrate the outsized significance of personal computers and consumer electronics in the mass-scale digitization of the historical manuscript. Whereas early archival research turned on the manual copying of data elements selectively taken from the manuscript for later encoding into machine-readable media (and batch processing on costly equipment such as a mainframe), technological and market advances have dramatically shortened the distance between archive and encoded file while expanding the scope of what can be encoded. Many key functions of data wrangling are now conducted in real time, in the archive, with the assistance of inexpensive computing applications. Wireless Internet connections and cloud computing have dramatically reduced the need for, and costs of, local data storage. The plummeting cost of duplication, thanks to the digital camera, and cheap commercial air travel have accelerated the speed and scope with which information can be extracted from the archive for encoding and analysis anywhere. Anticipated advances in optical character recognition and data mining should further improve the efficiency of digitization and data encoding while reducing the need for manual transcription. At present, the manuscript photographed in digital form by the researcher has become a research standard.

Even with the promise of machine-assisted transcription and data extraction, paleographic training and researcher expertise remain essential skill sets that the computer cannot replace. Pencil, paper, and the notebook persist in archival research; they remain essential in resource-limited settings and in archives that do not permit digital photography or personal computing devices. For the foreseeable future, familiar nondigital reproduction technologies, including microform and photocopying (both requiring equipment generally priced beyond the reach of the individual user), will remain part of the research process. Nonetheless, the research method now turns on computing technologies and personal electronic devices commonly found in the hands of the archival end user.

It bears noting that the most common output of the digital method, the flat-file database or spreadsheet, bears some striking similarities to the original structure of key primary sources on the slave trade. Comma-separated values files can have the look and feel of the tabular account books, navigational logs, cargo manifests, crew lists, tax records, health inspection reports, port movement registries, and emancipation returns that have been the foundation of slave trade studies in the computer age. Across time and region, and across language and culture, archival information organized in hand-drawn or preprinted tables and charts has been directly replicated in the columns, rows, and cells of organizationally simple digital spreadsheets of widely varied size and scale. Autocomplete and autofill features of Microsoft Excel and Google Sheets add speed and efficiency. Autocorrect, spellcheck, search and replace, and conditional formatting enhance the accuracy of transcription (and make it easier to preserve, deliberately, the inaccuracies of the original). Sorting, filtering, and column hide/reveal bolster data navigation. OpenRefine and similar data wrangling tools are great aids for cleaning and structuring messy data. Nonetheless, when all application tools are returned to their defaults, what remains is a digital file faithful to the tabular archival manuscripts that power statistical and biographical-social approaches to the history of the slave trade.
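The dual commitment to transcription fidelity and machine tractability can be illustrated in a few lines. The manifest fragment below is invented; the pattern it sketches is keeping the manuscript spelling verbatim in one column while deriving a normalized column alongside it:

```python
import csv
import io

# Invented fragment of a tabular manifest, transcribed as CSV;
# "name_original" preserves the manuscript spelling exactly as written.
manifest = io.StringIO(
    "name_original,age,stature_in\n"
    "Pedrº Congo,12,48\n"
    "Maria  Crioula,9,44\n"
)

rows = []
for row in csv.DictReader(manifest):
    # Derive a normalized layer without overwriting the original reading
    row["name_normalized"] = " ".join(row["name_original"].split()).lower()
    rows.append(row)
```

Because the original column is never overwritten, the file can always be sorted back to its archival form, while the derived column serves search, linking, and statistical work.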

Toward an Ethics of Digital Slave Trade Research

The April 2018 symposium “Slave Pasts in the Present: Narrating Slavery through the Arts, Technology, and Tourism” was held at New York University. Across panel presentations exploring the what and how of a sustained interest in slave pasts, manifest in tourism circuits, UNESCO heritage sites, literary publishing, and global entertainment industries, panelists and audiences repeatedly returned to the ethical question of why slavery, and especially the tragedy of the slave trade, remains powerfully resonant far beyond the academy. The assembled collective interrogated the ethical obligations and risks of engaging a slave past in our present. The digital may not have been a primary consideration across these moving discussions, but nearly all present were in dialogue with the insights, media, and tools of digital scholarship on slavery.

One of the most arresting interventions came from Brenda Romero, an award-winning American game developer. In 2013, Romero made waves in the gaming world with the release of The Mechanic Is the Message, an analog game series that places players in the traumatic histories of the Cromwellian invasion of Ireland, the Holocaust of European Jewry, and the Trail of Tears. A game about black bondage fit within the logic of the series. In a short overview of the fractured presence of slavery in the history of the gaming industry, Romero remarked that unlike academic scholarship, public history, and the entertainment industries, “slavery is not something we [game developers; the gaming industry] think of from a design perspective or even as a topic.” Slaves are almost never protagonists in the rich and imaginative storytelling of gaming; they are rarely even bit players. Briefly recounting her own early experimentation in designing and prototyping a nondigital game about the slave trade, “The New World” (2008), Romero went on to argue that gaming has the power to unlock a slave past, a past undoubtedly difficult yet ethically resonant. She suggested that such a path might engage a dialectic of bondage, played out in a digital environment made among game designers (a field dominated by white men), players, and the rules. Humbly, Romero put the game developer alongside other designers: the sculptors, landscape architects, installation artists, and museum professionals who have designed interactive spaces for users to access the trauma of enslavement.

Romero briefly touched upon the perils of the digital in the spectacular offense of “Playing History 2: Slave Trade,” a 2013 release by the Danish company Serious Games Interactive that quickly earned the nickname “Slave Tetris” for a role-play sequence in which an enslaved African boy participates in the puzzle-stacking of three hundred bodies (including his sister’s) into the hold of a slave ship bound for the Americas. The player “wins” if the boy successfully lands the ship and its human cargo in the Americas after surviving unfavorable weather, disease, and dwindling rations. Return to Africa and rebellion are not options. “Absolutely sickening,” observed Romero.

“Playing History 2: Slave Trade” was roundly denounced and quickly withdrawn from the market. Yet, as Romero reminded the audience, it fit within a larger matrix of factual and fictive grappling with the slave trade as we might know it. Digital recreations of the slave ship are hardly unique to gaming, and digital renderings of the slaver Brookes have inspired immersive artistic installations at the York Castle Museum and the International African American Museum in Charleston, among other locales. Nonetheless, gaming presents a new, digital-native field in which a slave past takes meaning. In modern times, that grappling, with its attendant plays of injustice and wrong, strategy and survival, knowledge and shame, is made accessible through a digital environment of bytes, pixels, programs, and consumer electronic hardware. The video game is not wholly apart from the technologies that animate academic studies of the slave trade, the genealogies of enslaved peoples, and a digital method of research, knowing, and sharing. Modern gaming may be at some distance from the “numbers game,” but like its scholarly digital precursors, it may provide access to a recovery of the human experience of African bondage.


In thanking my assistant Andre Pagliarini for feedback on a working draft of this essay, I praise the work of the many research assistants, programmers, designers, and database administrators who all too often go unnamed and undercelebrated in digital scholarship.

Further Reading

Anderson, Richard, Alex Borucki, Daniel Domingues da Silva, David Eltis, Paul Lachance, Philip Misevich, and Olatunji Ojo. “Using African Names to Identify the Origins of Captives in the Transatlantic Slave Trade: Crowd-Sourcing and the Registers of Liberated Africans, 1808–1862.” History in Africa 40, no. 1 (2013): 165–191.

Curtin, Philip D. The Atlantic Slave Trade: A Census. Madison: University of Wisconsin Press, 1969.

Eltis, David. The Rise of African Slavery in the Americas. New York: Cambridge University Press, 2000.

Eltis, David, Stephen D. Behrendt, David Richardson, and Herbert S. Klein, eds. The Trans-Atlantic Slave Trade: A Database on CD-ROM. Cambridge, UK: Cambridge University Press, 1999.

Eltis, David, and David Richardson, eds. Atlas of the Transatlantic Slave Trade. New Haven, CT: Yale University Press, 2010.

Eltis, David, and David Richardson, eds. Essays on the New Transatlantic Slave Trade Database. New Haven, CT: Yale University Press, 2008.

Grandio Moraguez, Oscar. “Dobo: A Liberated African in Nineteenth-Century Havana.” Slave

Hall, Gwendolyn Midlo. Africans in Colonial Louisiana: The Development of Afro-Creole Culture in the Eighteenth Century. Baton Rouge: Louisiana State University Press, 1992.

Hall, Gwendolyn Midlo, ed. Afro-Louisiana History and Genealogy, 1699–1860. CD-ROM. Baton Rouge: Louisiana State University Press, 2000.

Hall, Gwendolyn Midlo. Slavery and African Ethnicities in the Americas: Restoring the Links. Chapel Hill: University of North Carolina Press, 2005.

Kamerling, Henry. “Research Note on the Atlantic Slave Trade Database Project.” African Diaspora Archaeology Newsletter 2, no. 1 (April 1995): article 6.

Klein, Herbert S. The Middle Passage: Comparative Studies in the Atlantic Slave Trade. Princeton, NJ: Princeton University Press, 1978.

Klein, Herbert S. The Atlantic Slave Trade. 2nd ed. Stanford, CA: Stanford University Press, 2012.

“Mellon Foundation Announcement,” January 9, 2018.

Miller, Joseph C. “A Historical Appreciation of the Biographical Turn.” In Biography and the Black Atlantic, edited by Lisa A. Lindsay and John Wood Sweet, 16–47. Philadelphia: University of Pennsylvania Press, 2013.

Omohundro Institute of Early American History and Culture. “New Perspectives on the Transatlantic Slave Trade” (special issue). William and Mary Quarterly 58, no. 1 (January 2001).

Romero, Brenda. “The Mechanics and Narratives of Slavery in Game Space.” Symposium: Slave Pasts in the Present: Narrating Slavery through the Arts, Technology, and Tourism. King Juan Carlos Center, New York University, April 2018.

Rosinbum, John. “Teaching the Slave Trade with Voyages: The Transatlantic Slave Trade Database.” AHA Today, October 31, 2016.

Thomas, Dexter. “I Played ‘Slave Tetris’ So Your Kids Don’t Have To.” Los Angeles Times, September 7, 2015.


(1.) David Herlihy, “Computation in History: Styles and Methods,” Computer 11 (August 1978): 8–17.

(2.) The University of Wisconsin Libraries’ Data & Information Services Center maintains an archive of early machine-readable datasets.

(3.) The close study of the economics of English slaving interests continues to inform database development, including the ongoing work of the Centre for the Study of the Legacies of British Slave-ownership, based at University College London. The wider field of English involvement in slavery and antislavery has been at the center of archival digitization and online educational initiatives funded by the British Library and the National Archives.

(4.) “History of the Project,” Slave Voyages.

(5.) An extremely useful database of cited sources from the British Parliamentary Papers is now available online at Visualizing Abolition: A Digital History of the Suppression of the African Slave Trade, a University of Missouri Honors College project. By subscription, the commercial information-content company ProQuest permits full-text searchability of the parliamentary papers and related documents originally appearing in the Slave Trade Correspondence. The Slavery and Anti-Slavery: A Transnational Archive and Slavery, Abolition and Social Justice collections, aggregated by educational publishers Gale and Adam Matthew, respectively, lend additional reach to full-text searching of digitized print source materials.

(6.) Gregory O’Malley, “Balancing the Empirical and the Humane in Slave Trade Studies,” OIEAHC Uncommon Sense—The Blog, January 14, 2015.