Engaging Science, Technology, & Society

Refusal in Data Ethics: Re-Imagining the Code Beneath the Code of Computation in the Carceral State


CHELSEA BARABAS
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
UNITED STATES

Abstract

In spite of a growing interest in ethical approaches to computation, engineers and quantitative researchers are often not equipped with the conceptual tools necessary to interrogate, resist, and reimagine the relationships of power which shape their work. A liberatory vision of computation requires de-centering the data in “data ethics” in favor of cultivating an ethics of encounter that foregrounds the ways computation reproduces structures of domination. This article draws from a rich body of feminist scholarship that explores the liberatory potential of refusal as a practice of generative boundary setting. To refuse is to say no—to reject the default categories, assumptions and problem formulations which so often underpin data-intensive work. But refusal is more than just saying no; it can be a generative and strategic act, one which opens up space to renegotiate the assumptions underlying sociotechnical endeavors. This article explores two complementary modalities of refusal in computation: “refusal as resistance” and “refusal as re-centering the margins.” By exploring these two modes of refusal, the goal of this paper is to provide a vocabulary for identifying and rejecting the ways that sociotechnical systems reinforce dependency on oppressive structural conditions, as well as offer a framework for flexible collective experimentation towards more free futures.

Keywords

ethics; justice; feminism; refusal; power; abolition

[C]oding, in the guise of objective science, expands the project of settler colonial knowledge production—inquiry as invasion is built into the normalized operations of the researcher. Coding, once it begins, has already surrendered to a theory of knowledge. We ask, what is the code that lies beneath the code? (Tuck and Yang 2014b, 811)

Introduction

In spite of a growing interest in ethical approaches to computation, engineers and quantitative researchers are often not equipped with the conceptual tools necessary to interrogate, resist, and reimagine the relationships of power which shape their work. Like most euphemisms, terms like “diversity” and “fairness” have begun to blunt our collective imagination, making it difficult to articulate and intervene on the ways that data production and analysis perpetuate complementary structures of privilege and harm (Hoffmann 2021).

A liberatory vision of ethical data practice requires de-centering the data in “data ethics” in favor of cultivating an ethics of encounter. As Bergman and Montgomery (2017) explain, an “ethics of encounter” is grounded in a relational theory of change, building our capacity for collective struggle and lived transformation through forging new ways of relating across difference (ibid., 238). In contrast to techno-centric efforts to render algorithms more inclusive, accurate, or fair, an ethics of encounter requires us to grapple with the ways technocratic modes of innovation maintain violent social relations, so that we might begin “an ongoing process of becoming otherwise” (ibid., 112).

The concept of “refusal” offers a foundational framework for computational practitioners interested in pursuing a relational ethics of encounter in their work. I use the term “computational practitioners” to indicate a wide range of actors in academia, industry, and government who are engaged in data-centered discourse, research, and design. Refusal is an especially useful concept for computational practitioners involved in the emerging subfield of data ethics. This article draws from my own experiences working as a member of this community. For the last six years, I have collaborated with data scientists, community organizers, and government officials on a variety of applied data projects related to the US criminal legal system, as well as participated in a number of critical interventions designed to deepen the conversation regarding what constitutes ethical data practice in this domain. Through these efforts, I have been both the recipient and instigator of acts of refusal which aimed to shed light on unnamed power asymmetries and reorient conversations towards more transformative modes of collaborative problem-solving.

In addition to my own experiences, this article builds from a rich body of feminist scholarship that explores the liberatory potential of refusal as a practice of generative boundary setting (Tuck and Yang 2014a; Honig 2021; Simpson 2014; McGranahan 2016; Wright 2018). As Tuck and Yang argue, “instead of a settler-colonial configuration of knowledge that is petulantly exasperated and resentful of limits, a methodology of refusal regards limits of knowledge as productive, as indeed a good thing” (2014a, 239). In recent years, feminist thinkers have worked to delineate ways that researchers might refuse harmful data regimes while affirming commitments to more radical and transformative social and scholarly practices (Hoffmann 2021; Cifor et al. 2019; Barabas et al. 2020a). Some scholars have explored the ways that marginalized groups refuse co-optation into harmful sociotechnical systems (Benjamin 2016b; Gangadharan 2020; Zong and Matias 2020; Zong 2020), while others have introduced opportunities for technology designers to embrace refusal as a generative and collaborative practice in their work (Graeff 2020; Benjamin 2016b).

Refusal requires computational practitioners to push beyond easy liberal fixes and abstract formulations of fairness, to grapple with the practical and material ways that data science is used to undermine collective struggles for liberation. Refusal is a particularly important practice for computational practitioners to engage in today, when there is heightened demand for data-driven reform amid widespread abolitionist efforts to roll back the tendrils of the carceral state. In the United States, philanthropic organizations and government agencies have funded a number of high-profile initiatives aimed at reforming the carceral state through the adoption of “smart,” “evidence based,” and “data driven” practices (Stoller 2019; Ball 2015). Elite research initiatives such as the Chicago Crime Lab, Harvard’s Access to Justice Lab, and the Stanford Computational Policy Lab endow computational practitioners with the resources, access, and influence necessary to intervene in high-stakes debates regarding criminal legal reform and the administration of social services. In this context, “data science” is often framed as the key skill set needed to “drive social impact through technical innovation” (Stanford Computational Policy Lab, n.d.).

At the same time, there are a growing number of examples of data practitioners pushing back against uncritical entanglements with the carceral state. For example, thousands of employees from large tech companies like Google and Microsoft have staged protests to pressure their companies to withdraw from lucrative military contracts (Wakabayashi and Shane 2018; Schneider and Sydell 2019). Scholars have come together to demand that academic journals rescind offers to publish studies which fuel the “tech to prison pipeline” (Barabas et al. 2020a). And student groups have organized petitions to cancel computational courses that treat marginalized communities as “laboratories for experimentation” (Kahn and Levien 2021).

All of these efforts have emerged at a time when the carceral state is undergoing a crisis of legitimacy. By carceral state, I mean the expansive system of state-sanctioned capture, confinement, and control that underpins our current unjust social order, both in the United States and globally. The carceral state is a fundamentally relational phenomenon that extends far beyond brick-and-mortar prison walls—as Ruth Wilson Gilmore explains, “prison is not a building ‘over there’ but a set of relationships that undermine rather than stabilize everyday lives everywhere” (Gilmore 2007, 242). These relationships are grounded in punitive systems of control that undermine the capacity for people to protect one another and resolve conflicts themselves, while simultaneously buttressing existing relations of power and domination. The carceral state is also a global phenomenon. Domestic policing programs are deeply entangled with militarized security practices abroad—methods for surveilling and punitively controlling people in one context are often repurposed for other locales around the globe (Schrader 2019). In this article, I draw primarily from examples based in the US context because that is where my own fieldwork has been grounded, while also nodding toward other important strains of research and praxis that are taking place outside of the United States.

Historically, law enforcement agencies and policymakers have embraced science and technology as a stabilizing force during moments when the carceral state is undergoing a crisis of legitimacy (Wang 2018). Data collection and analysis have long played an important role in upholding the racialized and gendered systems of meaning and control that legitimize punitive social practices. For example, US crime statistics were used to conflate Blackness with danger and criminality during the Progressive Era in order to justify racial violence and systematically exclude African Americans from the broader public sphere (Muhammad 2019). Modernized crime reporting initiatives were used during the Civil Rights Era of the 1960s to conflate social unrest with criminality (Murakawa 2014). The US government exported its police surveillance practices abroad in order to construct and keep track of “enemies of the state” during the Cold War (Weld 2014; Schrader 2019).

Today, data collection serves as a powerful vehicle for carceral expansion into other important realms of life, such as educational, financial, and healthcare institutions, often under the guise of progressive care-based rhetoric (Brayne 2014; Friedman 2021; Katz 2020). In this context, it is important for data practitioners to interrogate what Tuck and Yang term the “code beneath the code” (2014b, 812) regarding who benefits from and who is harmed by data-intensive interventions within the ever-expanding dominion of the carceral state.

Refusal could serve as a powerful analytic for computational practitioners who are interested in unpacking the code beneath the code of data work. To refuse is to say no—to reject the default categories, assumptions and problem formulations which so often frame the work of data science. But it’s more than that; refusal can be a generative and strategic act, one which opens up space for us to forge new kinds of relationships and renegotiate the assumptions and premises underlying socio-technical work (Benjamin 2016b). Refusal allows the data practitioner to reposition themselves as actively producing the conditions of inquiry in ways that are accountable to the groups who are often silenced by data-driven discourse—it breaks down harmful modes of knowledge production and imagines a new, reparative role for itself. This article explores two complementary modalities of refusal in data-intensive work: “refusal as resistance” and “refusal as re-centering the margins.” In order to concretize these concepts, I draw from examples from my own fieldwork, as well as other projects that have catalyzed acts of refusal within my broader community of collaborators.

As Tuck and Yang argue, refusal is not about identifying an unproblematic object of study, but rather it’s about cultivating “an ethic of studying to object” (2014b, 814). Refusal as resistance involves identifying and rejecting the ways that sociotechnical systems undermine collective resistance and reinforce dependency on oppressive structural conditions. Building from Ruha Benjamin’s concept of “secondhand refusal” (2016b, 982), I explore the transformative potential of refusal as resistance when practiced by powerful institutional actors like computer scientists and technology designers. Computational practitioners often occupy privileged positions within organizations as either valued insiders or welcomed outsiders whose technical skills are in high demand. In such contexts, computational practitioners are in a powerful position to negotiate and challenge the underlying theories of change associated with a given data project. When met with resistance, such individuals have the choice to decline to participate in the work.[1] By contrast, the people targeted by carceral data systems are often not given a chance to refuse participation. In this context, secondhand refusals are crucial, because they give computational practitioners the opportunity to voice dissent in solidarity with vulnerable and marginalized populations.

In my discussion of “refusal as resistance,” I outline three common pitfalls that computational practitioners fall into when engaging in work regarding the carceral state: 1) “proving” harm, 2) adopting deficit narratives, and 3) optimizing harmful systems. I explain how these pitfalls undermine collective efforts toward transformative change, and then explore ways computational practitioners might resist these pitfalls in their work. Finally, I discuss ways we might view “refusal as resistance” as a generative project, one that creates space for new possibilities to emerge.

Refusal as re-centering the margins explores the potential of refusal as a transformative relational practice. Building on the work of bell hooks, I conceptualize the margins as a space of radical openness and possibility, rather than as a site of deprivation and need (hooks 1989; Shah 2015). Refusal as re-centering the margins hinges on an acknowledgement that transformative social work is always already present and in formation (Kaba 2021), and explores the ways that computational practitioners might contribute to this work without assimilating it into dominant power structures. Refusal as re-centering the margins requires an intentional redistribution of resources and power (Benjamin 2016b), as well as the cultivation of “common notions” (Bergman and Montgomery 2017, 42) that reshape our habits for working together across power differentials. Such an approach lays the foundation for flexible collective experimentation towards the possible, rather than the probable (McGranahan 2016; Benjamin 2016a).

Refusal as Resistance

“Proving” Harm

One of the most prominent theories of change underlying computational work is the idea that quantitative analysis itself can make an impact by proving harm. For example, perhaps one could develop an observational study to quantify the role that race plays in police-involved shootings (Fryer 2016), or construct a randomized controlled trial to measure the impacts of a new violence prevention program (Kubiak et al. 2015). One common, seemingly progressive hope underlying such studies is that, by rigorously documenting the effects of such policies, researchers might bolster the legitimacy of personal testimonies or qualitative research that already exists and thus persuade policymakers and law enforcement agencies to abandon harmful practices.

Activists and feminist scholars have pushed back against this widespread intuition, arguing that damage-centered narratives that are designed to convince an outside adjudicator that violence has been perpetrated rarely lead to accountability or reparations for the aggrieved (Hartman 1997; Gilmore 2002; Tuck and Yang 2014a; Onuoha 2020). As Mimi Onuoha argues, “The idea that structural racism can be proven and overcome by gathering just enough or the right kind of evidence is nothing more than a myth. Historically, it has rarely been the case . . . the grand ritual of collecting and reporting this data has not improved the situation” (2020). This is particularly true in instances when harms are carried out against marginalized racial communities, such as Black and Indigenous peoples. For example, scientific approaches to documenting the harms of punitive legal practices which disproportionately impact people of color have proven largely ineffective in courts of law (Gilmore 2002).

Moreover, such approaches reinforce knowledge hierarchies that effectively silence the voices of directly impacted people and maintain the computational practitioner’s privileged status as the primary arbiter of truth (hooks 1990; Tuck and Yang 2014a). All too often, the partial insights gleaned from computational research are used to undermine robust bodies of historically grounded and community-centered knowledge that are already available. The result of such work is the production of research that makes far-reaching claims based on faulty assumptions.

For example, a recent observational study made headlines by claiming that “contrary to conventional wisdom, parental incarceration has beneficial effects on children,” especially for Black children (Norris, Pecenco, and Weaver 2021, 1). The authors of the study do not substantially engage with the large body of research that already exists documenting the various physical, psychological, and social harms associated with parental incarceration. Rather, they measure the primary “beneficial impact” of parental incarceration in extremely narrow terms—the likelihood that a child will be charged, convicted, or incarcerated for a crime before the age of twenty-five. Such limited notions of impact are commonplace in quantitative research regarding the carceral state, because the maintainers of the criminal legal system collect only a narrow set of operational data about the system’s impact, usually in terms of “recidivism.”

Recidivism is a classic example of a quantitative metric which reveals virtually nothing about the conditions that people live under after they leave prison. Yet it continues to serve as the default standard in measuring post-carceral success, largely because it is a convenient data point to capture via administrative systems. Such myopic data collection results in serious consequences, often erasing the well-established harms of incarceration and recasting violent policies as benevolent interventions.

The state of existing data regimes presents a serious ethical challenge for quantitative researchers in this area. As Lily Hu argues, “Either [the researcher] buys herself the ability to work with troves of data, at the cost of implausibility in her models and assumptions, or she starts with assumptions that are empirically plausible but is left with little data to do inference on” (2021). As such, an ethical computational practice will often require practitioners to look beyond the limitations of current data regimes, to imagine the data for what it might have been.

For example, I was once part of a research team that was negotiating access to data housed in a state government’s administrative office of the courts. During the course of our interactions, one government official suggested that we use their data to evaluate the impact of electronic monitoring on pretrial outcomes. Although this was beyond the scope of our original research topic, our team considered taking on the project as a stepping stone to accessing the data we wanted.

Before committing to the study, we consulted with James Kilgore, a researcher and organizer against the use of “e-carceration,” or punitive technological interventions that deprive people of their liberty. Kilgore warned us against pursuing an impact study that was limited to data collected by the courts. He pointed us to a widely cited study on the effectiveness of electronic monitoring, wherein the authors used propensity score matching in order to compare the outcomes of people assigned to electronic monitoring to those who were not. The authors included one hundred and twenty-two covariates in their analysis, boasting that “the richness of [their] covariate set is quite extraordinary” (Bales et al. 2010, 47).

Yet as Kilgore (2017) points out elsewhere, these propensity scores did not include any of the variables that most analysts and formerly incarcerated people identify as the most important factors contributing to recidivism, such as employment, access to housing, and the strictness of the supervising law enforcement agent. As Kilgore (ibid., 3) argues, without taking such factors into account, the study “becomes merely an exercise in statistical gamesmanship rather than serious research.” After careful consideration, we decided not to proceed with the proposed study, largely due to concerns that the limited selection of data points would erase the harmful impacts of electronic monitoring and silence the testimonies of directly impacted people who were organizing against the use of this form of e-carceration.
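
Kilgore’s critique can be made concrete with a small simulation. The sketch below is not a re-analysis of Bales et al.; it is a toy illustration, with invented variables, of why propensity score matching can only balance the covariates it is given. Here “housing” stands in for the kind of omitted factor Kilgore names: it lowers both the chance of being placed on monitoring and the chance of rearrest, so matching only on recorded administrative data manufactures an apparent effect of monitoring where none exists.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4_000

# Hypothetical world: stable housing lowers both the chance of being placed
# on electronic monitoring and the chance of rearrest. Monitoring itself has
# no true effect on rearrest in this simulation.
housing = rng.binomial(1, 0.5, n)                      # omitted confounder
age = rng.normal(30, 8, n)                             # recorded covariate
p_monitor = 1 / (1 + np.exp(-(0.5 - 1.5 * housing + 0.02 * (age - 30))))
monitored = rng.binomial(1, p_monitor)
rearrest = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 - 1.0 * housing))))

def matched_effect(covariates):
    """Estimate monitoring's 'effect' on rearrest via 1:1 nearest-neighbor
    matching on a propensity score fit to the given covariates."""
    ps = LogisticRegression().fit(covariates, monitored).predict_proba(covariates)[:, 1]
    treated = np.where(monitored == 1)[0]
    control = np.where(monitored == 0)[0]
    # Match each monitored person to the closest unmonitored propensity score.
    matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
    return rearrest[treated].mean() - rearrest[matches].mean()

# Matching only on what the courts record (age) leaves housing unbalanced,
# producing a spurious "effect"; adding the confounder removes it.
print(matched_effect(age.reshape(-1, 1)))
print(matched_effect(np.column_stack([age, housing])))
```

Running the sketch yields a nonzero “effect” under the limited covariate set and an estimate near zero once the confounder is included; in Kilgore’s terms, the statistical gamesmanship turns entirely on what the administrative data happen to record.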

Damage-centered research can also perpetuate harm in carceral contexts by serving as a distraction that diverts precious time, attention, and resources away from more transformative efforts for change. This is especially true during times of social upheaval. Calls for expanded data collection are often framed as pragmatic and politically neutral responses to widespread demands for change, but they have significant social and political consequences.

For example, during the height of the Covid-19 crisis in 2020, a network of grassroots bail funds reached out to me and other allied researchers with concerns about a study being launched by Harvard’s Access to Justice Lab. The stated goal of the study was to evaluate whether the presence of counsel at first appearance in court increased the likelihood that a person would be released from jail (North 2020). The study was launched amid widespread calls to decrease jail populations around the country, given that jails and prisons had become major vectors for the spread of Covid-19 (Schnepel 2020). Jim Greiner, the faculty director of the Access to Justice Lab, framed the study as an opportunity to glean “concrete evidence” on the potential benefits of legal counsel, which he argued could pave the way for useful policy changes (North 2020).

Yet, as others at Harvard Law School pointed out, the study was harmful and unnecessary because 1) evidence already exists demonstrating that access to counsel improves outcomes, and 2) available research reveals how pretrial detention harms detained people and their loved ones (Naples-Mitchell 2021). As Katy Naples-Mitchell of Harvard’s Houston Institute argues, “Instead of randomizing access to interventions for which there is already an evidence base in order to prove the causal mechanism with greater precision, we should be spending our efforts, and our funds, on implementing those policies as widely and quickly as possible” (ibid.). Naples-Mitchell (ibid.) points out that there is a stark disparity in the levels of evidence required to roll out expanded policing, surveillance, and incarceration compared to the proof required to undo such harmful systems. Rather than simply act on existing evidence, the Access to Justice Lab decided to reframe a beneficial intervention as an unknown. Meanwhile, thousands of people were needlessly exposed to a life-threatening virus behind bars. In the context of the carceral state, data-intensive studies like this impose serious burdens on directly impacted communities in order to investigate questions whose answers are already available, but often ignored.

Data-intensive projects like observational studies and randomized controlled trials have become increasingly popular in recent years as abolitionist social movements such as the Movement for Black Lives work to dismantle the carceral state. For example, in response to uprisings against police brutality in 2015, FBI Director James Comey argued that “the first step to understanding what is really going on in our communities is to gather more and better data related to those we arrest. . . . Data seems a dry and boring word but, without it, we cannot understand our world and make it better” (2015).

Khalil Gibran Muhammad (2015) argues that this statement frames data collection as a politically neutral response to the problem of police brutality, while simultaneously implying that further study is needed before we can truly understand what the core issues are. Rather than listen to the communities who are directly impacted by police violence, Comey asserts that expanded data collection is the only way to understand the problem and identify steps for addressing it. Such interventions dilute and diffuse the momentum of transformative social movements by giving the appearance of taking action while maintaining the status quo (Hoffmann 2020).

Getting at the code beneath the code of computational work requires us to ask whose voices are being silenced and what sacrifices are being demanded for the sake of the research. As Ruha Benjamin argues, such questions ultimately push us away from narratives that are “meant to convince others of what is . . . to expand our visions of what is possible” (2016a, 2). Refusal is a practice that we can use to redirect academic analysis away from harmful, damage-centered narratives toward the institutions and policies that produce those narratives in the first place (Zahara 2016).

Adopting Deficit Narratives

One important aspect of refusal involves an intentional shift in how we interpret data, away from measuring the “antisocial” pathologies of “risky” individuals and towards examining a carceral system that surveils and punishes people in disparate ways. As Catherine D’Ignazio and Lauren Klein argue, the language surrounding data analysis of poor or marginalized communities is often grounded in “deficit narratives,” or descriptions that reduce a given population to their perceived deficiencies, rather than highlighting the richness that they possess (2020, 167). In the context of the carceral state, language is a powerful tool for naturalizing harm against historically marginalized groups, but it is also an important site of struggle (hooks 1989). As Kimberlé Crenshaw argues, when discussions of incarceration are framed around “at-risk” populations, the result is a “subtle erasure of the structural and institutional dimensions of social justice politics” (2012, 1466).

Such framings often show up in the way data are characterized in computational models. For example, data about arrests, convictions, and incarceration are often used to measure “public safety risk” even though numerous researchers have pointed out the fundamental measurement errors that occur when computer scientists use such data as a shorthand for danger or risk (Barabas 2020a; Dolovich 2011; Harris 2003; Prins and Reich 2018). Similarly, when computer scientists develop machine learning algorithms to identify or predict “criminality” using biometric and/or criminal legal data, they materially conflate the shared, social circumstances of being unjustly overpoliced with pathological criminality (Barabas et al. 2020a). Such claims are based on an ahistorical interpretation of data, devoid of notions of structural harm and state-sanctioned racialized violence.
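
The mechanics of this measurement error are easy to demonstrate. The following toy simulation, with rates invented purely for illustration, constructs two groups with identical underlying behavior that are policed at different intensities; any model trained on the resulting arrest records as a label for “criminality” would learn the policing disparity, not a difference in behavior:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

group = rng.binomial(1, 0.5, n)        # two equally sized groups
offense = rng.binomial(1, 0.10, n)     # identical true offense rate: 10%

# Arrest records reflect policing intensity, not just behavior: in this toy
# world, group 1 is policed four times as heavily as group 0.
p_caught = np.where(group == 1, 0.60, 0.15)
arrest = offense * rng.binomial(1, p_caught)

# The arrest "label" makes group 1 look roughly four times as "risky,"
# even though offense rates are identical by construction.
for g in (0, 1):
    print(f"group {g}: offense rate {offense[group == g].mean():.3f}, "
          f"arrest rate {arrest[group == g].mean():.3f}")
```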

Yet these flawed measures persist because they are politically useful as rhetorical tools for justifying violent and punitive decisions (Stop LAPD Spying Coalition 2018). Such criminalizing tactics are not new. As Naomi Murakawa argues, attempts to modernize law enforcement and the courts have historically served to legitimize and expand the carceral state, maintained through a “politics of pity” (2014, 151) that depends on portrayals of Black people as damaged and potentially violent. The persistence of this interpretive lens is not surprising, given the political economy of data-intensive work. Computation often relies on data and analytical tools that are housed in powerful institutions, which subtly shape research agendas and problem framings by acting as gatekeepers. In addition, the data collection regimes of powerful institutions tend to skew toward the surveillance of marginalized populations who often do not have the political capital to successfully resist such tracking (Eubanks 2018).

As a result, available data, and the accompanying interpretations of that data, tend to erase systemic violence and shift blame and risk onto marginalized groups. These data then serve as the basis for dividing people into categories of deserving and undeserving of life-changing intervention (Kaba 2020; Gilmore 2017). One of the most important forms of refusal we can engage in as computational practitioners is to reject these default problem framings and reorient the algorithmic gaze “upward” to examine the people and institutions who occupy positions of power (Barabas et al. 2020b; Benjamin 2019b). Recent efforts to resist pretrial risk assessment offer a great example of how we might support such shifts in computational work.

Pretrial risk assessments are a classic example of the downward orientation underlying most data-driven reforms. These tools were introduced as a solution to mass pretrial incarceration, a serious and growing problem in the US (Barabas et al. 2020b). This issue is primarily driven by a severe case of “institutional decoupling” in the US courts (Christin 2017), whereby judges increasingly diverge from state and federal laws which mandate that pretrial detention be the limited exception, not the rule. Proponents of pretrial risk assessment claim that such tools could help to course correct judges’ behavior by offering a more accurate and objective view into who poses a serious risk to the community. Yet, to date, these tools have not made a significant impact on the way judges make decisions regarding pretrial release (Stevenson 2017).

In 2016, I was a part of a team of researchers who were interested in developing a computational platform that would enable court administrators to audit the accuracy and biases of algorithmic risk assessments. Our hope was that we could increase the rate at which judges released people pretrial by improving the transparency and reducing the biases of such tools. This approach was aligned with the broader discourse surrounding pretrial risk assessments, where there was growing interest in the development of strategies to audit and reduce the biases of high-stakes decision-making tools (Kleinberg et al. 2018; Chouldechova 2017).
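
For a sense of what such an audit entails, the sketch below computes one of the standard checks discussed in that literature: comparing a tool’s false positive rates across groups, in the spirit of Chouldechova (2017). The arrays are toy placeholders rather than real court data.

```python
import numpy as np

# flagged: the risk tool labeled a person "high risk"
# outcome: whether a rearrest was later recorded
# group:   a protected attribute recorded in the court data
flagged = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
outcome = np.array([1, 0, 0, 0, 0, 1, 1, 0, 0, 0])
group   = np.array(["g1", "g1", "g1", "g1", "g1", "g2", "g2", "g2", "g2", "g2"])

for g in np.unique(group):
    members = group == g
    # False positive rate: share of people never rearrested whom the tool
    # nonetheless flagged as "high risk."
    fpr = flagged[members & (outcome == 0)].mean()
    print(f"{g}: false positive rate = {fpr:.2f}")
```

Notably, even a “clean” audit of this kind keeps the defendant as the unit of analysis, which is precisely the framing the organizers described below pushed us to refuse.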

Early on in the project, we were challenged to fundamentally re-think our approach by a group of community organizers working on pretrial justice. We’d originally reached out to the group to see if they would be interested in co-designing our audit platform. Rather than collaborate with us on our terms, the organizers engaged in an act of refusal, pushing us to reconsider the project entirely. According to them, the main problem with pretrial risk assessments wasn’t that they were inaccurate or that judges didn’t adhere to them. Rather, it was that such tools focus exclusively on modeling the supposed risk of people awaiting trial, instead of shedding light on a punitive courtroom culture run amok. They argued that the same court data could be used to surface insights regarding the way judges make pretrial release decisions. Such a reorientation would shift the solution framing away from problematic measures of individual “risk” in marginalized populations, toward studying the behaviors of decision makers who possess the most agency to drive pretrial outcomes.

In light of these comments, our team reformulated our understanding of the problem we aimed to address and ultimately pursued a different intervention. Rather than seek to improve the accuracy of existing risk assessments, we developed an alternative tool that measured the risk that a given judge would unlawfully deprive someone of their liberty before their trial. The goal of this work was to shift the conversation away from measuring the risk of individual defendants to increasing the accountability of powerful system actors. This project exemplifies a deliberate shift in the unit of analysis, away from the people who bear the brunt of our unjust social order, and toward the institutions which perpetuate such harmful policies. Computational work which shifts its focus onto the study of powerful institutions could lay the foundation for more robust forms of accountability and shed light on the structural factors that produce unjust outcomes.
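
A minimal sketch of this reorientation, using hypothetical column names rather than the tool our team actually built, shows how the same court records can be aggregated by decision maker instead of scored per defendant:

```python
import pandas as pd

# One toy row per arraignment: the presiding judge and whether the person
# was detained (or assigned unaffordable bail) before trial.
records = pd.DataFrame({
    "judge":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "detained": [1, 1, 0, 0, 0, 1, 0, 1, 1, 1],
})

# Per-judge detention rates: how far does each judge drift from the legal
# presumption that pretrial detention be the limited exception?
by_judge = (
    records.groupby("judge")["detained"]
    .agg(cases="count", detention_rate="mean")
    .sort_values("detention_rate", ascending=False)
)
print(by_judge)

# Flag judges whose rate exceeds a benchmark set with community
# stakeholders (the 0.5 threshold here is an arbitrary placeholder).
print(by_judge[by_judge["detention_rate"] > 0.5])
```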

Refusal as resistance, then, requires researchers to cultivate an approach to analyzing data “within a matrix of commitments, histories, allegiances and resonances” (Tuck and Yang 2014b, 811) that informs what can and cannot be known through computational analysis. Such refusals hinge on an acknowledgement that quantitative work is always interpretive and requires having a fluent command of the power dynamics underlying the way questions are framed and the data used to answer those questions (Gangadharan 2020; Irani et al. 2010).

Optimizing Harmful Systems

One of the most effective means of co-opting the demands of transformative social movements is through the adoption of technocratic reforms that embrace progressive rhetoric while normalizing and expanding systems of harm (Benjamin 2019a; Katz 2020; Murakawa 2014; Spade 2015). Such interventions are what Ruth Wilson Gilmore (2007, 242) calls “reformist reforms” because they widen, rather than dismantle, the carceral state’s net of social control. An important and generative mode of refusal, then, involves disengaging from computational work that aims to optimize the efficiency and expand the reach of carceral systems through ever more data collection, maintenance, and analysis.

One of the primary vehicles for carceral expansion is through bolstering the state’s capacity to produce facts on the ground that support punitive policies (Melamed 2019; Sutherland 2019). Computational systems are particularly useful vehicles for this kind of expansion, due to the depoliticized nature of mainstream engineering culture and education, which encourages engineers to tinker on the edges of large bureaucratic systems without ever grappling with the downstream consequences of their efforts. By recasting the dirty work of the carceral state in terms of bureaucratic optimization challenges and technical data audits, officials are able to abstract away the pain and misery enabled by carceral systems while simultaneously giving computational practitioners a chance to demonstrate their technical prowess (Katz 2020). Such “win-win” scenarios stabilize structural violence by strengthening dominant systems of meaning and control rather than dismantling them to make room for alternative approaches to safety and justice. As Ruth Wilson Gilmore explains, “big answers are the painstaking accumulation of smaller achievements. But dividing the problem into pieces in order to solve the whole thing is altogether different from defining a problem solely in terms of the bits that seem easiest to fix . . . [it] is the difference between reformist reform—tweak Armageddon—and non-reformist reform—deliberate change that does not create more obstacles in the larger struggle” (2014, 13–14).

The depoliticized nature of engineering culture means that technologists tend to dismiss epistemic questions regarding how data are interpreted and operationalized as “non-technical” concerns, irrelevant to the “real” work of engineering (Cech 2013). This ideology assumes that political and social contexts can and should be kept separate from the purely technical concerns of engineering (Cech and Sherick 2015). As a result, criminalizing logics and discourses are easily embedded into these systems, because the interpretation and operationalization of data analysis is considered outside the purview of system engineers.

State agencies, in turn, often interpret the results of algorithmic systems according to a default philosophy of punishment in marginalized communities (Roberts 2019). For example, a company called LEO Technologies partnered with the National Sheriffs’ Association during Covid-19 to “support the health and safety of our deputies and inmates” (Thompson 2020). LEO offers a product called Verus, a transcription service that uses natural language processing to produce real-time transcriptions of incarcerated people’s phone calls, as well as keyword-based searches and alerts. LEO’s marketing materials are couched in a language of compassionate care, claiming that incarcerated people’s phone calls “contained valuable intelligence” that could be used to listen for “cries for help from vulnerable inmates . . . information that could potentially save lives” (LEO Technologies 2020b).

However, a few months into the pandemic, LEO came under fire for their “absurd” attempts to mitigate the impacts of Covid-19 in prisons (Lacy et al. 2020). Rather than removing obvious obstacles to care, prisons opted to eavesdrop on prisoners’ phone calls to supposedly identify people who might be sick, as well as “other threats” related to the disease. As it turns out, those other “threats” included outlandish and unsubstantiated events that shifted the focus away from the dire, life-and-death situation that incarcerated people faced behind bars during the pandemic, recasting the prisoners themselves as the imminent danger to be managed.

For example, LEO claimed that by searching for keywords like “kill him” they had identified inmates with plans “to kill anyone who was discovered to have the virus in order to contain the spread” and by searching for the word “coughing” they had identified a person who “had declared that inmates would begin coughing on guards if coronavirus was discovered in the jail” (LEO Technologies 2020a). Rather than mobilize resources to increase access to testing and treatment, prison officials invested in software that helped them manufacture a different narrative, one that recast captive populations as the real threat to be mitigated during a deadly pandemic. Meanwhile, transmission and death rates in prison skyrocketed at rates three to five times higher than the general population (Saloner et al. 2020).
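
The underlying mechanism is crude: a keyword alert of the sort described in LEO’s own materials carries no information about intent, only vocabulary. A minimal sketch, with invented keywords and transcript lines, shows how context-free matching converts ordinary speech into “threats”:

```python
# Invented keywords and transcript lines, for illustration only.
KEYWORDS = ("coughing", "kill")

transcripts = [
    "everyone in my unit is coughing and we still have no masks",
    "this cold is going to kill me if they don't turn the heat up",
]

for line in transcripts:
    hits = [kw for kw in KEYWORDS if kw in line]
    if hits:
        # Both innocuous sentences trigger alerts: the match knows nothing
        # about what the speaker meant, only which words they used.
        print(f"ALERT {hits}: {line!r}")
```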

This example powerfully illustrates the ways data are used to prop up narratives that justify the systematic abandonment and sacrifice of people’s lives for the sake of preserving the life of carceral institutions (Friedman 2021). In response to public outcry after the revelation of these supposed use cases, LEO Technologies CEO Scott Kernan asserted that, “Our company points [prisons] to a point in the call where a word or phrase was said that they have concerns about. What they do with that information is purely out of our control” (Lacy et al. 2020). Kernan’s position is emblematic of a broader stance within engineering culture, which offloads responsibility for harm propagated by sociotechnical systems onto circumstances “outside of their control.” In their marketing materials, the company prefers to emphasize the technical performance of their system, highlighting that it is automated (enabling continuous surveillance), fast (offering “real-time” intelligence), and objective (without “implicit bias”) (LEO Technologies n.d.).

Refusal in this context requires computational researchers to actively engage with the material and epistemic stakes of their work, by recognizing the ways that techno-reforms ultimately justify and expand the violent practices of the carceral state (Katz 2020). By disengaging from efforts to expand, improve, and maintain the carceral state’s administrative power through data collection and analysis, we create space for experimenting with new ways of improving people’s life chances beyond the constraints of the criminal punishment system. In this sense, refusal can be understood as a beginning that starts with an end (Barabas 2020b). This stance pivots on a rejection of the notion that carceral systems are simply broken and in need of fixing. As Mariame Kaba urges, let us not focus on asking “‘What do we have now, and how can we make it better?’ Instead, let’s ask, ‘What can we imagine for ourselves and the world’” (Kaba 2021, 36).

Resistance as a Generative Act

The above sections outline three important opportunities for “refusal as resistance” in computational work. In the context of the carceral state, these modes of resistance hinge on secondhand refusals from computational practitioners who decline to engage in data projects that are often lucrative and professionally beneficial, but that ultimately reproduce harmful social structures.

Refusal as resistance is more than just a decision to decline what’s on offer—it opens up opportunities for alternative social formations to emerge, ones which re-center the communities who are often targeted and silenced by data analysis. Refusal as resistance embraces limits on knowledge production and technological capacity, framing those limits as productive boundaries that ultimately create space for us to renegotiate the assumptions and key vocabularies underlying data work. As Sara Ahmed argues, such refusals should be understood as generative acts, a kind of “counter-institutional project . . . creating new paths for others to follow” (2017). Resistance in this context lays the groundwork for us to learn from dispossessed people without serving up stories of harm or peddling criminalizing narratives.

But refusal as resistance, especially in the form of secondhand refusals enacted by relatively privileged actors, is only a first step. At the end of the day, computational researchers and toolmakers must not only create space but also cede space in order to enact new modes of relating to one another outside of techno-centric theories of change. In the following section, I outline a framework for refusal as re-centering the margins which explores some of the practical aspects of doing this work.

Refusal as Re-centering the Margins

The crux of refusal as re-centering the margins lies in identifying practical ways for data practitioners to participate in and support social transformation rather than control it. All too often, data analysis is used as a tool for circumscribing political agendas and maintaining the “savior status” of technical experts (Raji, Scheuerman, and Amironesei 2021). While traditional strategies of mass organizing are chronically underfunded, philanthropic and governmental organizations channel resources toward projects with quantifiable outcomes that are housed in elite institutions. Social justice work becomes a career track pursued by specialized professionals who maintain a monopoly over decision-making and policy discourse (Spade 2015). Refusal as re-centering the margins moves beyond technocratic notions of social change to develop a shared imagination of what transformative change looks like and what we need to do to get there.

Re-centering the margins is easier said than done. Such work requires computational practitioners to embrace a theory of change that decenters data and focuses instead on cultivating new ways of relating “at the margins” (Shah 2015; Dutta and Pal 2010). As bell hooks teaches us, the margins are a site of radical possibility, a “central location for the production of a counter hegemonic discourse that is not just found in words but in habits of being and the way one lives” (1989, 20). As hooks also reminds us, the margins are not always a comfortable place to operate from—and efforts from computational practitioners to engage at the margins might be met with skepticism and distrust. Benjamin (2016b) explains that such distrust often stems from a long history of asymmetrical social relationships, which are reproduced through an uneven distribution of material resources and symbolic power.

Rebuilding trust at the margins requires the careful cultivation of spaces where silences can be broken so that new shared vocabularies and habits can be formed (Lorde 1984). A first step toward creating such spaces involves an intentional redistribution of power and resources, beyond diversity and inclusion models of participation in hegemonic structures. Only then can we cultivate the collective capacity to rebuild trust, which may entail taking responsibility for prior and ongoing complicity with dominant structures.

Open dialogue about past and present harms creates opportunities for us to radically reshape our habits and ways of being together across difference. To that end, refusal as re-centering the margins also involves cultivating what Bergman and Montgomery (2017) call “common notions,” rather than a rigid set of best practices. As I explore below, common notions are a flexible and pragmatic set of shared sensibilities that guide us in fostering mutually enabling relationships through the enactment of values such as accountability, reciprocity, and embeddedness.

Redistributing Resources and Power

A key aspect of refusal involves cultivating relationships that are anchored in an intentional redistribution of resources and decision-making power (Benjamin 2016b). Technical solutions and tools often divert material resources and symbolic power away from directly impacted communities and into the hands of technocratic elites. When computational practices have a monopoly over what’s considered innovative or cutting-edge, we end up erasing a much wider array of community-led activities that are at the heart of transformational work (Benjamin 2019b). This problem cannot be solved by simply making data-driven processes more diverse and inclusive. As Anna Lauren Hoffmann explains, inclusion in technical design processes “normalizes the dependency on exclusive forms of expertise in ways that do not address but feed and maintain the potential for violence” (2020, 11). Refusal as re-centering the margins requires us to resist the terms of inclusion on offer and think through new modes of relating to one another outside of techno-centric theories of change.

This is not to say that data interventions do not have any place in social change efforts, but they should be conceived of as supplementary resources, rather than as the focal point of the work. Expert interventions might take on a supportive role in service of larger strategies for mass mobilization in social change processes (Spade 2015). For example, James Kilgore points out that social movements can recognize the value in data without necessarily being “data-driven” (Media Justice 2021). Kilgore is part of a collaborative data project called “Mapping E-carceration,” which tracks ongoing developments related to e-carceration around the country in order to support local and national organizing efforts. As his collaborator Eteng Ettah describes, the primary role of data in this project is to inspire more people to join the collective struggle against ever-expanding regimes of e-carceration. “Stories and narratives move folks to action,” Ettah argues. “This is data . . . directly pushing back against the notion that the lived experiences of black and brown folks do not matter, because they do” (ibid.).

Kilgore and Ettah’s use of the term data includes qualitative information and personal testimonials, which are valued for their affective and narrative potential. The use of such data inverts the default knowledge hierarchies implicit in most data regimes, which place quantitative data as the most highly valued form of information. This project illustrates the ways that community organizers have used an expansive notion of data as a resource for building collective power, rather than as a strategy for circumscribing the agenda and monopolizing resources for narrow technical interventions.

Technical experts can play a valuable role in supporting such grassroots strategies for data activism. For example, in 2018 CourtWatch MA launched a community data collection project that aimed to shift “the power dynamics in our courtrooms by exposing the decisions judges and prosecutors make about neighbors every day” (CourtWatch MA 2018). The project faced a number of challenges due to rules regarding outside data collection. For example, all observations had to be recorded by hand with pen and paper because it was prohibited for members of the public to enter the courtroom with digital devices such as cell phones or computers. This presented a massive administrative challenge for the organizers of CourtWatch MA, who then had to develop a strategy for digitizing and analyzing that data.

I was part of a small group of computational practitioners who supported this effort, which included developing a data entry and clean-up strategy and recruiting students from local universities to help with the tedious work of entering handwritten observations into the database. It’s important to note that this kind of technical support is not flashy and is unlikely to be considered “cutting-edge” in the field of computer science (Hope et al. 2019). As Terra Graziani and Mary Shi point out, “there is often a gap between the work that is rewarded by the academy and the work that most directly empowers communities” (2020, 399).

However, it is this kind of contribution that can actually move the needle on community-led engagements with data. The goals and strategies for CourtWatch MA were developed by community organizers, who then called on the support of technical experts to assist them in implementing their vision. Technical experts played a similar supporting role in campaigns against expanded surveillance infrastructure in places like San Diego. As Irani and Alexander (2021) explain, technology workers lent their expertise in order to counter official claims about what new surveillance systems could or couldn’t do. They engaged in tedious work, such as combing through government contracts, in order to identify specific threats and concerns regarding how surveillance data were used and by whom (ibid.). In this way, data expertise served as a practical resource for grassroots efforts, rather than as a tactic for seizing resources and control of the work.

Cultivating Common Notions

Refusal offers a relationship-centered approach to ethics that builds our collective capacity to create emergent alternatives to existing social conditions. A key aspect of this work involves undoing hegemonic desires to control processes of social change and addressing our complicity with oppressive social structures, so that space is created for more radical and collective forms of transformation.

Yet within the field of computation, the overwhelming tendency is for practitioners to envision ethics as a set of rigid rules and decontextualized best practices that leave oppressive modes of engagement intact. As Sareeta Amrute (2019, 57) argues, such frameworks are often the result of top-down efforts to “future proof” technologies against potential harm, yet such rules-based frameworks are wholly inadequate for addressing the types of relational and structural harms explored in this article. By contrast, the concept of “common notions” could offer an alternative approach to ethical data practice. Common notions are based on an inherently experimental and improvisational approach to social change that stems from a “constant working on each other” (Bergman and Montgomery 2017, 115). The essence of this process pivots on an ethics of encounter, or “holding a conversation rather than following a recipe” (Irani et al. 2010, 1317).

Bergman and Montgomery (2017) define common notions as an evolving set of shared values that arise out of frank (and often uncomfortable) conversations across difference. As Audre Lorde argues, “it is not difference which immobilizes us, but silence. And there are so many silences to be broken” (1984, 86). A first step toward developing common notions, then, involves breaking long-held silences so that we can engage in conversations which “enable us to see and feel the toxicity of some of our attachments” (Bergman and Montgomery 2017, 116) or the ways that we reproduce structures of domination in our work. Nagar and Shirazi talk about this in terms of cultivating “radical vulnerability,” or the process of “reminding ourselves and one another of the violent histories and geographies that we inherit and embody despite our desires to disown them” (2019, 239).

I have experienced the value of such conversations first-hand. For example, I was once part of a team of researchers who initiated a meeting with leaders from the Massachusetts Bail Fund in order to explore potential collaborations. During the meeting my team offered up a number of ideas on ways we might use data collected by the bail fund to run various studies. By and large, our ideas were met with silence. Toward the end of the meeting, Atara Rich-Shea, the executive director of the bail fund, leaned in and shared her frank perspective. She said the bail fund was frequently contacted by researchers who were only interested in asking their own questions and that, for the most part, those questions were harmful to the people she served. She went on to explain the ways that academics undermine the work of organizations like hers by pursuing research that either divided people into categories of “deserving and undeserving,” distracted from more pressing issues, or erased the violence of carceral policies.

Atara broke her initial silence during our brainstorm to tell us that the bail fund was not interested in sharing their data with us if there was no accountability for how that data would be used. She also challenged some of the language we’d been using throughout the conversation, pointing out the ways it reinforced harmful assumptions about the people she worked with. Atara’s comments were the beginning of many more fruitful conversations about how our group might begin to ask better questions and set up structures to ensure that our work was accountable to community organizations like hers. Over time, not only did the quality of our questions improve, but we became increasingly capable of identifying and undoing harmful tendencies in our work.

In order to create space for such conversations to occur, computational practitioners must rethink the default extractive modes of engagement that are inculcated in their field. One of the major appeals of computational work is that it provides the opportunity to traverse multiple high-stakes domains while engaging in large-scale problem-solving. In theory, the same basic computational methods might be used to develop a cure for malaria or to weigh in on important policy debates regarding public safety. Yet such interventions often ignore the nuances of local context and operate without concern for prior and ongoing grassroots efforts. Few computational practitioners take the time to cultivate relationships with directly impacted communities. And when they do, those interactions often lack the structures and commitments necessary to develop a shared sense of accountability and reciprocity in the encounter (Benjamin 2016b). As a result, computational practitioners miss valuable opportunities to learn about the ways they might break cycles of domination in their work and forge new kinds of mutually enabling relationships.

A foundational first step toward cultivating common notions is to prioritize long-term relationships with grassroots collaborators situated in local contexts. As Graziani and Shi explain, a commitment to long-term relationships means “moving at the speed, scale, and pace of community collaborators instead of according to the expectations of the academy” (2020, 409). Embedding oneself in a local community over a prolonged period of time enables computational practitioners to become attuned to the contextual and temporal nuances of their work so that they can adjust to changing circumstances.

Such long-term commitments lay the foundation for building and continuously renegotiating a shared vocabulary for the work. A key aspect of building a shared vocabulary involves recognizing the importance of bi-directional communication—it is not simply about familiarizing lay people with technical jargon, but also about familiarizing technical people with the liberatory terms and counter-concepts communities use to break free of discursive violence. Yet all too often, the latter is neglected or completely overlooked. When bi-directional communication is prioritized, it creates space for multiple forms of expression and types of knowledge to be valued and heard. This lays the groundwork for productive conversations in which a set of common notions can emerge.

Conclusion

The goal of this article has been to explore the liberatory potential of refusal as a framework for ethical engagement in data science. While significant effort and resources are spent on formalizing abstract notions of “fairness” in data, this work leaves a number of harms unexamined. Refusal expands the conversation beyond the “solution space,” to grapple with the ways that computational work obscures violence and reproduces structures of domination. Rather than frame the expansion of data regimes and technological capacity as an inherent good to be maximized, the concept of refusal embraces boundaries as productive and beneficial.

But refusal is more than just an exit strategy. In its most potent form, refusal operates as a framework for renegotiating the terms of engagement, creating space for new kinds of relationships to emerge within radically different structures of commitment and accountability. Rather than adhering to a fixed set of rules, the goal of refusal is to cultivate an ethics of encounter that enables us to grow in our collective capacity to experiment, learn, and build new worlds together. In this way, refusal offers an entry point into a transformative process of becoming otherwise, to break free from overdetermined notions of the probable or the practical in order to enact the possible.

Acknowledgements

Many thanks to Shreya Chowdhary, Erhardt Graeff, Sarah Hamid, Anna Lauren Hoffmann, Os Keyes, James Kilgore, Beth Semel, Justin Steil, Lucy Suchman, Danielle Wood, Jonathan Zong, Ethan Zuckerman, and three anonymous reviewers for insightful comments and feedback that informed this work. In addition, this paper would not have been possible without key conversations during my 2020–2021 Tech and Human Rights Fellowship at Harvard’s Carr Center for Human Rights Policy and a generous paper workshop discussion at Data and Society in the summer of 2021.

Author Biography

Chelsea Barabas is a PhD candidate in the Media, Arts, and Sciences program at MIT.

References

Ahmed, Sara. 2017. “Institutional As Usual.” feministkilljoys, October 24, 2017. Accessed July 11, 2021.
https://feministkilljoys.com/2017/10/24/institutional-as-usual/.

Amrute, Sareeta. 2019. “Of Techno-Ethics and Techno-Affects.” Feminist Review 123(1): 56–73.
https://doi.org/10.1177/0141778919879744.

Bales, William, Karen Mann, Thomas Blomberg, Gerry Gaes, et al. 2010. “A Quantitative and Qualitative Assessment of Electronic Monitoring.” Report submitted to the US Department of Justice, NCJ 230530. Accessed August 24, 2022. https://www.ojp.gov/pdffiles1/nij/grants/230530.pdf.

Ball, Molly. 2015. “Do the Koch Brothers Really Care About Criminal-Justice Reform?” The Atlantic, March 3, 2015. Accessed July 12, 2021.
https://www.theatlantic.com/politics/archive/2015/03/do-the-koch-brothers-really-care-about-criminal-justice-reform/386615/.

Barabas, Chelsea. 2020a. “Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law.” Georgetown Journal of Law & Modern Critical Race Perspectives 12(2): 83–112.
http://doi.org/10.2139/ssrn.3377921.

Barabas, Chelsea. 2020b. “To Build a Better Future, Designers Need to Start Saying ‘No.’” Medium, October 20, 2020. Accessed July 12, 2021.
https://onezero.medium.com/refusal-a-beginning-that-starts-with-an-end-2b055bfc14be.

Barabas, Chelsea, Audrey Beard, Theodora Dryer, Beth Semel, et al. 2020a. “Abolish the #TechToPrisonPipeline.” Coalition for Critical Technology, June 23, 2020. Accessed July 8, 2021.
https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16.

Barabas, Chelsea, Colin Doyle, J. B. Rubinovitz, and Karthik Dinakar. 2020b. “Studying Up: Reorienting the Study of Algorithmic Fairness around Issues of Power.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 167–176.
https://doi.org/10.1145/3351095.3372859.

Benjamin, Ruha. 2016a. “Racial Fictions, Biological Facts: Expanding the Sociological Imagination through Speculative Methods.” Catalyst: Feminism, Theory, Technoscience 2(2): 1–28.
https://doi.org/10.28968/cftt.v2i2.28798.

Benjamin, Ruha. 2016b. “Informed Refusal: Toward a Justice-Based Bioethics.” Science, Technology, & Human Values 41(6): 967–990.
https://doi.org/10.1177/0162243916656059.

Benjamin, Ruha, ed. 2019a. Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Durham, NC: Duke University Press.
https://doi.org/10.1215/9781478004493.

Benjamin, Ruha. 2019b. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.

Bergman, Carla, and Nick Montgomery. 2017. Joyful Militancy: Building Thriving Resistance in Toxic Times. Chico, Oakland, Edinburgh, Baltimore: AK Press.

Brayne, Sarah. 2014. “Surveillance and System Avoidance: Criminal Justice Contact and Institutional Attachment.” American Sociological Review 79(3): 367–391.
https://doi.org/10.1177/0003122414530398.

Cech, Erin A. 2013. “The (Mis)Framing of Social Justice: Why Ideologies of Depoliticization and Meritocracy Hinder Engineers’ Ability to Think about Social Injustices.” In Engineering Education for Social Justice: Critical Explorations and Opportunities, edited by Juan Lucena, 67–84. Dordrecht: Springer Netherlands.
https://doi.org/10.1007/978-94-007-6350-0_4.

Cech, Erin A., and Heidi M. Sherick. 2015. “Depoliticization and the Structure of Engineering Education.” In International Perspectives on Engineering Education. Engineering Education and Practice in Context, Volume 1, edited by Steen H. Christensen, Christelle Didier, Andrew Jamison, Martin Meganck, et al., 203–216. Cham: Springer.
https://doi.org/10.1007/978-3-319-16169-3_10.

Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5(2): 153–163.
https://doi.org/10.1089/big.2016.0047.

Christin, Angèle. 2017. “Algorithms in Practice: Comparing Web Journalism and Criminal Justice.” Big Data & Society 4(2): 1–14.
https://doi.org/10.1177/2053951717718855.

Cifor, Marika, Patricia Garcia, T. L. Cowan, Jasmine Rault, et al. 2019. “Feminist Data Manifest-No.” Accessed July 7, 2021.
https://www.manifestno.com/home.

Comey, James. 2015. “Law Enforcement and Race Relations.” C-SPAN, February 12, 2015. Accessed July 11, 2021.
https://www.c-span.org/video/?324342-1/fbi-director-james-comey-law-enforcement-race-relations.

CourtWatch MA. 2018. “First 100 Days.” CourtWatch MA: Blog. Accessed July 12, 2021.
https://www.courtwatchma.org/first-100-days.html.

Crenshaw, Kimberlé W. 2012. “From Private Violence to Mass Incarceration: Thinking Intersectionally About Women, Race, and Social Control.” UCLA Law Review 59(6): 1418–1472. Accessed August 15, 2022.
https://www.uclalawreview.org/pdf/59-6-1.pdf.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge, MA: The MIT Press.
https://doi.org/10.7551/mitpress/11805.001.0001.

Dolovich, Sharon. 2011. “Exclusion and Control in the Carceral State.” Berkeley Journal of Criminal Law 16(2): 259–339.
https://doi.org/10.15779/Z383G8P.

Dutta, Mohan, and Mahuya Pal. 2010. “Dialog Theory in Marginalized Settings: A Subaltern Studies Approach.” Communication Theory 20(4): 363–386.
https://doi.org/10.1111/j.1468-2885.2010.01367.x.

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.

Friedman, Brittany. 2021. “Toward a Critical Race Theory of Prison Order in the Wake of Covid-19 and Its Afterlives: When Disaster Collides with Institutional Death by Design.” Sociological Perspectives 64(5): 689–705.
https://doi.org/10.1177/07311214211005485.

Fryer, Roland G., Jr. 2016. “An Empirical Analysis of Racial Differences in Police Use of Force.” NBER Working Paper 22399, 1–55. Cambridge, MA: National Bureau of Economic Research.
http://doi.org/10.3386/w22399.

Gangadharan, Seeta P. 2020. “Context, Research, Refusal: Perspectives on Abstract Problem-Solving.” Our Data Bodies: Blog. April 30, 2020. Accessed July 8, 2021.
https://www.odbproject.org/2020/04/30/context-research-refusal-perspectives-on-abstract-problem-solving/.

Gilmore, Ruth W. 2002. “Fatal Couplings of Power and Difference: Notes on Racism and Geography.” The Professional Geographer 54(1): 15–24.
https://doi.org/10.1111/0033-0124.00310.

Gilmore, Ruth W. 2007. Golden Gulag: Prisons, Surplus, Crisis, and Opposition in Globalizing California. American Crossroads. Berkeley: University of California Press.

Gilmore, Ruth W. 2014. Foreword to The Struggle within: Prisons, Political Prisoners, and Mass Movements in the United States, by Dan Berger. Oakland, CA: PM Press.

Gilmore, Ruth W. 2017. “Abolition Geography and the Problem of Innocence.” In Futures of Black Radicalism, edited by Gaye T. Johnson and Alex Lubin, 224–241. Brooklyn: Verso.

Graeff, Erhardt. 2020. “The Responsibility to Not Design and the Need for Citizen Professionalism.” Tech Otherwise, May 25, 2020: 1–5.
https://doi.org/10.21428/93b2c832.c8387014.

Graziani, Terra, and Mary Shi. 2020. “Data for Justice: Tensions and Lessons from the Anti-Eviction Mapping Project’s Work Between Academia and Activism.” ACME: An International Journal for Critical Geographies 19(1): 397–412.
https://acme-journal.org/index.php/acme/article/view/1776.

Harris, David A. 2003. “The Reality of Racial Disparity in Criminal Justice: The Significance of Data Collection.” Law and Contemporary Problems 66(3): 71–98.
https://scholarship.law.duke.edu/lcp/vol66/iss3/4.

Hartman, Saidiya V. 1997. Scenes of Subjection: Terror, Slavery, and Self-Making in Nineteenth-Century America. New York and Oxford: Oxford University Press.

Hoffmann, Anna L. 2020. “Terms of Inclusion: Data, Discourse, Violence.” New Media & Society 23(12): 3539–3556.
https://doi.org/10.1177/1461444820958725.

Hoffmann, Anna L. 2021. “Even When You Are a Solution You Are a Problem: An Uncomfortable Reflection on Feminist Data Ethics.” Global Perspectives 2(1).
https://doi.org/10.1525/gp.2021.21335.

Honig, Bonnie. 2021. A Feminist Theory of Refusal. Cambridge, MA: Harvard University Press.
https://doi.org/10.4159/9780674259249.

hooks, bell. 1989. “Choosing the Margin as a Space of Radical Openness.” Framework: The Journal of Cinema and Media 36: 15–23.
https://www.jstor.org/stable/44111660.

hooks, bell. 1990. “Marginality as a Site of Resistance.” In Out There: Marginalization and Contemporary Cultures, edited by Russell Ferguson and Trinh T. Minh-ha, 241–243. Cambridge, MA: MIT Press.

Hope, Alexis, Catherine D’Ignazio, Josephine Hoy, Rebecca Michelson, et al. 2019. “Hackathons as Participatory Design: Iterating Feminist Utopias.” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. Glasgow: Association for Computing Machinery (ACM).
https://doi.org/10.1145/3290605.3300291.

Hu, Lily. 2021. “Race, Policing, and the Limits of Social Science.” Boston Review, May 6, 2021. Accessed July 9, 2021.
http://bostonreview.net/science-nature-race/lily-hu-race-policing-and-limits-social-science.

Irani, Lilly, and Khalid Alexander. 2021. “The Oversight Bloc.” Logic Magazine, December 25, 2021. Accessed August 24, 2022.
https://logicmag.io/beacons/the-oversight-bloc/.

Irani, Lilly, Janet Vertesi, Paul Dourish, Kavita Philip, et al. 2010. “Postcolonial Computing: A Lens on Design and Development.” Proceedings of the 28th International Conference on Human Factors in Computing Systems—CHI ’10, 1311–1320. Atlanta: ACM Press.
https://doi.org/10.1145/1753326.1753522.

Kaba, Mariame, ed. 2020. “What’s Next? Safer and More Just Communities Without Policing.” Interrupting Criminalization: Research in Action. Project NIA. Accessed July 12, 2021.
https://view.publitas.com/interrupting-criminalization-byekyy37zyrk/whats-next-safer-and-more-just-communities-without-policing/page/1.

Kaba, Mariame. 2021. We Do This ’Til We Free Us: Abolitionist Organizing and Transforming Justice. Chicago: Haymarket Books.

Kahn, Natalie L., and Simon J. Levien. 2021. “SEAS Cancels Class on Controversial Policing Strategy After Student Petition.” The Harvard Crimson, January 26, 2021. Accessed March 1, 2022.
https://www.thecrimson.com/article/2021/1/26/seas-cancels-policing-course/.

Katz, Yarden. 2020. Artificial Whiteness: Politics and Ideology in Artificial Intelligence. New York: Columbia University Press.

Kilgore, James. 2017. “Electronic Monitoring: A Survey of the Research for Decarceration Activists.” Challenging E-Carceration report, July 2017. Accessed August 15, 2022.
http://www.realcostofprisons.org/writing/kilgore-survey-of-em-research.pdf.

Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Ashesh Rambachan. 2018. “Algorithmic Fairness.” AEA Papers and Proceedings 108: 22–27.
https://doi.org/10.1257/pandp.20181018.

Kubiak, Sheryl P., Woo Jong Kim, Gina Fedock, and Deborah Bybee. 2015. “Testing a Violence-Prevention Intervention for Incarcerated Women Using a Randomized Control Trial.” Research on Social Work Practice 25(3): 334–348.
https://doi.org/10.1177/1049731514534300.

Lacy, Akela, Alice Speri, Jordan Smith, and Sam Biddle. 2020. “Prisons Launch ‘Absurd’ Attempt to Detect Coronavirus in Inmate Phone Calls.” The Intercept, April 21, 2020. Accessed July 12, 2021.
https://theintercept.com/2020/04/21/prisons-inmates-coronavirus-monitoring-surveillance-verus/.

LEO Technologies. 2020a. “LEO Technologies and Verus: Supporting Our Nation’s Correctional Facilities During the Covid-19 Pandemic.” LEO Technologies: Blog. March 19, 2020. Accessed July 12, 2021.
https://leotechnologies.com/leo-technologies-and-verus-supporting-our-nations-correctional-facilities-during-the-covid-19-pandemic/.

LEO Technologies. 2020b. “What Is LEO Technologies?” LEO Technologies: Blog. June 29, 2020. Accessed July 12, 2021.
https://leotechnologies.com/what-is-leo-technologies/.

LEO Technologies. n.d. “How Verus Works.” Accessed July 12, 2021.
https://leotechnologies.com/services/verus/.

Lorde, Audre. 1984. Sister Outsider: Essays and Speeches. Berkeley: Crossing Press.

McGranahan, Carole. 2016. “Theorizing Refusal: An Introduction.” Cultural Anthropology 31(3): 319–325.
https://doi.org/10.14506/ca31.3.01.

Media Justice. 2021. Points of Connection: Mapping Electronic Monitoring to Challenge E-Carceration. Virtual event held on March 23, 2021. Video, 1:35:21. Accessed July 12, 2021.
https://www.youtube.com/watch?v=w_j-r8xqIZM.

Melamed, Jodi. 2019. “Operationalizing Racial Capitalism: Administrative Power and Ordinary Violence.” Talk at Yale University, filmed on October 31, 2019. Video, 1:35:17. Accessed July 12, 2021.
https://www.youtube.com/watch?v=o3Z9sOGf6BA&t=2675s.

Muhammad, Khalil G. 2015. “The Condemnation of Blackness.” Khalil Gibran Muhammad book talk at John Jay College of Criminal Justice, filmed on May 6, 2015. Video, 1:43:20. Accessed July 11, 2021.
https://www.youtube.com/watch?v=STKb-ai6874.

Muhammad, Khalil G. 2019. The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America. Cambridge, MA: Harvard University Press.

Murakawa, Naomi. 2014. The First Civil Right: How Liberals Built Prison America. New York and London: Oxford University Press.

Nagar, Richa, and Roozbeh Shirazi. 2019. “Radical Vulnerability.” In Keywords in Radical Geography: Antipode at 50, edited by Antipode Editorial Collective, Tariq Jazeel, Andy Kent, Katherine McKittrick, et al., 236–242. Hoboken, NJ: John Wiley & Sons, Inc.
https://doi.org/10.1002/9781119558071.ch44.

Naples-Mitchell, Katherine. 2021. “Fool’s Gold: How RCT Research Harms Communities Impacted by Criminal Punishment.” Charles Hamilton Houston Institute for Race & Justice, January 26, 2021. Accessed July 11, 2021.
https://charleshamiltonhouston.org/news/2021/01/fools-gold-how-rct-research-harms-communities-impacted-by-criminal-punishment/.

Norris, Samuel, Matthew Pecenco, and Jeffrey Weaver. 2021. “The Effects of Parental and Sibling Incarceration: Evidence from Ohio.” American Economic Review 111(9): 2926–2963.
https://doi.org/10.1257/aer.20190415.

North, Sandy. 2020. “New Study! Evaluating Counsel at First Appearance in Hays County, TX.” The Access to Justice Lab. July 7, 2020. Accessed July 11, 2021.
https://a2jlab.org/new-study-evaluating-counsel-at-first-appearance-in-hays-county-tx/.

Onuoha, Mimi. 2020. “When Proof Is Not Enough: Throughout History, Evidence of Racism Has Failed to Effect Change.” FiveThirtyEight: Blog. July 1, 2020. Accessed July 9, 2021.
https://fivethirtyeight.com/features/when-proof-is-not-enough/.

Prins, Seth J., and Adam Reich. 2018. “Can We Avoid Reductionism in Risk Reduction?” Theoretical Criminology 22(2): 258–278.
https://doi.org/10.1177/1362480617707948.

Raji, Inioluwa D., Morgan K. Scheuerman, and Razvan Amironesei. 2021. “You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 515–525. Virtual Event Canada: Association for Computing Machinery (ACM).
https://doi.org/10.1145/3442188.3445914.

Roberts, Dorothy E. 2019. “Digitizing the Carceral State.” Harvard Law Review 132: 1695–1713. Accessed August 24, 2022.
https://harvardlawreview.org/2019/04/digitizing-the-carceral-state/.

Saloner, Brendan, Kalind Parish, Julie A. Ward, Grace DiLaura, et al. 2020. “Covid-19 Cases and Deaths in Federal and State Prisons.” Journal of the American Medical Association 324(6): 602–603.
https://doi.org/10.1001/jama.2020.12528.

Schneider, Avie, and Laura Sydell. 2019. “Microsoft Workers Protest Army Contract With Tech ‘Designed To Help People Kill.’” National Public Radio, business section, February 22, 2019. Accessed March 1, 2022.
https://www.npr.org/2019/02/22/697110641/microsoft-workers-protest-army-contract-with-tech-designed-to-help-people-kill.

Schnepel, Kevin T. 2020. “Covid-19 in U.S. State and Federal Prisons.” December 2020 Update. Washington DC: Council on Criminal Justice, December 2020. Accessed August 24, 2022.
https://build.neoninspire.com/counciloncj/wp-content/uploads/sites/96/2021/07/COVID-19-in-State-and-Federal-Prisons-December-Update-2.pdf.

Schrader, Stuart. 2019. Badges without Borders: How Global Counterinsurgency Transformed American Policing. American Crossroads 56. Oakland, CA: University of California Press.

Shah, Nishant. 2015. “Networked Margins: Revisiting Inequality and Intersection.” In Digitally Connected: Global Perspectives on Youth and Digital Media, edited by Sandra Cortesi, Urs Gasser, Gameli Adzaho, Bruce Baikie, et al., 9–12. Berkman Center Research Publication Number 2015–6. Accessed March 1, 2022.
http://www.ssrn.com/abstract=2585686.

Simpson, Audra. 2014. Mohawk Interruptus: Political Life across the Borders of Settler States. Durham, NC: Duke University Press.

Spade, Dean. 2015. Normal Life: Administrative Violence, Critical Trans Politics, and the Limits of Law. Revised and Expanded edition. Durham, NC: Duke University Press.

Stanford Computational Policy Lab. n.d. “Driving Social Impact through Technical Innovation.” Stanford Computational Policy Lab. Accessed April 2, 2019.
https://policylab.stanford.edu/.

Stevenson, Megan T. 2017. “Assessing Risk Assessment in Action.” Minnesota Law Review 103: 303–371.
https://doi.org/10.2139/ssrn.3016088.

Stoller, Kristin. 2019. “Texas Billionaire John Arnold Gives $39 Million To Reform America’s Broken Bail System.” Forbes, March 19, 2019. Accessed July 12, 2021.
https://www.forbes.com/sites/kristinstoller/2019/03/19/texas-billionaire-john-arnold-gives-39-million-to-reform-americas-broken-bail-system/.

Stop LAPD Spying Coalition. 2018. “Dismantling Predictive Policing in Los Angeles.” May 8, 2018. Accessed July 11, 2021.
https://stoplapdspying.org/wp-content/uploads/2018/05/Before-the-Bullet-Hits-the-Body-May-8-2018.pdf.

Sutherland, Tonia. 2019. “The Carceral Archive: Documentary Records, Narrative Construction, and Predictive Risk Assessment.” Journal of Cultural Analytics 4(1): 1–22.
https://doi.org/10.22148/16.039.

Thompson, Jonathan. 2020. “National Sheriffs’ Association Teams with LEO Technologies on Covid-19 Industry Action Group for Correctional Facilities.” National Sheriffs’ Association, April 14, 2020. Accessed July 12, 2021.
https://www.sheriffs.org/National-Sheriffs%E2%80%99-Association-Teams-LEO-Technologies-COVID-19-Industry-Action-Group-for.

Tuck, Eve, and K. Wayne Yang. 2014a. “R-Words: Refusing Research.” In Humanizing Research: Decolonizing Qualitative Inquiry with Youth and Communities, edited by Django Paris and Maisha T. Winn. Thousand Oaks: SAGE Publications, Inc.
https://dx.doi.org/10.4135/9781544329611.n12.

Tuck, Eve, and K. Wayne Yang. 2014b. “Unbecoming Claims: Pedagogies of Refusal in Qualitative Research.” Qualitative Inquiry 20(6): 811–818.
https://doi.org/10.1177/1077800414530265.

Wakabayashi, Daisuke, and Scott Shane. 2018. “Google Will Not Renew Pentagon Contract That Upset Employees.” The New York Times, June 1, 2018. Technology Section. Accessed March 1, 2022.
https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.

Wang, Jackie. 2018. Carceral Capitalism. South Pasadena: Semiotext(e).

Weld, Kirsten. 2014. Paper Cadavers: The Archives of Dictatorship in Guatemala. American Encounters/Global Interactions Series. Durham, NC: Duke University Press.

Wright, Sarah. 2018. “When Dialogue Means Refusal.” Dialogues in Human Geography 8(2): 128–32.
https://doi.org/10.1177/2043820618780570.

Zahara, Alex. 2016. “Ethnographic Refusal: A How to Guide.” Discard Studies, August 8, 2016. Accessed July 11, 2021.
https://discardstudies.com/2016/08/08/ethnographic-refusal-a-how-to-guide/.

Zong, Jonathan. 2020. “From Individual Consent to Collective Refusal: Changing Attitudes toward (Mis)Use of Personal Data.” XRDS: Crossroads, The ACM Magazine for Students 27(2): 26–29.
https://doi.org/10.1145/3433140.

Zong, Jonathan, and J. Nathan Matias. 2020. “Building Collective Power to Refuse Harmful Data Systems.” Citizens and Technology Lab: Blog. August 12, 2020. Accessed July 8, 2021.
https://citizensandtech.org/2020/08/collective-refusal/.

Notes

  1. Given the broadness of the term “computational practitioners,” it is important to note that there is significant variation in the level of privilege and agency that individuals have access to when considering refusal as a course of action. For example, a software engineer at Google is likely to have much more leverage in a conversation regarding the design of a new digital platform than an hourly wage laborer doing manual data entry on that same platform. The price of walking away from such a project would also be much more burdensome for the hourly wage laborer. In this article, I am primarily concerned with exploring the potential of refusal for relatively privileged computational practitioners. This sub-group is quite large and can include highly paid salaried workers, as well as students and early-career researchers.

Copyright, Citation, Contact

Copyright © 2022. (Chelsea Barabas). Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Available at estsjournal.org.

To cite this article: Barabas, Chelsea. 2022. “Refusal in Data Ethics: Re-Imagining the Code Beneath the Code of Computation in the Carceral State.” Engaging Science, Technology, and Society 8(2): 35–57.
https://doi.org/10.17351/ests2022.1233.

To contact Chelsea Barabas, email: cbarabas@mit.edu.