THE COLLEGE HILL INDEPENDENT


Beyond Screens

In conversation with Khiara Bridges and Virginia Eubanks

by Julia Rock & Lily Meyersohn

Illustration by Mariel Solomon

published March 1, 2019


content warning: maternal death

 

"I’m literally dying,” Lashonda Hazard posted on Facebook early last month. The otherwise-healthy, pregnant 27-year-old had visited one of Providence’s premier hospitals, Women & Infants, that day, complaining of excruciating abdominal pain. Physicians checked on the pregnancy but sent Hazard home. Though she returned to the hospital the next day, she and her unborn baby died of severe preeclampsia while awaiting treatment. Providence is no exception to a national rule: in the US, Black women are 243 percent more likely to die from pregnancy or childbirth compared to their white counterparts.

When asked how we can avert preventable maternal deaths like Hazard’s, Dr. Khiara Bridges, who studies healthcare, reproduction, and racial justice, suggested that the answer often does not lie in new legislation like the Preventing Maternal Deaths Act, which was passed into law last December as one attempt to ameliorate this pressing issue. According to Bridges, that act primarily increases information collection and data analysis as opposed to embracing more sweeping changes inside the hospital or outside its walls. “As if information will save us,” Bridges noted last week at a talk on theorizing racism in healthcare.  “Information isn’t going to get us out of the racism problem.”

 

+++

 

This month, we had the opportunity to speak with Dr. Bridges and Dr. Virginia Eubanks, who both visited Providence to speak at Brown University. Eubanks and Bridges consider themselves intellectual peers; they have sat on panels together and write and research in overlapping fields. Others have brought Bridges and Eubanks together before, but the following side-by-side interviews allow the two to speak on their own terms.

Both scholars discuss how federal, state, and local governments in the US collect and use data to make policy decisions, and how current iterations of data collection and automated decision-making have increased the surveillance of people who receive public services. Bridges and Eubanks argue this heightened surveillance disproportionately and punitively impacts already marginalized people behind a facade of “neutral” technologies. We point to Lashonda Hazard’s death, then, as a horrifying example of how income inequality and structural racism operate in this country and in Providence: harms that the tools of information cannot remedy, and that those tools in fact actively perpetuate and exacerbate.

 

+++

 

PART I, in conversation with Dr. Khiara Bridges, a professor of law and anthropology and Associate Dean of Equity, Engagement & Justice at Boston University.

 

The College Hill Independent: You’ve spoken about the way that anthropological ethnography, as you conduct it, can be anti-colonial, allowing you to understand “the violence of racism, classism, xenophobia, and heteronormativity.” By knowing the violences, you can share them with audiences that are unfamiliar with them.

The violences you cover in your most recent book, The Poverty of Privacy Rights, are privacy violations. You theorize that low-income mothers don’t hold the same privacy rights as wealthy mothers for several reasons, especially because of one form of government intervention: “informational” privacy violations. Can you explain how that informational aspect functions?

 

KB: There’s inequality with regard to the information we’re collecting that produces an unequal and unjust outcome. The information has led policy makers to believe that a particular population requires a range of services, and should be coerced to receive those services by financial officers and health educators. The information has led policy makers to believe that this population is at risk of various negative health outcomes, as well as at risk of abusing and neglecting their children. Privately insured people enjoy more privacy. They have the latitude to keep information to themselves, so we don’t have the knowledge about those populations. It could be—and it’s likely—that privately insured people are at risk for the same things that poor populations are at risk for, but we’re missing it.

 

The Indy: And how does that dynamic manifest in healthcare?

 

KB: If you’re not finding gonorrhea in a woman at 36 weeks’ gestation because she’s insured, then you don’t know the rates at which insured women have gonorrhea or chlamydia at 36 weeks’ gestation. You’re imagining risks in the poor, so you’re catching them because you’re testing for them, but you’re not imagining risk in the wealthy because you’re not testing for them. We end up creating the world that we imagine. I talk about it in terms of a figure. There’s a figure produced by the information that we’ve gathered from poor people, and the healthcare they receive is designed for that figure. But many people live lives inconsistent with the figure. So it creates, essentially, bad healthcare.

 

The Indy: These questions about informational privacy are tied to surveillance, which increasingly takes genomic forms (largely through the collection of DNA in police databases). How large does the problem of genomic science loom in your mind?

 

KB: A lot of folks who are concerned with racial justice are concerned about genomics, in part because people who are helming genomic studies are not being steeped in critical thought. They end up recreating the assumptions that their research might ultimately dispute if it were done correctly. Some of this genomics research is being offered as proving the existence of biological races. And part of the reason it’s being offered that way is because the folks conducting the research believe in biological races. They’re looking for five races; they’re dividing the data in terms of Black is distinct from white is distinct from Asian is distinct from Indigenous. Then it’s taken as truth because it’s performed by people who are involved in a discipline that has been constructed as the apotheosis of truth.

 

The Indy: Yeah—since the ’70s now, geneticists have repeatedly proven that race is not biological. That knowledge hasn’t stuck. This repeated refusal reminds me that so many Americans similarly fail to internalize that poverty is not due to biology, pathology, or behavior. Dr. Eubanks talks about “mistaking parenting while poor for poor parenting.”

 

KB: Because we have a moralized construction of poverty, we’re okay with penalizing for parenting while poor. I see it in my own work: a lot of the pathology that poor people are imagined to have, if it’s found, people feel comfortable imagining that it’s genetically determined. People feel comfortable in thinking that the reason that Black women are three to four times more likely to die during childbirth than white women is because there must be some genetic disposition towards…death, as opposed to interrogating a society that makes it deadly to be a person of color and then to try to reproduce.

Biological race is diverting our attention from the things that are actually making people of color’s lives shorter and less healthy. We’re focused on some imagined gene that is shortening and reducing the quality of people of color’s lives.

 

The Indy: You studied the role that race plays in the differential treatment of expectant mothers within the New York City medical setting in your first book, Reproducing Race: An Ethnography of Pregnancy as a Site of Racialization. The National Institutes of Health sometimes mandates that race be used to group study participants, although those studies don’t define race consistently, or define it only vaguely. Going forward, how should racial categories be used in biomedicine?

 

KB: I think race certainly influences a person’s health, but not because there’s a gene specific to certain races. As Dorothy Roberts puts it, “race is a political category that has biological consequences.” Race is important in health research because we need to know how this social construct is impacting people’s biology; we need to know what our society is doing to people. I think the answer is having people who are trained in critical thought—and when I say critical thought, I mean people who are trained in thinking about the various ways we’ve thought about race over the centuries. With the track that leads into genomics research, you can avoid all sorts of important classes. We have to change the curriculum so population geneticists aren’t replicating the idea of race being in the biology of individuals.

 

The Indy: In the meantime, then, what do you wish we knew?

 

KB: I think what we’re all missing is a really robust theory of the relationship between structural racism and individual bias. When medical schools, for example, are thinking about what they can do to eliminate these disparities, they think in terms of individual bias. The hope is that we will have different outcomes if providers are aware. It’s definitely laudable; I would never say you shouldn’t be doing that. But it’s not all that ought to be done. Implicit biases happen in underfunded, overburdened public hospitals; in a society in which people of color, because of inherited disadvantage, have a hard time getting into medical schools and law schools and colleges and even high schools relative to their white counterparts.

 

The Indy: But do you see palpable change being enacted right now? Maybe the increased attention towards Black maternal mortality rates, for instance, is one positive development?

 

KB: I like to remind folks that the Black maternal mortality rate (MMR) is currently three to four times the white rate, but that disparity hasn’t changed over the years. Pre-Civil Rights Movement—we’re talking about the ’50s, ’40s, ’30s—the gap [between Black MMR and white MMR] was that high. So we have to query what about our socio-political present has made the issue more visible. Visibility is insufficient if it’s visibility for visibility’s sake.

 

The Indy: Why is MMR making the news, then?

 

KB: We’re talking about pregnant women. Feminists have made this argument since time immemorial: women are only valuable and valued when they’re reproducing. When we’re faced with the fact that women are dying when they’re valuable, we care. But when a Black man has hypertension at the age of 40, that’s not a valuable person right there.

Ultimately, what sort of commitments do we have to reducing [MMR]? I feel strongly in saying I doubt we’re committed to making the world a healthier place for people of color because it takes hard work—and a lot of transformation. There’s good evidence that stress has contributed to Black maternal mortality rates. It’s stressful being a person of color—being embodied as a Black woman. So are we committed to making being a Black woman less stressful?

 

+++

 

PART II, in conversation with Dr. Virginia Eubanks, associate professor of political science at the University at Albany, SUNY, and founding member of the Popular Technology Workshops and Our Knowledge, Our Power, a grassroots welfare rights and anti-poverty organization.

 

 

The Indy: In your most recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, you write about welfare eligibility systems, homeless services, and a predictive risk model for child abuse in Indiana, Los Angeles, and Allegheny County, PA, respectively. What led you to undertake this investigation into automated decision-making and the distribution of public services?

 

Dr. Virginia Eubanks: I come from a background in the community media and technology movements as well as welfare rights movements. Starting in the mid-’90s, I did work around building collaborative access points to technology in poor and working class neighborhoods. The way I worked started to shift when I got involved in a residential YWCA in my hometown of Troy, New York, which offered low-cost housing for 90 poor and working class women, where we built collaborative technology tools together. At some point this incredibly generous and smart community of women sat me down and said, “All the questions you’re asking don’t have anything to do with our lives.” And so I said, “How can I do better?” And they said, “You know, you assume we don’t have any interactions with technology in our day-to-day lives, and that’s totally untrue.” They come into contact with technology in the criminal justice system, in their neighborhoods, in low-wage workplaces, and, most importantly, in the public assistance office.

It reshaped how I think about the relationship between technology and economic and racial justice. It upended this popular idea at the time, the digital divide: that the big problem was access to technology. And it repointed my vision to the ways that our most innovative technologies in poor and working class neighborhoods often act to exploit and punish people rather than to connect them to a new world of opportunity.

 

The Indy: “Eugenics created the first database of the poor,” you write, explaining the relationship between the use of “scientific charity” and the eugenics movement. What does this historical trajectory illustrate about today’s digital poorhouse?

 

VE: The first [historical] moment that is important to understand is in 1819, when there’s this huge economic depression. There’s organizing by poor and working people for their survival and their rights, which economic elites respond to by commissioning studies that ask the question: Is the problem poverty? Or is it what they called at the time “pauperism,” dependence on public benefits? Not surprisingly, it came back that the problem was not poverty, the problem was dependence on public benefits. So they invented this new technology to manage people’s access to public resources: the actual brick-and-mortar county poorhouse. This was an institution that was supposed to raise the barrier to receiving public benefits so high that nobody but the most desperate people would ask for assistance, because to get public benefits, you had to “voluntarily” enter the poorhouse. That’s the moment we decided as a nation that the first thing public assistance should do is make a moral diagnosis and decide whether you’re “deserving” or “undeserving,” creating this punitive system. Before providing any kind of assistance, this new class of people called caseworkers—originally police officers, but then young, college-educated white women who lacked other employment opportunities—were used to explore every aspect of a family’s life in order to decide whether or not helping them would be “morally proper.” This was called scientific charity.

Those assumptions are deeply built into the tools we see today. It doesn’t take bad intention to reproduce those ideas. It just takes amplifying and speeding up the process we already have, which often relies on these flawed, racist, classist, and sexist assumptions about what creates poverty.

 

The Indy: It strikes me that your critique might be applicable to issues like climate change or healthcare: issues that politicians and entrepreneurs see as innovation problems but are really political problems. Is this a useful comparison?

 

VE: One of the questions that comes up almost every time I’ve done a talk is, “Isn’t the problem the intention of the designers or the intention of their users?” This gets to the heart of technocracy, this idea that tools carry no values. That we are completely in control of them and it’s just about our intentions. This idea that digital tools are somehow blank doesn’t make sense outside of an ideology that is about technologies being apolitical solutions to social problems. [Digital tools] evolved in a particular system, in a particular culture, to do particular things, and these tools have evolved to solve political problems. We have a tendency to reach for technological solutions to big social problems when we’re trying to avoid a political conversation that we need to be having, and I believe that at worst we use these technological tools to avoid having those conversations.

 

The Indy: Does a focus on technological solutions get in the way of a conversation about political problems?

 

VE: At their worst, the kinds of tools that I write about [allow] us to ignore some of the most pressing problems we face as a country because they produce this neutral, objective face. I talk to designers about two very basic questions to ask themselves before they start designing one of these tools. The first is, does the tool increase or support poor and working people’s self-determination or dignity? And the second is, if it were designed for anyone else besides poor and working people, would it be accepted? Would it be tolerated? I think we need to think very consciously and purposely about building tools that embody all of our values, not just the values of efficiency and cost savings, or the values of accuracy or fairness, but our deeper democratic values. We assume that if tools are neutral and objective, that is the same thing as being fair and just. But if you build a tool in neutral, you’re building a tool for the status quo, and the status quo in the US is quite troubling. And if we don’t build what I think of as certain “equity gears” into machines, then they will automate and reproduce the kinds of inequalities we already see.