THE COLLEGE HILL INDEPENDENT


A Recruit Reads the Fine Print

What does it mean to “change the world” in tech?

by Jessica Dai

Illustration by Colin Kent-Daggett

published October 19, 2018


content warning: ICE, policing

Job recruiting season at universities is a special circle of hell. Each fall, tech, finance, and consulting firms descend on elite institutions, preying upon the insecurity of ambitious undergraduates. They promise employment at a high-status, name-brand company; validation (or the perception thereof) for high performance; and six figures (or five, for the summer) to boot.

In particular, tech companies sell the narrative that the work they offer is meaningful, in multiple senses of the word. The technical problems are intellectually challenging; employees work with the smartest people in their field. Moreover, the end-goals of the company and its products are nothing short of world-changing. On Facebook’s university recruiting website, for instance, the words “Do the Most Meaningful Work of Your Career” are splashed across the landing page in giant font. “At Facebook, we’re bringing the world closer together,” smaller text continues. Similarly, Twitter proclaims that “Twitter is a place you can help make something happen which is much bigger than you… to make a difference.” Google “creates the tools of the future.” Microsoft strives to “have the biggest impact on the world.”

The message is clear. Join us; change the world.

Yet when faced with criticism, most big tech companies present themselves as context- and content-‘neutral’ platforms or services, distancing themselves from their role as gatekeepers to the services their products provide, and thereby evading responsibility for the real-world implications of their technology. Companies like Facebook and Twitter claim to be platforms, and just that. There’s no way, they say, that they can properly predict or regulate what users choose to put on their platforms; as a consequence, it took years before any company deplatformed harassers like Milo Yiannopoulos or Alex Jones. And there is no shortage of controversy among non-platform tech companies—Amazon’s racially biased facial recognition tool, Microsoft’s and Salesforce’s contracts with Immigration and Customs Enforcement, Google’s involvement in Project Maven (the development of computer vision tools for the Department of Defense) and, more recently, Dragonfly (the creation of a censored version of Google search for deployment in China)—all projects explained away as simple business decisions. By this logic, ICE and the DoD are customers like any other, and tech companies are simply selling them software-as-a-service.

The obvious contradiction, then, is that certain large tech companies sell the narrative that they change the world when it’s convenient for them to do so; then, when criticized, they say something along the lines of ‘we don’t do anything at all.’ To be fair, the high-level visions for these products are often genuinely revolutionary, and the marketing is at least partially grounded in truth; it would be disingenuous to claim that companies like Facebook, Google, or Microsoft actively seek to harm the world, or that their core businesses and product offerings rely on fundamentally making the world worse. Rather, their vision of world-changing centers on connecting the lives of billions of people around the world, with little regard for whatever side effects that connection might have.

 

+++

 

Consider Palantir, a company not nearly as behemothic as Google or Microsoft, whose primary product offering is the ability to analyze and draw insights from massive quantities of data. According to its website, Palantir “build[s] products that make people better at their most important work—the kind of work you read about on the front page of the newspaper.” Palantir’s work has indeed come up quite frequently in the news. One recent Bloomberg headline reads: “Palantir knows everything about you,” referring to how Palantir products enable the compilation of detailed mass-surveillance databases on the general public, databases that map relationships between individuals and are fully accessible without a warrant. Another headline, this time from the Intercept: “Palantir provides the engine for Donald Trump’s deportation machine,” and it’s not an exaggeration. According to publicly available government funding records, Palantir’s Investigative Case Management system is “mission critical” to ICE’s functioning because it enables agents to analyze a vast trove of interconnected intelligence databases. It gives agents access to information about a subject’s education, employment, family and personal relationships, immigration history, criminal records, biometric records (such as fingerprints taken at the border), and home and work addresses. Palantir has also run predictive policing systems across the country, including in Los Angeles and New Orleans, where, according to the Verge, even city council members were unaware that the technology was in use—the “philanthropic” framing of the project, together with New Orleans’ governmental structure, meant that the contract never went through a regular public procurement process. Predictive policing, criticized even by some members of law enforcement as an infringement on civil rights, is widely controversial because it catches communities—generally low-income and of color—in positive feedback loops of escalating police contact. Unlike the larger companies that advertise “changing the world,” then, Palantir’s core products directly target already marginalized populations and actively inflict material harm.

 

+++

 

Imagine my surprise, therefore, when I received an email from the Brown University Computer Science (CS) recruiting listserv about a lecture titled “The Troubled Future of Privacy... And How Engineers Can Fix It,” hosted by Palantir.

At a school like Brown, with a prestigious name and a well-known CS department, many companies put extra time and capital into their recruiting efforts in order to capture as much top talent as possible. In addition to the university-sponsored general career fair, companies can work directly with the CS department to gain extra access to students through what’s called the Industry Partners Program (IPP). Depending on how much they are willing to donate (i.e. pay) to the department, partners are given permission to come to campus, hold info sessions and technical events, and interview students in Brown’s CS department building.

Palantir’s talk about how engineers could “fix” the problem of privacy was one of these paid recruiting events. Typically, company-sponsored talks are either led by engineers presenting technical problems the company has tackled, or by recruiters providing general information on positions and policies. Palantir’s talk, on the other hand, was led by a lawyer, John Grant—whose official title is “Director of Privacy and Civil Liberties Engineering”—and was neither highly technical nor a Palantir-specific pitch. Instead, Grant’s topics were much more general. He gave a brief history of what’s known in the security and privacy community as “The Crypto Wars,” a back-and-forth between government agencies (e.g. the NSA and FBI), the tech industry, and academics over the extent to which digital communication ought to be opaque to surveillance. The conflict escalated in the 1990s, as public-key cryptography methods like RSA, which ensure secure data transmission, came into widespread use. Among other things, the government asked tech companies and academics to build backdoors into all manufactured products so that it could access information from those devices and programs at will; the backlash from the technical community was so strong that the government dropped the proposal.

Grant pointed to this conflict as an example of engineers taking principled ethical stands, and concluded with platitudes that anyone familiar with the ‘tech ethics’ space has heard countless times: computer science students should be ‘conscious’; they should study the humanities; there should be a mandatory code of ethics or a certification system, like the Hippocratic oath for doctors or the oath of admission for lawyers. This, of course, raises the question of what constitutes a ‘conscious’ coder—and whether one would work for Palantir.

At least twice, Grant mentioned that software engineers are some of the “most powerful people in the world.” “The decisions you make are going to reshape the powers of governments, corporations, and individuals,” he said. “That’s why engineers need to be careful.” He’s not wrong about that—but the implicit message, that working at Palantir would mean reshaping those powers in a just or socially responsible way, is a rhetorical flourish that obscures the very real uses of Palantir’s tools to target and harm marginalized populations.

So, the talk wasn’t really about “the troubled future of privacy”—it was more of a history lesson on how privacy has developed in the context of tech. And it was most definitely not about “how engineers can fix it”—no mention was made of how individual engineers, or even Palantir as a corporation, are working to shape this “troubled future.” It’s telling that Palantir’s recruiting and marketing calculus rests on the assumption that students will respond best not to technical content, nor to information about the company itself, but to the same techno-chauvinist, techno-optimist attitude of “saving the world.”

The vacuousness of Grant’s ultimate recommendations also speaks to the vacuousness of Palantir’s “social good” branding in general. The Privacy and Civil Liberties (PCL) group at Palantir, which Grant heads, was created at least partially in response to the backlash over another Palantir headline: in 2010, Palantir pitched the US Chamber of Commerce on a plan to secretly sabotage its opponents, suggesting strategies like infiltrating liberal groups with fake identities and planting false information—a pitch later exposed by the hacktivist group Anonymous. According to Grant, PCL is a 14-person team of engineers, lawyers, and philosophers. It’s unclear how much leverage a team of fourteen can meaningfully have over a company of more than 2,000 engineers whose core product is a “data analytics platform” and whose primary customers are clandestine government agencies known to target low-income, immigrant, and non-white communities.

 

+++

 

The irony is that Palantir’s products rely on and exacerbate blatant invasions of privacy. Grant made a point of emphasizing that Palantir doesn’t buy and sell personal data—it just builds platforms that help its customers better understand the data they’ve already collected. But when I pressed him on the distinction between buying and selling data directly and incentivizing customers (e.g. police) to gather as much data as possible (e.g. by making as many stops or arrests as possible), Grant sidestepped the question, hand-waving instead about how Palantir builds in safeguards to make sure “only the right people” see specific pieces of information. But wouldn’t it be better for civil liberties if the information weren’t collected, aggregated, and made digestible in the first place?

At one point, some attendees began to ask about the impact of biased datasets in law enforcement contexts, and how they might affect the outcomes of predictive algorithms—hinting, perhaps, at the policing programs Palantir has caught flak for. Grant gave three responses: one, that Palantir doesn’t make machine learning algorithms, just a data analytics platform (much as large tech companies sidestep critique by claiming status as apolitical “platforms”); two, that Palantir is aware of the problems with biased datasets; and three, that Palantir consults internal employee affinity groups like “Black @ Palantir” and “Latinx @ Palantir” about the potential effects of its products. The third response is particularly damning. Embedded in that answer is not just a concession that, even if its products are ‘just’ data analytics platforms, they still disproportionately and materially harm Black and Latinx populations, but also an indication that the company’s internal attitude toward those problems is one of dismissal and tokenization.

As the session came to an end and attendees began to pack up, one of the Palantir engineers who had been sitting in on the session made his way over to me. Though he couldn’t reveal the specifics of the implementation, he assured me, there were robust technical checks in place to protect individuals’ privacy. I referenced the Bloomberg article (“Palantir knows everything about you”), which reported that law enforcement didn’t need warrants to scan the database. He responded, “There’s… a lot of inaccuracies in that piece,” and again gestured at internal technical methodologies, as though jargon exchanged between engineers could override a piece of journalism accessible to the public. It seemed important to him that this point be conveyed to me in particular, but it’s unclear who exactly he was trying to convince, and why: in a room full of potential hires, there’s no reason Palantir would have needed or wanted my personal perception of the company to change.

 

+++

 

Based on information publicly available on the Industry Partners Program website, the Brown CS department is receiving somewhere around $275,000 from its industry partners for the 2018-19 school year. There are a handful of sponsorship tiers: the higher the tier, the more the company pays, and the greater its access to CS students at Brown. The vast majority of this money is re-invested in student programming. IPP funds student and faculty travel to diversity conferences (Richard Tapia, Grace Hopper, and Out4Undergrad); diversity initiatives like Women in CS and Mosaic+ (for underrepresented minority students); the CS for Social Change group; Hack at Brown; the CS departmental undergrad group; and collaborations with other groups on campus, like the Nelson Center for Entrepreneurship.

To its credit, IPP is not meant to be a money-making scheme. Lauren Clarke, who heads the program, made clear in an interview with the Independent that the vast majority of partners reach out to the department first; the department has never sought out partners just to secure more funding. At the same time, Clarke explained, the department has never declined a partner for any reason other than inability or unwillingness to pay: every company that has been willing to pay has been accepted as a partner, and the department has never initiated the termination of a partnership, which suggests that the department is agnostic as to the substance of what its partners actually do. Interestingly enough, this may be changing. Though I hadn’t asked, Clarke mentioned that the department was considering ending its relationship with one current partner in particular, based on the company’s business practices and the involvement of its products in activities that didn’t align with Brown’s values. She declined to name the company.

The entire point of IPP is to connect students with career opportunities for summers and post-graduation, which brings us back to recruiting. While tech companies like to parade their “world-changing” missions in recruiting pitches, they also know that the students they’re pitching to are often desperate for offers. For students early in their CS careers, the pressure to secure jobs or internships is immense—especially at employers with sufficient scale and infrastructure to provide mentorship and skill development. What makes certain companies “prestigious” or “desirable” is not just the name or the salary; it’s also the extent to which individual interns or new-grad employees get to learn from others at the company and work on “cutting edge” tech. Ostensibly, this “cutting edge” tech is also the most impactful, exciting, and world-changing.

Mission-driven recruitment, then, is not just a way to assuage students’ concerns about the ability to do “meaningful” work—it’s also a reflection of the tech industry’s boundless techno-optimism: the assumption that the world can always be made better, if only one innovates hard enough. For years, Silicon Valley’s rallying cry has been “move fast and break things”—something it’s perhaps done a little too well. Tech undoubtedly changes the world—but as it does so, it just might also be breaking it.

 

JESSICA DAI B’21 is trying her best not to break things.