THE COLLEGE HILL INDEPENDENT


Private Eyes

Cambridge Analytica and the new frontier of internet privacy

by Marly Toledano

Illustration by Rémy Poisson

published April 20, 2018


This past month, Cambridge Analytica has made its way into our consciousness and, for those of us unfortunate enough to be among its victims, the infiltration happened in more ways than one. Through an aptly named survey, “This is Your Digital Life,” Cambridge Analytica gathered information from roughly 87 million Facebook users, leading to public outcry and a global reassessment of how we have come to trust the social network. Personality quizzes may have their perks, but many have become less eager to identify as a Disney Princess after discovering what their digital life really looks like—or more exactly, after learning that others were perhaps more interested in their results than they were. While Facebook’s incessant targeted advertisements and keen interest in our political beliefs have raised eyebrows in the past, this most recent scandal has been widely publicized, sharply underscoring what privacy experts, and our parents, have been saying for years. As troubling as these revelations are, they hardly deserve the name: nothing discovered was really unknown. Even Congress is having trouble feigning surprise. As Senator Charles Grassley said in Mark Zuckerberg’s congressional hearings last Tuesday and Wednesday, “It is no secret that Facebook makes money off this data through advertising revenue, although many seem confused by or altogether unaware of this fact.” While statements like Grassley’s offer little comfort, they serve as a reminder that Facebook has simply been doing what it was designed to do: harvest user data for profit.

 

+++

 

Cambridge Analytica accessed Facebook user data through what looked like an academic research project. In exchange for a few dollars, users took a quick personality test that would supposedly help psychologists. Aleksandr Kogan, the mind behind the project, concedes that his app collected information on millions of people. In some cases, merely being friends with a test-taker compromised one’s personal data. Kogan then proceeded to give the information to Cambridge Analytica, a data mining company co-founded by Steve Bannon and funded by conservative hedge fund manager Robert Mercer. Christopher Wylie, a researcher-turned-whistleblower who brought this situation to the forefront of political discourse as a source for The Guardian, calls Cambridge Analytica “Steve Bannon’s psychological warfare mindfuck tool.” While Wylie was well aware of these programs during his tenure at the company, he claims that he did not come forward until after the election for fear of being “crushed” by Bannon. “I didn’t fully appreciate the impact of what I helped create until 2016 happened,” he said. “Very soon after that, I started working for The Guardian, originally as an anonymous source.” He told the Observer, “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons.”

Cambridge Analytica then used the data to inform strategies for conservative campaigns—both that of Donald Trump in the United States and Brexit in the United Kingdom. Cambridge Analytica garnered information about user personalities through a process called “psychographic” profiling, which essentially constructs and predicts potential voters’ personalities based on their likes, photos, and even, possibly, direct messages. In no uncertain terms, Cambridge Analytica, according to its own website, “uses data to change audience behavior.” It is important to note that Cambridge Analytica lacks a clean record—the company has been involved in scandals regarding bribery and other illegal activity in order to secure conservative political aims, according to the New York Times.

Though Cambridge Analytica hasn’t admitted that information gathered from its profile quiz was used in support of Trump’s campaign, the company’s CEO, Alexander Nix, claims it played an essential role in deciding the 2016 election, leaving it unclear what exactly it did with the information it collected. An anonymous source involved with the company offered up another possibility while speaking to NPR, arguing that the digital operation of Trump’s campaign was so underdeveloped that Cambridge Analytica never used psychographic profiling but instead relied on much simpler tactics. While the situation does call into question the scale of the influence of social media in the 2016 election and the potent, uncharted territory for future voter manipulation, it is unclear whether Kogan’s work changed the outcome of the election in the way that, for example, James Comey’s letter to Congress did. According to Brad Parscale, a digital expert who worked for Trump’s campaign, the information did not help the Republicans in the election.

While it remains ambiguous what sway Cambridge Analytica held over the 2016 elections, the fact that Facebook’s policies allowed this sort of data harvesting in the first place is both clear and troubling. When Cambridge Analytica did its data mining, Facebook provided external developers with access to the information of users and their friends. Facebook claims to have been deceived by Kogan, who stated that the data collection was intended for academic research. When he instead passed it on to Cambridge Analytica, he broke an agreement with Facebook. In 2015, the network required that Cambridge Analytica delete all the information it had collected, but it remains unclear whether Cambridge Analytica followed through. While Facebook has made some efforts to make privacy settings more accessible to users, the company continues to collect personal data and share it with external apps. As Zuckerberg told Senator Orrin Hatch when asked how the company makes money: “We run ads.” The commodification of user data remains the very essence of Facebook’s business model.

 

+++

 

Like the countless applications that offer money or unlock new features in exchange for a download, there’s a higher cost to using Facebook apps than we might choose to believe. In signing a user agreement, we make a contract with a company that never promised to protect us. So why were we so shocked to find out that a cost came with publicizing our lives on social networks? At least in part, this surprise mirrors the public reaction to the Snowden leaks of the NSA’s surveillance activities in 2013. It reminds us that what we put out there is really out there—somebody’s watching. But the difference between the surveillance scandals lies in accountability. The government, at least in theory, owes protection to the public and uses the data it collects towards this end.

Facebook, on the other hand, complied with a user agreement it made up itself. Facebook is a for-profit enterprise: it has no obligation to set up even an illusion of checks and balances. While the NSA at least nominally defers to the executive branch, the private sector continues to operate of its own accord. When a governmental agency uses our information, it is accountable to itself and its own ends and, hopefully, to the public. In this case, private parties, each with their own political and financial agenda, had access to our information. And Cambridge Analytica does not have the interests of the public in mind—rather, its use of the information has everything to do with who hired it and how it can make money.

The NSA and Facebook collect the personal data of individuals—for both, that information is essential to their success. It’s startling that we have accepted this practice from our government, but it’s even more shocking that we have given this power to any number of internet services. As Snowden tweeted last month, “Businesses that make money by collecting and selling detailed records of private lives were once plainly described as surveillance companies. Their rebranding as social media is the most successful deception since the Department of War became the Department of Defense.”

In the aftermath of Snowden’s leaks, the NSA has made various policy changes, including a five-year limit on holding information, and Congress passed the USA Freedom Act, which halted the mass collection of personal phone records. The question now becomes when Facebook will be held to the same standards following this recent data breach.

 

+++

 

Zuckerberg sat through two days of congressional hearings this week because this method of gathering voter information is uncharted territory for legislators. Lawmakers have to grapple with how to take control of the free-for-all that still exists on the internet. But when asked whether he supported regulation, Zuckerberg further confused lawmakers by answering with a question: “What’s the right regulation?” At this point, lawmakers are struggling with whether they should treat Facebook as a media company, a tech company, or even a financial institution—and whether the social network counts as a monopoly. Or they can take another approach, like the European Union, which has opted to address internet privacy by creating legislation that will impact Facebook alongside other media and tech companies that collect user data.

A lot of the blame for what happened with Cambridge Analytica has landed on Mark Zuckerberg, but that might just be because he’s a name we can all call to mind. In reality, because so few consequential regulations have any power over Facebook, it is hard to determine whether the company broke the law. Really, Zuckerberg never did anything that couldn’t have been predicted if users had been savvy about the way they used the internet: all he did was sit back and let third parties make up their own rules in a still unregulated landscape. The public wants to hold Facebook morally responsible, supporting a #deletefacebook movement endorsed by WhatsApp co-founder Brian Acton. The outcry has prompted Zuckerberg to address concerns with an apology—“It was my mistake. And I’m sorry,” he said to lawmakers. However, the public condemnation of Facebook has little political or economic backing. Facebook has no one to answer to except itself. And its agenda does not leave any room for privacy—the very business model finds its success in collecting data and targeting individuals with specific advertisements.

In the end, Facebook is a corporation, and while it’s nice to hear Zuckerberg concede to lawmakers that “For most of [Facebook’s] existence, [we] focused on all the good that connecting people can do ... It’s clear now that we didn’t do enough to prevent these tools from being used for harm, as well,” he could have predicted this kind of data harvesting and chose not to intervene. People’s information is what his company banks on. Zuckerberg can recount a litany of past breaches of trust that Facebook has engaged in: “Fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.” Still, Facebook has yet to change its policies; third-party apps can still access your friends’ data, and nothing prevents you from clicking the harmless-seeming terms of agreement on every online quiz you take through its platform.

Zuckerberg, alongside other Silicon Valley tycoons including Travis Kalanick and Jeff Bezos, takes advantage of a still unregulated frontier. They appear to have let go of the reins to allow profit to pour in, carrying their powerful machinery in any direction. To John Savage, a Computer Science professor at Brown University, this use of Facebook data came as no surprise. “I think that Facebook was so keen on finding a way to make money that they did not pay attention to the potential impact on their customers, and even after they were told that their technology had been misused… to have an influence on the elections of 2016, they still did not wish to acknowledge that there was a potential problem,” he said. “For me that represents a serious ethical lapse.” Critics have pointed to what resembles a Randian philosophy of success at any price. As a result, Facebook can make no claim to nonpartisan influence. As we can see in the case of Cambridge Analytica, it has instead become a playground for high-paying agendas.

 

+++

 

With Cambridge Analytica, the lack of internet privacy ceases to be an abstraction, as it concretely influenced the 2016 election. But things did not necessarily have to go this far for us to become aware of the mind-boggling amount of information that Facebook keeps on every single one of its users and, sometimes, on people who are not users at all—data compiled into what are called “shadow profiles.” And it’s certainly not just Facebook, or Cambridge Analytica, doing this type of profiling. It’s everything you download, charge, or log into—from Google to dating sites to Angry Birds.

Zuckerberg’s testimony on Tuesday and Wednesday made room for a reassessment of the future of internet privacy and how these networks will shape the political landscape in years to come. Zuckerberg admitted in the hearing that “I think we should have notified people, because it would have been the right thing to do.” Still, he maintains that the company never violated the 2011 consent decree with the Federal Trade Commission, which ordered that the network meet users’ expectations of privacy, because users agreed to give “This is Your Digital Life” access to friends’ data. Members of Congress urged Zuckerberg to adopt the European Union’s model for maintaining internet privacy, known as the General Data Protection Regulation (GDPR), across the network. This new legislation, which will come into effect on May 25, “is very strict,” said Savage. “It stipulates that if you collect information on European citizens, then you have a serious duty to protect that. You have to give evidence of that or you could be fined up to four percent of your gross revenues.” The GDPR will require users to explicitly give their consent before companies can obtain their personal data. Zuckerberg confirmed that the protections offered by the European legislation will be extended worldwide—although his response to whether Americans would have the exact same control remained unclear. Regardless, the GDPR will have a major impact on the way all tech companies that serve Europe handle user privacy.

Hopefully, in light of Cambridge Analytica, we can base our actions on an awareness of the actual cost of accepting the terms of use. While maybe we can’t take back our data, we can understand the mechanics of these networks and scroll through our feeds a little more warily. Over the course of the next year, we will see how lawmakers continue to grapple with regulating these internet giants. The reality of the digital age is that tech and media companies serve larger populations than many governments do, without even the pretense of regulation. They’re collecting and storing our data for profit, and they’re doing it on the agenda of whoever pays. Cambridge Analytica seems to have shaken up public understanding of the situation, pressing lawmakers to take a closer look at internet privacy. In Washington, DC, new bills have already been proposed in response to Zuckerberg’s testimony. These include the CONSENT Act, which would require that web services acquire permission from individuals to use their data and report data breaches. Another proposed bill, the MY DATA Act, would enable the Federal Trade Commission to create new rules to improve the security of internet service users. “I think [Facebook and other internet services] are going to be held to a higher standard,” said Savage. The commodification of user information might have been inevitable—but also inevitable is the regulation that follows. Cambridge Analytica has unwittingly initiated a conversation that might just make internet services more accountable in the future.

 

Marly Toledano B’20 gets her soy matcha from Shiru Cafe.