Every internet user leaves crumbs of data behind—clicks, scrolls, advertisement views, website visits, location check-ins—a mountain of information, compounding every day. Sitting in server farms (the massive warehouses filled with miles of hard drives), this data provides billions of tiny glimpses into users’ lives. With proper analysis, tech companies like Facebook, Google, and Twitter can use anyone’s data to predict their actions with a shocking degree of accuracy.
In China, much like in the US, a handful of tech companies have near-monopolistic control over data. WeChat is a messaging and social media service with nearly 1 billion users, offering integrated money transfers, food ordering, and ridesharing. Alibaba, China’s equivalent of Amazon, sold $25.3 billion worth of goods on a single day in 2017. Baidu is a search engine akin to Google, handling 76 percent of all internet searches in China as of last year. Recently, a partnership between these (and other) major tech giants and the Chinese government has been testing the bounds of what people’s data reveal about them, and is raising questions about social, moral, and economic worth in the digital age in the process.
Compared to the post-industrial United States, developing countries have historically had a very different, far more tenuous relationship to credit. Poor and rural people are often shut out of traditional credit markets due to their lack of collateral, and informal credit networks fill the void. These credit networks finance things like farmers’ yearly expenditures on seed, machinery, and labor. Yet these loans often depend on unreliable verbal contracts, charge high interest rates, and cannot fully meet the needs of borrowers.
While China’s economy continues to grow rapidly, its informal credit networks still mostly rely on cash, debit cards, and mobile wallets. These smaller, more personal services are now handling record numbers of transactions. Out of the 1.4 billion people in China, only 320 million of the relatively well-to-do have credit histories and access to credit. For everyone else, owning a credit card or taking out a large loan to buy a house is nearly impossible.
Besides problems surrounding credit, the Chinese government is also looking to tackle questions of trustworthiness among individuals, businesses, and corporations. In 2008, for example, tens of thousands of babies were hospitalized and several died after eating compromised baby formula. In 2013, 16,000 dead pigs, presumably struck down by disease, floated down the Huangpu River in Shanghai, which supplies drinking water for the city’s 26 million residents. The global market for low-quality counterfeit goods is worth almost half a trillion dollars annually, and as of 2013 about 85 percent of counterfeit goods worldwide were manufactured in China and Hong Kong. Lacking an index of trustworthiness, the Chinese government has no effective way to crack down on corporate misconduct and counterfeit goods; organizations can commit fraud or malfeasance, slip away quietly, and re-establish themselves elsewhere with relative ease. Authorities in China have been searching for better ways to regulate their economy by keeping tabs on businesses and instituting systems of reward.
Enter the Social Credit System (SCS). In 2014, the Chinese government issued a policy outline asking major tech companies to develop systems that take in users’ online data—drawn from social media activity, financial records, networks of friends, consumption habits—and output a single number: a social credit score. After a prolonged period during which the government closely watched the progression of these pilot programs, it adopted the system and set it to expand to all Chinese citizens by 2020. Companies will be obligated to share users’ financial and social media information, and the government will calculate the scores itself. The idea is that the score should act much like American credit: a number representing your past behavior that can let you take out a mortgage or get a credit card.
However, the system also takes the user’s pattern of online behavior into account. Li Yingyun, the spokesperson for a potential SCS, noted that "someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility." The scope of the SCS goes beyond establishing standards of financial trustworthiness, although that is a primary goal. According to the 2014 policy directive, the system was established “to strengthen social sincerity, stimulate mutual trust in society, and [reduce] social contradiction” as well as to build “a Socialist harmonious society.” In other words, social credit makes judgments about people's morals: “an idle person” versus one with “a sense of responsibility.”
The SCS is one of the first attempts in the world to codify social mores, taking social media content and online data as its source, and the effects of its internet-based calculation are far-reaching. Because tech companies have served as the incubator for this soon-to-be-nationwide policy and will continue to factor it into their products after it becomes mandatory in 2020, many of its effects play out on social media. Baihe, a major dating site, uses a matching algorithm that favors people with high scores over those with low scores, and posting scores publicly has become a symbol of status and desirability among its users. As it exists today, a social credit score of 600 lets you take out a loan of about $790 for online shopping on Alibaba sites. At 650, you get VIP check-in at major airports and can rent a car without leaving a deposit. Once you reach 666, you can take out a cash loan of $7,900. In many ways, the SCS acts like a society-wide loyalty rewards program.
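Mechanically, the commercial perks described above amount to a simple threshold lookup on the score. A minimal sketch in Python, assuming only the score bands reported here (the function and variable names are invented for illustration):

```python
# Hypothetical sketch of the tiered perks described above.
# Thresholds come from the reported score bands; all names are invented.

PERK_TIERS = [
    (666, "cash loan of about $7,900"),
    (650, "VIP airport check-in and deposit-free car rental"),
    (600, "online shopping loan of about $790 on Alibaba sites"),
]

def perks_for(score: int) -> list[str]:
    """Return every perk unlocked at the given social credit score."""
    return [perk for threshold, perk in PERK_TIERS if score >= threshold]
```

A score of 651, for instance, would unlock the two lower tiers but not the 666-point cash loan—exactly the graduated-rewards structure of a loyalty program.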
Come 2020, however, the Chinese government will enter the social credit market, rewarding those with high scores and punishing those with low ones. First, the rewards: a score above 700 lets you travel to Singapore with ease, and one around 750 puts you on the fast track to a rare European visa. As for punishments, there is historical precedent in China for restricting freedom of movement by blacklisting dissidents and ‘undesirables’ from trains, planes, and international travel. According to Meng Xiang, head of the Executive Department of the Supreme Court in China, “44 government departments” have agreed “to limit 'discredited' people on multiple levels” based on their social credit scores. Moreover, the score is not derived from financial or online activity alone. One key factor is your online social circle: if a friend or family member is caught protesting or posting objectionable material online, your score goes down along with theirs.
On a basic level, the whole world monetizes data on the internet in the same way: a user creates data by clicking on things, sending messages, changing locations, et cetera; that data is stored by the computers running the website, sometimes used to redesign the website and make it more appealing; finally, that data is sold to advertising firms, which use it to create hyper-targeted ads. People are then more likely to buy the advertised products; this purchasing activity creates new data, and the cycle continues on and on. For Americans, this is pretty much the extent of the process. The government can surveil online communications routed through private companies, but doing so requires a Foreign Intelligence Surveillance Act (FISA) warrant. There is at least an assumption of privacy, although it rests on shaky ground.
While most Americans continue to browse with this assumption, Chinese citizens have no pretense that their internet use isn't being watched and censored. For instance, the “Great Firewall of China” has existed since the late '90s and blocks Chinese people’s access to Google, Facebook, most Western news outlets, pornographic websites, and any other content the government considers immoral. In many ways, broad internet censorship, with the intent of policing morals, was the first digital link in the chain that has led to social credit. The underpinning principles of social credit also rely on the fact that the internet is monitored in China. The 2014 policy directive outlining the SCS makes little mention of government processes for actually retrieving people’s data in the first place. Instead, there is a well-established precedent that sharing user data with the government is an obligation of major tech companies in China.
Although hyper-targeted social media advertising makes the idea of social credit more conceivable to Americans, most of them are unnerved when confronted with the reality of this system in China. Many Western media sources jump to paint China as a monolithic, authoritarian dystopia. For instance, one Wired headline about the SCS read: “Big Data Meets Big Brother.”
And yet, how different are these practices from Google, Facebook, and Amazon’s ability to predict your actions, or the monitoring of the silent surveillance state run by the NSA?
Five years ago, Edward Snowden leaked the details of PRISM, which allowed the NSA to mine American data through back doors in social media. Only a few weeks ago, Congress voted to renew Section 702 of FISA, which allows for bulk data collection on foreign nationals and sweeps up many Americans' communications in the process. The ethical and moral questions raised by the SCS apply to the modern age of 'Big Data' generally, and stretch far beyond the borders of China.
Take, for example, the increasingly integral role algorithms play in US court systems. Many state court systems have adopted privately developed software that, much like the SCS, takes in data points about an individual—education, income, zip code, past crimes—and assigns them a score. Judges then use this score to make decisions about bail, parole, and sentencing.
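Conceptually, such a tool reduces to a weighted scoring function over a defendant's attributes, clamped to a fixed scale. The sketch below is purely illustrative: the weights, features, and function name are all invented, and real products are proprietary and far more elaborate.

```python
# Purely illustrative risk-assessment sketch on a 1-10 scale.
# Weights and features are invented, not taken from any real tool.

def risk_score(age: int, prior_arrests: int, years_education: int) -> int:
    score = 5.0
    score += 1.5 * prior_arrests           # prior record pushes risk up
    score -= 0.1 * (age - 18)              # youth is weighted as riskier
    score -= 0.3 * (years_education - 12)  # less schooling pushes risk up
    return max(1, min(10, round(score)))   # clamp to the 1-10 scale
```

The point is not these particular coefficients but the structure: a handful of numbers, chosen behind closed doors by a private vendor, deterministically converts a life into a score a judge then consults.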
A 2016 ProPublica investigation compared the stories of Brisha Borden and Vernon Prater, who were both arrested at the same time in the same Florida county for petty thefts of about $80. When their data was fed into a 'risk assessment' algorithm that assigns people a risk score from one to ten, Brisha (an 18-year-old Black woman with no prior record) received an eight, and Vernon (a 41-year-old white man with a prior five-year stint in prison) received a three. Two years later, however, Vernon was the one serving another eight-year prison term, while Brisha had committed no further crimes. Not only did the algorithm get it wrong in this case, but ProPublica's aggregate analysis of these tools showed their inaccuracy and their tendency to perpetuate racial disparities.
These algorithms are usually strictly prohibited from considering factors like race and gender, on the grounds that doing so would be discriminatory. But this prohibition often fails to account for high levels of spatial segregation and inequities in access to resources like education—in other words, your address and level of education can serve as proxies for race and gender. Some supporters of these systems, pointing to studies showing that judges grant parole roughly 65 percent of the time early in the day and just after lunch breaks, and almost never right before lunch or at the end of the day, argue that sentencing algorithms provide a measure of safety and consistency against the natural variations in judges' decisions. Detractors counter that over-reliance on seemingly objective numbers is dehumanizing and ultimately misleading, since the systems are trained on past data that encodes those same inequalities. As of yet, there is no formal definition of how judges in American courtrooms should use these systems.
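The proxy problem can be made concrete with synthetic data: a scoring rule that never sees group membership still produces divergent group averages when it keys on a correlated feature like location. A minimal sketch under invented assumptions (the population, zip codes, and score values are all made up):

```python
# Sketch: scoring by zip code reproduces group disparities
# in a synthetic, residentially segregated population.
import random

random.seed(0)

# Group "A" mostly lives in zip 1; group "B" mostly lives in zip 2.
people = [("A", 1 if random.random() < 0.9 else 2) for _ in range(1000)]
people += [("B", 2 if random.random() < 0.9 else 1) for _ in range(1000)]

def score(zip_code: int) -> int:
    # The rule never consults group membership, only location.
    return 8 if zip_code == 2 else 3

averages = {
    group: sum(score(z) for g, z in people if g == group) / 1000
    for group in ("A", "B")
}
# Average scores diverge sharply by group despite the "blind" rule.
```

Even though the rule is formally blind to group membership, the segregated geography carries that information through, so the group averages end up several points apart.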
Increasingly around the world, the power of the state can hinge on reducing someone’s life to a single number through a process that is perceived as objective. This reductive process is often opaque, conducted in the inaccessible realm of programmers at private companies who are under no obligation to share their code. And so, in the digital age, people become users: sets of data points in a digitally atomized world, their human agency reduced to a system of equations.
China’s SCS is more a symptom of unsettling times than a cause of them. All over the world, small groups of unelected programmers are effectively writing policy in the form of code. This process raises questions about the role of government in our society and in online spaces, and about whether morality can or should be hardcoded into our lives.
The era of big data also raises questions about individuality. These questions are not new; the tension between the desires of the individual and the desires of larger social formations is as old as organized human society. However, the fact that computer models can distill the world (and the people within it) into analyzable and malleable numbers represents a scaling-up of society’s ability to predict (and influence) individuals’ behavior. Obviously, a single score can’t represent the complexity and depth of a person. And yet scores are being used to inform critical decisions about people’s lives: freedom of movement, incarceration, and economic mobility, among many other important issues, are increasingly falling under the purview of 'objective' computer models of the world.
It’s uncertain how technologies like those used in China’s SCS will proliferate in the years ahead. However, we can be sure that the process of big data analysis is unlikely to stop anytime soon. As we continue to allow these processes to spread to realms of our lives previously reserved for the state, we are quietly assenting to new uses (and potential misuses) of our data. The Social Credit System is in the midst of construction, and in many ways is indicative of the future—yet it’s a future which we may already occupy here in the West.
HAL TRIEDMAN B'20's Social Credit Score is 666.