THE COLLEGE HILL INDEPENDENT


Tourist Trap

Geosure tries to build a safer world—but for whom?

by Liam Greenwell

Illustration by Sophia Meng

published February 14, 2019


 

I open Geosure and the map populates with nearby scores. There’s a lot of green, implying “low risk.” One of the nearest yellow icons sits in Boston’s South End, a 41. Worldwide, the map is scattered with numbers, ranging from 1 to 100, and colors, green to red. There are more than 35,000 location-based “security” scores on the app, often broken down to the neighborhood level; through this data, Geosure claims to seek a “safer, more predictable world.” Brought to market in 2017, the company’s app has been recommended by NBC News and Travel & Leisure and sits at 4.8 stars on the App Store.

Geosure uses 75 main data points from hundreds of different agencies to inform the primary facet of its score. These include local and regional crime stats; data from the United Nations, Centers for Disease Control, FBI, Interpol, and more; macroeconomic factors (such as the rate of inflation versus GDP, which correlates with social unrest); the number of medical facilities; and the number of police employed per square mile. In addition, Geosure’s scores are informed by unstructured data analysis powered by machine learning, which includes local language news processing and crowdsourced “experience reports.”
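To get a feel for what this kind of aggregation might look like, consider a minimal sketch. Geosure does not publish its weights, indicator names, or normalization method, so everything below is a hypothetical illustration of a weighted score on a 1-to-100 scale, not the company’s actual model:

```python
# Hypothetical sketch only: Geosure's real weights, indicators, and
# normalization are not disclosed. This shows the general shape of a
# weighted-aggregation risk score on a 1-100 scale.

# Each indicator is normalized to [0, 1], where 1 = highest risk.
indicators = {
    "violent_crime_rate": 0.42,        # from local/regional crime stats
    "disease_outbreak_index": 0.10,    # e.g., CDC-style health data
    "inflation_vs_gdp": 0.25,          # macroeconomic unrest proxy
    "medical_facility_scarcity": 0.15, # fewer facilities = more risk
    "police_per_sq_mile": 0.30,        # whether more police lowers or
                                       # raises risk is itself contested
}

# Invented weights summing to 1; a real system would tune these.
weights = {
    "violent_crime_rate": 0.35,
    "disease_outbreak_index": 0.15,
    "inflation_vs_gdp": 0.20,
    "medical_facility_scarcity": 0.15,
    "police_per_sq_mile": 0.15,
}

def risk_score(indicators, weights):
    """Weighted average of normalized indicators, mapped onto 1-100."""
    weighted = sum(indicators[k] * weights[k] for k in weights)
    return round(1 + 99 * weighted)

print(risk_score(indicators, weights))  # 29 for these made-up inputs
```

Even in this toy version, the score’s meaning depends entirely on choices a user never sees: which indicators are included, how they are normalized, and how heavily each one counts.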

The company’s website asks, “Travelers know where the water is safe. But what about the streets?” Aggregating vast quantities of data to create a supposedly objective number, Geosure says it gives users that answer, hoping that consumers and companies will treat it as fact. Similar aggregations, based on crime data, have recently come under scrutiny for the inherent biases, racial and otherwise, that infect these systems, which many proponents nonetheless tout as infallible. Geosure wants to set itself apart from these systems, but, skeptical that a truly objective number was possible at all, I tested a few cities in the app.

Delhi, India is a 74, implying “high risk” according to the app. Providence, meanwhile, is a 29; Atlanta is a 58; Damascus is at “highest risk” at a 98. This means Providence is 45 points away from Delhi, whereas Delhi is, in turn, only 24 points away from Damascus. I wasn’t sure how to read this: Damascus is still in the middle of a war, while Delhi is experiencing no open conflict and is home to some of the best healthcare in the world. Without a written explanation of what these scores mean in practical terms, I was left to guess.

Seeking clarification, I reached out to Daniel Madden, Partner and Chief Strategy Officer of Geosure Global. In a phone interview, he told the College Hill Independent that the lack of explanation accompanying the scores is an attempt to work against “information overload” in other travel safety datasets, such as the U.S. Department of State’s travel advisory system. “Ambiguity creates anxiety for travelers and paints entire countries or regions as unsafe,” he told me. Geosure wants to create a timely, granular system that “anyone can understand.”

Still, I told Madden that I didn’t quite understand this easy-to-understand system. He was surprised, at first, that I had once felt safer walking around Delhi than the 74 score would suggest. But then he clarified that the scale is not linear in terms of discomfort, equating it to a temperature scale. A location above 80 would have more of an “acute risk to your person” than somewhere like Delhi.

It nonetheless seemed faulty to claim that one safety score can be applicable to everyone. We know that, on a basic level, different governments advise their citizens differently on travel: in 2017, Canada warned its travelers that “the frequency of violent crime [is] generally more prevalent in the U.S. than in Canada”; the same year, the government of the Bahamas warned its traveling citizens to “exercise appropriate caution” around American police because of the prevalence of shootings of young black men in the US. Geosure attempts to tailor its scores to individuals, with separate scores for female and LGBTQ-identifying travelers. These scores are informed by local laws and specialized statistics. Geosure is the first to publish scores like these, and this has garnered the app positive press attention.

But if the scores are based on flawed or incomplete data, as the crime prediction systems often are, then the scores could be informed by those biases. And even if the data were perfect, the practical meaning of a score for a given traveler would change wildly. A truly universal score would have to confront race, class, and more, as well as whether the chosen data would be applicable to a specific traveler’s feeling of safety. For example, the number of police in an area might make some feel safer—but it might make others, for instance, men of color from the Bahamas, feel that the area is less secure. The effectiveness of the score relies on the assumption that people experience feelings of safety in broadly the same ways, based on the mutual intelligibility of the “importance” of certain statistics. But this assumption is flawed, and suggests the company’s audience is smaller than it would like to admit.

 

+++

 

One year ago, the U.S. Department of State revamped its travel advisory system. It now has four levels corresponding to the degree of threat: from “exercise normal precautions” to “do not travel.” A spokesman from the Department of State’s Bureau of Consular Affairs told the Indy in an email exchange that its advisory system “makes it easy for U.S. citizens to access clear, timely, and reliable safety and security information about every country in the world.” Each country has its own report, with descriptions about the reasons for each score.

Geosure’s other competition includes location-based crime “heat maps,” mostly developed to aid homebuyers or realtors in selecting a neighborhood. These companies, which include NeighborhoodScout and AreaVibes, aggregate data from law enforcement agencies and run models to predict the frequency of crimes compared to other neighborhoods.

On NeighborhoodScout, Providence has a score of 10, which means it is more dangerous than 90 percent of US cities. But, as Kate DeVagno, a marketer at NeighborhoodScout, pointed out, different neighborhoods have different scores—the area around Brown University is a 66, meaning it’s less crime-prone. When I asked about inherent bias in the product, she called the data an “objective assessment” of a community’s risks: “We stand behind the accuracy of our data.” But these prediction models confront the same problems as the ones used in law enforcement: as the American Civil Liberties Union has claimed, the models are only as good as the data fed in. Crime data is emblematic of the “historically biased criminal justice system,” not exempt from it.
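A score like this reads as a percentile. NeighborhoodScout’s exact method was not part of my exchange with DeVagno, but a minimal sketch, assuming the score is a percentile rank over per-capita crime rates, shows how a 10 translates to “more dangerous than 90 percent of cities”:

```python
# Minimal sketch, assuming the score is a percentile rank over
# per-capita crime rates. NeighborhoodScout's actual method may differ.

def safety_percentile(city_rate, all_rates):
    """Percent of cities with a crime rate at least as high as this one's.

    Higher score = safer. A 10 means only 10 percent of cities are as
    dangerous or worse, i.e., the city is more dangerous than 90 percent.
    """
    worse_or_equal = sum(1 for r in all_rates if r >= city_rate)
    return round(100 * worse_or_equal / len(all_rates))

# Made-up crime rates per 1,000 residents for ten hypothetical cities.
rates = [3.2, 5.8, 1.1, 7.4, 2.0, 9.9, 4.5, 6.1, 0.8, 8.3]
print(safety_percentile(7.4, rates))  # 30: three of ten are as bad or worse
```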

Daniel Madden of Geosure finds neither the State Department’s approach nor the heat maps convincing. Geosure is betting that individuals and companies want something different from either a government-produced, human-written analysis or localized predictions based on a few data points. Part of that bet involves the belief that consumers will trust Geosure’s claim of universality. Whereas the State Department operates on a federal level with a specific focus on American travelers and NeighborhoodScout is limited to US cities, Geosure wants you to believe that its score is applicable everywhere, and to everyone—and that it is not susceptible to the same biases to which it admits crime data is susceptible.

“We utilize a big data predictive analytics approach in order to eliminate any inherent bias,” Madden said, citing the 75 major data points and the two “unstructured” sources of data (AI-informed language processing and crowdsourced reports) that go into the score. But there are two major problems with this approach. First, Geosure dismisses crime data as susceptible to bias, but still bases the weightiest part of its score on other data potentially susceptible to the same biases. Moreover, Geosure takes no responsibility for the actual collection of data, and the company does not regularly audit its collection methods or the data’s accuracy. Madden said that Geosure counters this by aggregating its data to minimize the effect of outliers. “We’re not reinventing the statistics wheel here. We’re minimizing errors in our regression by using a very, very large sample,” he said.
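Madden’s statistical point is real as far as it goes: averaging many noisy readings does shrink random error. What it cannot do is cancel a bias that every source shares. A toy simulation, with entirely invented numbers, makes the distinction concrete:

```python
# Toy simulation: averaging many noisy measurements shrinks random error
# (Madden's "very, very large sample" point) but leaves any bias shared
# across sources untouched. All numbers here are invented.
import random

random.seed(0)
TRUE_RISK = 50.0   # the "real" risk we would like to measure
SHARED_BIAS = 8.0  # systematic over-reporting common to every source
NOISE_SD = 15.0    # random error, different for each reading

def estimate(n_sources):
    """Average n noisy, identically biased readings of the true risk."""
    readings = [TRUE_RISK + SHARED_BIAS + random.gauss(0, NOISE_SD)
                for _ in range(n_sources)]
    return sum(readings) / n_sources

for n in (5, 75, 10_000):
    print(n, round(estimate(n), 1))
# As n grows, the estimates settle near 58 (true risk plus bias), not 50:
# more data cancels the noise, but not a bias every source shares.
```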

Though Madden’s point may hold for a dataset with all 75 points, it’s unclear how much (and which) kinds of data go into any given score. All scores look the same on the app, and there’s no indication that some might be more reliable than others. This is especially pertinent for ratings in parts of the world that are both stigmatized internationally and subject to more uncertainty about day-to-day security—precisely the places where Geosure’s scores could be legitimately helpful to, say, a risk-management firm.

That’s the second problem: there are many places in the world where those 75 data points are hard to come by, and so the Geosure score might, by necessity, be based increasingly on the kinds of data that Madden criticizes as insufficient on their own. Just last year, the UN pointed to “alarming gaps” in its own data on migrant children because of the “extremely challenging” nature of gathering data on vulnerable classes, especially in war-torn regions or places with little government oversight. Geosure is not trying to track migrant children, but the UN’s statement reveals that the progenitors of much of this data are aware of issues that surround it. The scores remain untrustworthy, then, as long as the details of their creation are shrouded in generalities rather than specific disclosures about the method of building each score.

 

+++

 

Geosure’s app is free, and the company intends to keep it that way. Recently, it has embedded its score within other apps, such as TripIt, a travel organizer. On its website, the company targets its services to risk-management, insurance, and study abroad companies.

Geosure plans to add 6,000 data points for additional North American neighborhoods soon (as of this month, Providence only has a city-wide score). The company sees itself as part of a virtuous cycle, whereby a good score can lead to “greater trust in government institutions...increased tourism, more jobs, better performing economies,” and “a higher quality of life,” according to its website.

But Geosure also must reckon with the question of whether it is performing a public service or creating new, unfounded trepidation—or overconfidence—about the security of certain locations. Its credibility rests entirely on the validity of both its data and its process for manufacturing that data into an “easy-to-understand” number. Yet that process does not account for which data are used in a given location, offers no metric for the trustworthiness of that data, and does not disclose when or how AI-based or crowdsourced tools are applied.

So far, given these concerns, there seems to be little self-reflection about the power of Geosure’s claim of objectivity. The score remains an artificial system that gives consumers little hint as to how the sausage gets made, or how to change their behavior based on a given score to make their travel more secure.

There is also the serious concern that, despite its catering to women and LGBTQ-identifying travelers, the company has not fully probed its assumptions of who the service is, and could be, for. The app’s relative rankings reveal a tension between a focus on an objective, data-based system and one that relies on an intuitive understanding of what relative rankings mean. Geosure does not confront the possibility that one’s reasons for feeling “secure” are tied to their identity and, notably, does not currently have plans to add a ranking that accounts for race in meaningful ways.

There is clearly a power structure involved in the act of scoring the security of a place on the other side of the world (supposedly objectively) without having members of those communities take part in the system’s creation. And a similar dynamic of power is inherent to tourism itself. A visitor gazes at the supposed “authenticity” of a foreign place, evaluating and interacting (or not interacting) with locals in prescribed ways, locals who sometimes don’t have the financial means to engage in the same ritual elsewhere. Behind the utopian claims that Geosure is working for a “safer world” is the implicit assumption that there are those in the world who can choose whether to engage in less-secure areas—and those who have no choice.

Despite its overtures about inclusive scoring, Geosure must still prove that its scores are not only applicable to people who are implicated in neocolonial assumptions about the act of travel, assumptions which are irresponsible if not confronted. Until then, Geosure remains a service that claims universal application but whose scores—the creation and rigor of which are still suspect—only apply to a small subset of the global audience it claims to reach.

 

LIAM GREENWELL B’20 is currently browsing the App Store.