Every day, I cut through the alleyway off Angell and Thayer streets. There’s a dumpster there. On a good day, whatever’s left in there reeks to high hell, and I hold my nose and shimmy past. On a bad day, I carefully leap over some broken glass and weave around a piece of what appears to be a wooden fence sprawled across the entryway, balancing myself against the wall and placing my hand on what I tell myself is graffiti but, if we’re being perfectly honest here, is probably some sort of mold.

Why the hell do I do this? It’s the shortest route to my apartment. Every moment that I waste walking in some inefficient way, I will pay for later. I will sit at my desk, frantically typing out code as the clock dings on my last hour to finish a project before the deadline, and every wasted moment of my day will rush into my head and I will vow to never waste so much time again.

Computer programmers are taught to think obsessively about time efficiency. Can a program read less, store less, process less, and still complete the same task? They’re trained to know the answer to these questions before their hands hit the keyboard.

The problem is, time is like a slowly depleting gas tank on a plane. We all know we’re going to run out eventually. Unlike other limits in computer science like computer memory or Internet bandwidth, there isn’t always some way to generate more of it. All we can do is try to optimize, or, in the case of the plane, to tweak altitude and engine speed over and over to keep the plane in the air as long as possible.

When I’m trying to navigate the city efficiently, finding an optimal path isn’t hard. There are what, three, maybe four ways to get to Amy’s on Wickenden from my apartment? But not all problems that we encounter are this easy. Suppose, for instance, that I walk out of my apartment and someone has placed a very large hedge maze between myself and Amy’s. “Agh,” I’d probably say. “I guess it was only a matter of time.”

I go into the hedge maze and come to a fork. I choose one option, and then come to a fork. I choose a path again, and come to another fork. And another. And another. I soon realize there are thirty forks in each path through the hedge maze. To find the only path through the maze, I’d have to turn correctly, right or left, at all thirty forks. That’s one out of about a billion possible paths.

Solving mazes like this one can’t be done easily. The only strategy we know of is trial and error: guessing one path out of about a billion, over and over, until one works. Without the maze, things are different. Walking through the city blocks, I only have to make a couple of decisions, so there are very few possibilities, and I can quickly decide which path to take. In the maze, however, there are many more decisions to make, leading to an explosion in the number of choices I face. In fact, if there are 30 forks, then I have 2^{30} paths, an exponential number, to choose from, and it’ll take me a huge amount of time to find the right one. Exponential functions grow very rapidly, so adding even a few more forks makes the situation dramatically worse.

Contrast this with algorithms that run in less than exponential time. Suppose our maze had only one fork, with *n* choices. Then the number of paths we have to search through is exactly *n*. Instead of choosing from an exponential number of paths, we’re now choosing from a polynomial number of paths. Polynomials are a type of mathematical expression that grows much less rapidly than exponentials, so an algorithm that finds its solution in an amount of time that grows according to a polynomial will run relatively quickly.
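For readers who want to check the arithmetic, a few lines of Python make the comparison concrete. (This is just a sketch; the fork counts are arbitrary examples, not tied to any real maze.)

```python
# Compare the number of paths in a maze with n binary forks (2**n,
# exponential) against a maze with a single n-way fork (n, polynomial).
# The values of n below are arbitrary illustrations.
for n in (10, 20, 30):
    print(f"{n} forks: {2 ** n:>13,} paths vs. {n} paths")
```

At n = 30, the exponential count is 2^30 = 1,073,741,824, the “one out of about a billion” paths mentioned above, while the polynomial count is still just 30.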

Problems like maze-solving, where no strategy better than trial and error is known, belong to a class that computational complexity theorists call NP. Formally, NP (short for “nondeterministic polynomial time”) is the set of problems whose proposed solutions can be checked quickly, even though finding a solution in the first place seems to require searching through an exponential number of possibilities. For the purposes of this article, let’s just say that problems in NP appear to take super long to solve, and in contrast P problems—or, as mathematicians call them, polynomial time problems—are problems that can be solved quickly.
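That quirk of NP, that checking a proposed answer is fast even when finding one is slow, can be sketched in a few lines of Python. The toy maze below, with its hard-coded correct path, is purely an invented illustration:

```python
import itertools

# A toy maze with 5 binary forks; 0 means "left," 1 means "right."
# The correct path is hard-coded here purely for illustration.
SECRET_PATH = (1, 0, 1, 1, 0)

def verify(path):
    # Checking a proposed path is fast: one comparison per fork.
    return tuple(path) == SECRET_PATH

def brute_force(n_forks):
    # Finding the path by trial and error means testing up to
    # 2**n_forks candidates -- an exponential search.
    for candidate in itertools.product((0, 1), repeat=n_forks):
        if verify(candidate):
            return candidate

print(brute_force(5))  # (1, 0, 1, 1, 0)
```

With 5 forks the search is only 32 candidates; with the 30 forks of our hedge maze it would be over a billion, while verifying any single proposed path would still take just 30 comparisons.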

But how can we be sure that there’s no better way to solve the maze? Actually, we can’t. No one has been able to definitively prove that P≠NP, or in other words, that some problems whose solutions can be checked quickly can never actually be solved quickly. The Clay Mathematics Institute in New Hampshire listed this question, known as P versus NP, as one of its seven Millennium Problems in 2000, offering $1 million to whoever can provide a valid proof. Though it remains unsolved, P versus NP is a relatively old problem. It was first posed in 1956, in a now-famous letter from Austrian mathematician Kurt Gödel to Hungarian polymath John von Neumann. The question so interested Gödel that he sent the letter to the sickbed of a seriously ill von Neumann, wishing him health in the first paragraph. Fifty years later, the study of computational complexity has evolved into a fully-fledged field of its own.

It’s done so with good reason. So much of modern technology, and increasingly society, is based around the assumption that P≠NP. When we’re encrypting a password, for instance, we want it to be extremely difficult to crack, so that a hacker can’t figure out what the password is without testing billions and billions of possibilities. But if there were, say, a way for a hacker to figure out a password in P time, then all our online accounts (everything from money in the bank, to Facebook photos, to our precious Spotify playlists) would be up for grabs. And it’s not just computer programs whose capability is limited if P≠NP. If it takes computers a long time to find exact answers to certain problems, then it will take us at least as long to do so.

And finding the best way through the maze to Amy’s isn’t the only NP problem out there waiting to be solved. Nowadays, the medical field constructs massive causality networks which, if they could be solved, would determine the best way to identify a patient’s disease based only on their symptoms, and what their best course of treatment would be. Unfortunately, as far as we know, solving these networks cannot be done in polynomial time. These same models could be used in lieu of macroeconomic approximations, and could tell us how exactly to increase GDP. If all algorithms ran quickly, artificial intelligence would move forward leaps and bounds, and it would become easy to construct inorganic brains with the tremendous learning power of the human brain. If P=NP, we’d find ourselves in a very different world from the one we live in today.

+++

One famous NP-hard problem is that of finding so-called minimal surface areas: if you have in your hand some loop of string, whose interior creates a gap in space, what’s the least amount of material that could span this gap? Filling such gaps with the least possible material is a problem of great interest, especially for economical roofers who’d like to know how many shingles to buy before attempting to fix a hole in a strangely shaped roof.

It’s an old piece of folklore in the computer science community that soap bubbles find these minimal surface areas. Dip your loop of string in soapy water, and the resulting bubble-film uses the least possible amount of soap. Since the surface area problem is NP-hard, it was only a matter of time before someone put two and two together. In 2005, Selmer Bringsjord and Joshua Taylor at the Rensselaer Polytechnic Institute published a paper, modestly titled “P=NP,” laying claim to that million-dollar prize by means of a proof involving soap bubbles. Since finding minimal surface areas is NP-hard, they argued, and since soap bubbles are supposed to do this quickly, our physical reality would seem to be solving an NP-hard problem almost instantly. Therefore, they announced, P must be equal to NP.

Scott Aaronson, a researcher and computer scientist at the Institute for Advanced Study (an independent research center located in Princeton, New Jersey that has been home to famous intellectuals like Albert Einstein and John von Neumann) wouldn’t stand for this claim. He started obsessively dipping complicated loops of string in soapy water, with fortunate results. He found that not only do soap bubbles often fail to find the least possible area, but even when they succeed, it can take them a really long time. In this sense, P≠NP would have profound implications. It seems not to be restricted merely to algorithms running on computers or inside human heads. In fact, it seems to limit the behavior of our physical reality in a very real way, ensuring that even physical systems cannot quickly solve NP-hard problems.

What makes an academic like Aaronson so riled up that he decides to get his hands dirty (or rather, extremely clean)? The hardest NP problems, called NP-complete problems, have the property that they “reduce” to one another. This means that if you have a computer program that can solve one NP-complete problem and then you’re given another, there’s always some way for you to rethink the new problem so that you can use the same computer program to solve it as well. How that rethinking takes place is often very complex, so we won’t explore it here, but the concept is a powerful one. If there is a computer program that can solve an NP-complete problem in polynomial time, then we can use it to solve all NP problems in polynomial time. That would mean that basically there’s no such thing as a hard NP problem, because they could all be solved as quickly as P problems. That’s what it means for P to equal NP. And if that happens, the floodgates open.

For Aaronson, P=NP would lead to total disaster. He references one particular problem from NP to make this point: the problem of constructing the most succinct program that’s capable of intelligently recreating some piece of real-world data. This problem, in Aaronson’s view, is difficult for a reason: in order to reduce something really complex—like a table of all historical stock market data, or Shakespeare’s complete works—to its simplest form, one must really understand what one is working with. Aaronson’s idea seems far-fetched, but it makes sense: a succinct and intelligent representation of all stock market data would seem to encode within itself the laws governing the market, and the most condensed version of Shakespeare’s works would contain the principles at the heart of his work. But if P=NP, computers could construct these representations extremely quickly. Having constructed these representations, we’d acquire the knowledge necessary to game the stock market and invest our way to the 1%, or to write a play just as good as one of Shakespeare’s. Stock brokers and playwrights (or at least those playwrights who are Shakespeare wannabes) spend their lives trying to accomplish precisely these things, but P=NP would imply that their life works could be trivially comprehended by an algorithm.

+++

Aaronson explored this idea of work and time with an example from the 1986 movie *Star Trek IV: The Voyage Home*. This film finds the Enterprise crew in the year 1986, to which they’ve time traveled from 2286 in order to get some humpback whales (which are needed because a “probe” that is wreaking havoc on Earth will only stop if, you guessed it, it hears the call of a real live humpback whale). Aaronson summarizes, “The problem is that building a tank to transport the whales requires a type of plexiglass that has not yet been invented [in the year 1986]. In desperation, the crew seeks out the company that will invent the plexiglass in the future, and reveals the molecular formula to that company so they will make it for them. The question is, where did the work of inventing the formula take place?”

Spock and gang escape the notion that tasks must take a certain amount of time to complete. They give the formula to the people in the past without them having to go through all the billions of molecular formulas for plexiglass that won’t work. Time travel offers free human accomplishments. The same absurdity arises when P=NP: computer programs can quickly do what people would normally do, offering up free human accomplishments. Any notion that time has value, that it can be saved, spent, or obtained, is thrown out the window.

In this sense, P versus NP is a question about the value of human work. Computers have replaced humans in a wide array of fields, but we maintain that they will never be able to perform tasks that we consider intrinsically human, like the creation of art. But if P=NP, these processes that we have set aside as “human” may suddenly become computable and meaningless.

+++

Jorge Luis Borges’ 1944 story “The Library of Babel” is set in a universe composed entirely of small hexagonal rooms filled with books. A society, living in this library, believes that it stretches on forever, and also that each book is unique. One prominent early librarian offered the following syllogism: if no two books are alike, and there are infinitely many books, the library must contain every possible book. The society is awed by this proof, which implies that all possible knowledge exists in the books surrounding them. At first, they celebrate: somewhere, on some shelf, there must exist a book containing the solutions to all personal issues, or a book containing the procedure for world peace. The citizens of the library comb through the shelves obsessively. But they soon grow disillusioned and stop, for how can they expect to find the texts they desire with so many meaningless others to search through? Just like someone trying to solve a tough maze, these librarians are reduced to the process of elimination. They can do no better than to read through each and every book until they happen upon their grail.

So a superstition develops among the citizens of the library. They say that some librarian, in some hexagon, has found “a book that is the cipher and perfect compendium of all other books.” This librarian is known as the Book-Man, and his omniscience is godly. The Book-Man knows what others could only know from a task that would take them an infinite amount of time, all from reading a single book. He is, in this way, the P=NP-Man, wielding the power of having completed an immeasurably large task in only a short amount of time.

The narrator of “The Library of Babel,” who has been describing the library as a hypothetical to the reader, finishes by deriving a certain satisfaction from the nonexistence of the Book-Man. It’s somehow comforting to him that the key to life’s greatest questions is not readily available. Aaronson seems intent on trying to show P≠NP with similar motivations: those time-consuming tasks to which we devote our lives can’t be done in an instant by some computer. Even if our solutions to life’s problems are “sub-optimal,” there seems to be a comfort in not knowing for sure.

In fact, it seems this desire to have P≠NP is nothing new. The Old Testament had an omniscient God with limitless knowledge, like our Book-Man. But Christianity reformed this notion in the New Testament, responding with a form of God who was human and limited, who had to spend time to achieve goals. This was 2,000 or so years ago, but the desire for these limits seems suggestive of a particular way of deriving meaning from life. Without these limits, spending time to achieve future goals is a useless exercise. P=NP might not just be difficult (if not impossible) to prove mathematically; it may be a terrifying proposition, one which challenges a fundamental belief almost all humans share about the value of work and life: that the time we spend doing things we don’t want to do is worthwhile, and that our problems can’t instantly be solved by a computer. Maybe the proposition is so terrifying that it’s precisely what would drive a computer scientist to spend hundreds of hours dipping slightly differently bent strings into a vat of bubble juice.

What precisely makes P vs NP such a difficult problem is hard to pin down, but it seems the question is poking at a fundamental assumption of humanity. It may be that, in a way, P≠NP is axiomatic—that proving it is so difficult because it’s been assumed to be true for so long. Or it may be that instead P=NP, but that proving so is so fundamentally scary that many of the best mathematicians avoid it.

The nature of P vs NP seems to be that it challenges the way we think about everything from literature to economics. Finding the answer to P vs NP might not actually be as important as considering the question itself, of finding the ways in which it challenges the limits of our knowledge. There are hundreds of mathematicians and computer scientists all over the world ruthlessly trying to solve this problem. Answers to P vs NP are presented and struck down almost monthly. The point being, maybe sometimes, it makes sense not to jump the broken glass behind Angell Street. Maybe sometimes you should walk by the crêperie and peek into the tattoo parlor. After all, if P=NP, you haven’t been as efficient as you thought, anyway.

**DASH ELHAUGE B’17** & **CHARLIE WINDOLF B’17** suck at mazes.