THE COLLEGE HILL INDEPENDENT


The Ex-Machina Fallacy

How randomness separates us from AI

by Griffin Kao

Illustration by Julia Illana

published February 8, 2019


The screen lights up with a new text message from Scott, a professor at the University of Texas: How many legs does a camel have?

 

Eugene types back: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty.

 

His smiley face seems to suggest that, like other 13-year-olds, he’s playful and maybe just a little bit childish. But in any case, his reply doesn’t seem like a serious attempt to answer the question. All business, Scott pointedly ignores the light-hearted response: How many legs does a millipede have?

 

Again, Eugene doesn’t give an accurate answer and seems to get a little distracted: Just two, but Chernobyl mutants may have up to five. I know you are supposed to trick me.

 

Reading these messages, the judges of the 2014 Turing test competition are perplexed by the bizarre yet entirely intelligible conversation. Long thought to be the threshold for intelligent behavior equivalent to and indistinguishable from that of a human, the Turing test consists of a human evaluator trying to identify the machine in a natural language conversation between human and computer. So the judges here know that either Scott or Eugene must not be who they say they are. As it turns out, it’s Eugene Goostman who isn’t human, but rather a chatbot programmed by three Russian computer scientists, and he manages to fool a third of the judges. Surpassing the required 30 percent to “pass” the Turing test, he becomes the first program ever to do so, a milestone indicative of the lengthy strides scientists have made in the field of artificial intelligence (AI) in the past few decades.

The intelligence Eugene exhibits hints at a future in which machines and humans can be completely identical, at least from the outside. This is a terrifying thought. If machines are just as creative and insightful as us, it’s only a matter of time before artificial intelligence renders humans as obsolete as the last generation of iPhones. In fact, the Weak AI Hypothesis, which states that a computer program can be built to act as “intelligently” as people, is held to be true by nearly every computer scientist working in the field of AI. And this seems to make perfect sense in a world where Eugene texts like a normal 13-year-old boy and other programs, like Google’s AutoML software, which can automatically design other machine learning models, can accomplish tasks most people can’t even complete.

 

+++

 

In 1980, philosopher John Searle proposed an argument meant to disprove the idea that the Turing test can be used to determine whether machines can think. Searle’s “Chinese room” thought experiment begins with a hypothetical premise: a person who has no understanding of the Chinese language sits in a closed room. In the room is a book that details how to respond to each and every sequence of Chinese characters with an appropriate, corresponding sequence of Chinese characters. When Chinese text is passed through a slot in the door, the person inside the room can use these instructions to respond in a way that gives the appearance of understanding to someone outside the room.

Searle claims that there is essentially no difference between this room and the way an intelligent computer works. In both cases, the agent simply follows step-by-step instructions to produce behavior that is then interpreted as intelligent by an external user. And in both cases, Searle contends, we cannot point to a single step in this process where someone or something understands what’s being said or what’s being input. By contrast, considering the way you think, you might find that when someone says something to you, you convert their speech into an inexpressible “language” that constitutes comprehension before outputting a response. If someone asks you, “What’s the weather like today?”, you would first process their question to understand it. And since you understand it, the question might set off a chain of thoughts like, I was sweating so much on that run which means it was really hot…oh my god, you know what else is hot…those flamin’ hot Doritos I tried today, which then leads to a spoken response.

Searle’s thought experiment doesn’t necessarily exclude the possibility that machines can simulate human intelligence, since it only disproves that machines think like humans. However, the true understanding he points to raises a question: can we build a machine that is functionally the same as a human, or does that understanding manifest itself as an insurmountable distinction between human and machine behavior? When we look at specific tasks, like speech recognition or speech generation, we see from qualitative evaluations like the Turing test that machine behavior can be identical to human behavior. For example, you and Siri might both respond with “warm” to the question, “What’s the weather like today?”, although you would each have arrived at the response in a different way.

More generally and abstractly, what happens when we examine behavior across a wide variety of circumstances? If we monitored the responses of machines and humans to that same weather question on a thousand different days, and then tried to predict what each would answer the next day, it’s possible that we would be less accurate in our predictions for the human. The human might surprise us and ask, “Why do you keep asking me this question?” after giving us reasonable answers every day before. In that case, we would find that the true understanding separating us from machines may produce a degree of randomness in human behavior that machines can never match. Scientists cannot program that threshold of stochasticity, or randomness, into our intelligent systems. In light of how often computer scientists take the Weak AI Hypothesis for granted, this may seem preposterous. But no real randomness exists in the way our computers operate, which, in conjunction with the possibility of some randomness in human behavior, provides strong evidence against the idea that machines can act exactly like humans.

+++

 

In most programming languages, we can use built-in functionality to generate “random” numbers, but these generators are classified as “pseudo-random” because they only appear to be random. They are, in fact, deterministic: a given sequence of numbers can always be reproduced at a later time if the starting point of the sequence, the seed, is known. Even true random number generators (TRNGs), devices that derive random numbers from a physical process rather than from an algorithm, give the closest approximation to randomness that we can find in computers, but the machine itself is still not the source of that randomness. Since such devices harvest microscopic phenomena in their environment (like radioactive decay, thermal noise, or the photoelectric effect) to produce “randomness,” the program’s own behavior remains deterministic; whatever unpredictability there is comes from outside. For example, one TRNG chooses a number based on the unpredictable timing of radioactive decay in a sample of americium-241, which means the output can be attributed entirely to environmental input. More generally, computer output is always predictable because a program always acts according to a set of strictly defined rules, much like the Chinese interpretation book in Searle’s thought experiment.
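To make that determinism concrete, here is a minimal Python sketch (an illustration of mine, not code from the article): seed a pseudo-random generator with the same starting point twice, and you get exactly the same “random” numbers both times.

```python
import random

def pseudo_random_sequence(seed, count=5):
    """Return `count` pseudo-random numbers generated from a fixed seed."""
    rng = random.Random(seed)  # the seed fixes the generator's starting point
    return [rng.random() for _ in range(count)]

# Two runs with the same seed produce the exact same "random" sequence,
# because the whole sequence is determined by its starting point.
first_run = pseudo_random_sequence(42)
second_run = pseudo_random_sequence(42)
assert first_run == second_run
print(first_run)
```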

Ultimately, this means computers will never generate new ideas. They will never be able to paint a masterpiece or write a novel in their own style. Instead, they are relegated to warping input to produce seemingly original work. Indeed, generative adversarial networks (GANs), which represent the cutting edge of deep learning, do exactly that: GANs are trained to produce “new” instances of a given data type, most commonly an image. These new instances are really just reshaped versions of their quasi-random inputs. For instance, a GAN might produce a particular picture of a face when it is fed a particular string of numbers, whereas a human could draw a face without being given any numbers at all.
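As a rough sketch of that input-to-output relationship, the toy generator below (assuming PyTorch; the network is untrained and invented for illustration, not a real GAN from the article) maps a fixed vector of numbers to the same image every time it is asked:

```python
import torch
import torch.nn as nn

torch.manual_seed(4)  # fix the source of the "quasi-random" input

# Toy stand-in for a GAN generator: it maps a latent vector of numbers
# to a small grayscale image. A real generator is far larger and is
# trained against a discriminator, but the relationship is the same:
# numbers in, picture out.
generator = nn.Sequential(
    nn.Linear(16, 128),
    nn.ReLU(),
    nn.Linear(128, 28 * 28),
    nn.Tanh(),
)

latent = torch.randn(1, 16)  # the string of numbers the network is "given"

with torch.no_grad():
    image_a = generator(latent).reshape(28, 28)
    image_b = generator(latent).reshape(28, 28)

# The same numbers always yield the same picture: the output is a
# deterministic reshaping of the input, not an original idea.
assert torch.equal(image_a, image_b)
```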

Unlike the determinism of computers, stochasticity in human behavior is harder to prove. It may seem easy to conflate randomness in human behavior with the idea of free will, but that remains a tangential issue because, ultimately, randomness does not necessarily imply choice. We can actually trace some of the randomness in our actions to biological processes; there is a significant body of scientific evidence that supports stochasticity at the molecular level. French genetics expert Thomas Heams, for one, has detailed examples of random biological phenomena like DNA mutation and gametogenesis (the cell divisions that produce sex cells). Further, researchers have repeatedly linked randomness in biology to human resilience, arguing that the phenotypic variability, or variability in our observable characteristics, that creates diversity stems from stochastic gene expression. This means personality traits, like your sense of humor or your affinity for English, may be partially the result of the genes you were given. And genetic randomness may produce randomness in personality that allows us to develop new qualities which, in turn, let us adapt to changing circumstances.

Let’s place this randomness in the context of our earlier examination of how we engage in conversation. After listening to someone ask you about the weather and distilling the meaning of their question, you decide exactly how to respond. It’s this decision that is shaped by your personality, which may be the product, at least in part, of a stochastic genetic composition. This process of human conversation again stands in contrast to the purely deterministic input and output of AI. Since scientists cannot program that randomness, humans cannot create machines that act just like people.

 

+++

 

I’m scared of a future in which the very systems we’ve programmed replace us in shaping the world. I envision computers making vital decisions for us at the top, like whether we choose to abandon Earth to global warming or find a new planet, as well as smaller, day-to-day decisions at the bottom, like who to have lunch with on a given day. I fear that if we have machines smart enough to do the decision-making for us, our collective laziness will make it too tempting to give up our autonomy and take the back seat. I study computer science with a particular interest in artificial intelligence, and it’s the confluence of bright minds in the field working to make our robots smarter that reminds me of this dystopian vision. But as I dive further into the field of AI, I’m simultaneously reassured against that possibility by the knowledge of how we design our intelligent systems.

While this isn’t a fear that grips the entire AI community, many experts do spend a lot of time contemplating the philosophy of AI. Dr. Bram van Heuveln is a professor at Rensselaer Polytechnic Institute who has published a number of papers on the subject. When asked about the Weak AI Hypothesis, he emphatically told the College Hill Independent, “If you didn’t believe in the Weak AI Hypothesis, you wouldn’t be in the field of AI.” But later, he said that the most crucial question in examining the Weak AI Hypothesis is whether “we can capture the relevant properties that our brains are implementing.” Relevance is subjective, but I presume he means properties that are true of human behavior. Dr. van Heuveln, among other computer scientists, points to evaluations like the Turing test, which compare human and machine behavior on specific tasks, as evidence that we can program computers to behave like humans. Yet all of these evaluations would, again, fail to capture general stochasticity and do not provide conclusive evidence that we can create machines that perfectly simulate human behavior.

The idea that someone can have an entirely original thought, one whose origin cannot be traced in any way to an environmental factor but rather to an intangible consciousness or to their genetic makeup, is inspiring. It means that humans are genuinely creative, a message of human triumph that implies a future in which, by some means, we adapt and overcome critical global issues like climate change and poverty with more innovative solutions. And it also means that no matter how technologically advanced our society becomes, no matter how many smart people there are working to improve artificial intelligence, programs like Eugene will never be able to speak for us.

 

GRIFFIN KAO B’20 likes people more than robots.