THE COLLEGE HILL INDEPENDENT


Little Lost Robot

by Charlie Windolf

Illustration by Andres Chang

published November 4, 2016


The mood was subdued at the Association for the Advancement of Artificial Intelligence’s 1984 annual gathering. By that year, the field had expected to produce some kind of robo-utopia, but it had pretty much failed to deliver, and prospects were bleak. Attendees, among them the late computer scientist Marvin Minsky, coined the term “AI Winter” to describe the frosty responses they were receiving on grant proposals. For the next few decades, winter was there to stay.

 

+++

 

Nearly 30 years earlier, back in 1956, Minsky spent a summer at Dartmouth attending the conference that created the field of AI. A few years before that, Minsky had built a machine called SNARC, short for Stochastic Neural Analog Reinforcement Calculator. SNARC was a hodgepodge of vacuum tubes, motors, and spare parts, including an old autopilot from a B-24 bomber. But the machine’s disarray was, to some extent, by design: it was built to simulate the mess of biological brains.

SNARC was built around Hebb’s rule, a then-recent idea about biological learning conceived by the psychologist Donald Hebb. Hebb was studying the basic problem of classical conditioning: how do brains associate experiences with rewards? When adults learn to like coffee, what does that mean for their brains? 

In Hebb’s time, scientists knew a good bit about neurons. It was clear that they were wired together into a communicating network: neurons received electrical messages from other neurons on little tendrils called dendrites. If they received strong enough impulses, they would fire, sending the message along their axon to other neurons’ dendrites. Then if those neurons were receiving enough on their dendrites, they would fire as well, and so on.

Some parts of this network are fixed, encoded in the genome, like the neurons that keep us breathing. But Hebb realized that in order for the brain to learn, something inside it had to be changing, and he conjectured that it was the connections between neurons. Hebb thought: what if, when a neuron fires, it strengthens whichever dendrites had just received messages? If that were true, then individual neurons might be able to ‘learn’ which of their upstream neighbors matter more and which matter less. In the coffee example, this might mean that a ‘reward’ neuron learns that the ‘coffee-taste’ neuron is associated with the ‘caffeine-rush’ neuron, turning coffee into a pleasurable thing in its own right.
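
To make the shape of Hebb’s rule concrete, here is a minimal sketch in Python of that coffee story. Everything in it (the neuron names, the learning rate, the numbers) is invented for illustration: when the ‘reward’ neuron fires, whatever inputs had just fired get their connections nudged up.

    # A toy Hebbian update: connections from inputs that fired right before
    # the reward neuron fired get strengthened. Names and values are made up.
    learning_rate = 0.1
    weights = {"coffee_taste": 0.2, "caffeine_rush": 0.9}

    def hebbian_step(fired_inputs, reward_fired):
        if reward_fired:
            for name in fired_inputs:
                # nudge the connection up, capped below 1.0
                weights[name] += learning_rate * (1.0 - weights[name])

    # coffee taste and caffeine rush keep arriving together...
    for _ in range(20):
        hebbian_step({"coffee_taste", "caffeine_rush"}, reward_fired=True)

    print(weights)  # 'coffee_taste' has crept up toward 1.0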

That rule seems way too simple to account for something as complicated as learning. To some extent, it is too simple: modern neuroscience knows that Hebbian learning is only a part of how neurons self-organize. But it was enough for SNARC to work.

 

A ‘neuron’ from SNARC

 

Minsky described SNARC in a profile published in the New Yorker in 1981. He built 40 ‘neurons,’ each from “six vacuum tubes and a motor.” He wired them together with “synapses that would determine when the neurons fired,” so that the whole network could function as a connected brain. The whole thing took up most of a room.

To test out his machine, Minsky imagined that it was simulating the brain of a rat running through a maze. Well, a really dumb rat: 40 neurons is nothing compared to the many millions you’d find in your average rodent. He placed the virtual rat inside a virtual maze, and tried to get it to learn its way to a piece of virtual cheese.

To do that, he gave SNARC three ‘eye’ neurons. These would tell it what choices were available at a given time: the ‘left’ eye neuron would fire if it could go left, the ‘forward’ eye neuron would fire if it could go forward, and the ‘right’ if it could go right. SNARC also had three ‘decision’ neurons: when these fired, the rat would move in the directions they represented. The rest of the neurons were connected randomly between the perception and decision neurons.

To train the virtual rat, Minsky set it loose in the maze. At first, it would run around randomly. But whenever it made a choice that brought it closer to the cheese, Minsky had wired the brain to self-adjust according to Hebb’s rule, reinforcing the pathways that helped it make that good decision. If the rat had just gone forwards, then the forward neuron would strengthen its connections to the neurons that had just told it to fire, and those neurons would do the same with the ones that had just told them to fire, and so on.

Imagine this rat running through a simple maze, where the food is straight ahead. All it has to do is always go forwards. Whenever it happens to go forwards, the neural pathways that made that choice are reinforced, until they dominate all other paths. Eventually, the network would learn to ignore the ‘left’ and ‘right’ perception neurons completely. 
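
Here is a toy version of that straight-corridor story in Python. It is not Minsky’s circuitry (SNARC’s pathways were motors and vacuum tubes), just the reinforcement idea, with invented numbers: pick moves at random, and whenever one gets the rat closer to the cheese, strengthen it.

    import random

    actions = ["left", "forward", "right"]
    weights = {a: 1.0 for a in actions}      # start with no preference

    def choose():
        # pick a move at random, favoring whichever pathways are strongest
        return random.choices(actions, weights=[weights[a] for a in actions])[0]

    for trial in range(100):
        position = 0
        while position < 5:                  # the cheese sits five steps ahead
            if choose() == "forward":
                position += 1
                weights["forward"] += 0.1    # that helped: reinforce it

    print(weights)   # 'forward' now dwarfs 'left' and 'right'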

Minsky realized that in order for the rat to find its way through more complicated mazes, it would need some form of memory. So, he found a nice trick: he wired the output from the brain’s decision neurons as inputs to the network’s next step, which let it peek at its previous decision when trying to make a new one.

But having only decision neurons in the loop didn’t help much: it just meant the brain knew its immediate past. If our brains were wired like that, we might only remember the last word we said. So Minsky added another loop, this time made of internal neurons instead of decision neurons: the decision neurons had to spit out decisions, but the internal neurons could hold whatever information they liked.

Minsky wasn’t sure how the rat would use this second loop. He figured it might become a sort of ‘short term memory’ loop, where information about the rat’s progress through the maze might live. But he could never be quite sure what was in this loop just from looking at it, in the same way that neuroscientists can’t tell what you’re thinking about just by looking at an MRI. Still, with this second loop, SNARC was able to learn much more complicated mazes, although it was never perfect.
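
In modern terms, the two loops amount to something like the sketch below: the previous decision and a free-form internal state come back around as inputs to the next step. The stand-in ‘brain’ here is invented, not SNARC’s wiring; its internal state just counts junctions, which is one of the things a short-term memory loop could plausibly hold.

    def brain(eyes, previous_decision, internal_state):
        # previous_decision is available too (SNARC's first loop), though
        # this stand-in ignores it and only uses its internal counter
        junctions_seen = internal_state + (1 if eyes == "fork" else 0)
        decision = "right" if junctions_seen == 3 else "forward"
        return decision, junctions_seen

    decision, memory = None, 0
    for eyes in ["fork", "corridor", "fork", "fork"]:
        decision, memory = brain(eyes, decision, memory)
        print(eyes, "->", decision)   # turns right at the third fork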

 

+++

 

SNARC wasn’t very well received at Dartmouth that summer. Non-neural computers had been solving mazes for a few years by then, so building a whole fake brain for the job seemed uncalled for. Mazes were easy to solve on normal computers: one strategy was to try all of the possible paths and see which one worked. Or, if the maze was too big for that, try the paths that move you toward the goal first, in the hopes that the solution is one of those and you’ll find it faster.
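
To see why mazes looked easy without a fake brain, here is a plain breadth-first search in Python, in the spirit of “try all the possible paths.” (Trying the goal-ward paths first, the second strategy, is the same code with a smarter ordering.) The little maze is made up: ‘S’ is the start, ‘G’ the goal, ‘#’ a wall.

    from collections import deque

    maze = ["S..#",
            ".#.#",
            "...G"]

    def solve(maze):
        rows, cols = len(maze), len(maze[0])
        start = next((r, c) for r in range(rows) for c in range(cols)
                     if maze[r][c] == "S")
        frontier = deque([[start]])          # paths waiting to be extended
        seen = {start}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if maze[r][c] == "G":
                return path                  # first complete path to the goal
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and maze[nr][nc] != "#" and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    frontier.append(path + [(nr, nc)])

    print(solve(maze))   # the list of squares from S to G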

These kinds of strategies, where decisions are made by searching through all the possible choices, were a big topic that summer. Another attendee, Arthur Samuel, presented a search-based checkers program that he had written on an IBM mainframe; when IBM had demonstrated it publicly earlier that year, the company’s stock reportedly jumped 15 points overnight. To pick moves, his checkers program would try all of the possibilities for one side, and then try all of the opponent’s possible responses, and all of the responses to those, simulating a few back and forths and choosing the best outcome. Checkers wasn’t too hard for search-based AIs, but it seemed totally out of reach for analog brains like SNARC: how would one even begin to teach those 40 neurons a game like checkers?
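
That “simulate a few back and forths” idea is, at its core, what is now called minimax search. The sketch below is not Samuel’s checkers program (which also learned how to score board positions); it is just the skeleton of the strategy, run on a made-up game where one player wants a running total to be high and the other wants it low.

    def minimax(state, depth, my_turn, moves, score):
        # look ahead `depth` moves, assuming the opponent also plays its best
        options = moves(state, my_turn)
        if depth == 0 or not options:
            return score(state), None
        best_value, best_move = None, None
        for move, next_state in options:
            value, _ = minimax(next_state, depth - 1, not my_turn, moves, score)
            if best_move is None or (my_turn and value > best_value) \
                    or (not my_turn and value < best_value):
                best_value, best_move = value, move
        return best_value, best_move

    # a made-up game: on my turn I add 1 or 2 to a running total,
    # on my opponent's turn they subtract 1 or 2; I want the total big
    def moves(total, my_turn):
        step = 1 if my_turn else -1
        return [("small", total + step), ("big", total + 2 * step)]

    print(minimax(0, depth=4, my_turn=True, moves=moves, score=lambda n: n))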

The big hit that summer was a program called Logic Theorist, an attempt to build a program that would simulate the way mathematicians work. Logic Theorist received mathematical axioms as input, and would search through the branching tree of possible deductions to prove theorems. It worked surprisingly well: when given the axioms from Bertrand Russell and Alfred North Whitehead’s Principia Mathematica, an early-20th-century text that constructed a new foundation for mathematics, Logic Theorist proceeded to prove 38 of the first 52 theorems in the book. Famously, one of its proofs was shorter and more elegant than the one Russell and Whitehead had published.

The surprising success of neat symbolic reasoning programs overshadowed Minsky’s messy approach for the next couple of decades. The field organized around the idea that intelligence could be captured in symbols, that there were rules you could write down that would solve every problem. But by the ’80s, nobody had come up with the rules for some really simple problems. One was reading handwritten text: there’s just no way to write down the rules for all the possible ways the number one can look. Different people write it different ways: it might be slanted, it might have a little serif at the top, or a base, who knows.

The problem was worse in fields like machine translation: how can you write down rules to convert English to Russian? A lot of funding was poured into this question during the Cold War, but results were often strange. In one notorious example, a machine tried to translate the idiom “The spirit is willing, but the flesh is weak” into Russian, but produced something like “The vodka is good, but the meat is rotten” instead. Machines struggled with idioms and other phrases requiring the kind of common knowledge that people take for granted. Gigantic efforts were made to compile databases of common knowledge for machines, but it became clear pretty soon that there was just too much. By the mid-’80s, this sort of trouble had frozen most government funding channels, bringing on the winter. 

 

+++

 

While progress in AI slowed, lots of other things happened. One was the internet: by the late ’90s, there were huge repositories of data out there. Images, text, all kinds of stuff. AI researchers found all this data really useful. They no longer had to come up with hard rules for translating between languages, but could instead look for statistical regularities in human-translated sources like movie subtitles and UN session transcripts. These models had to be ‘trained,’ like SNARC, and their translations were only as good as the data they were trained on.
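
A toy version of “look for statistical regularities,” with a few invented sentence pairs (the ‘Russian’ is just transliterated): count which words keep showing up across from which. Real statistical translation systems were far more elaborate than this, but the flavor is the same: the regularities come from the data, not from hand-written rules.

    from collections import Counter

    pairs = [("the cat sleeps", "kot spit"),
             ("the dog sleeps", "sobaka spit"),
             ("the cat eats",   "kot yest")]

    cooccurrences = Counter()
    for english, russian in pairs:
        for e in english.split():
            for r in russian.split():
                cooccurrences[(e, r)] += 1

    # 'cat' lines up with 'kot' twice, and with 'sobaka' never
    print(cooccurrences[("cat", "kot")], cooccurrences[("cat", "sobaka")])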

Another development was in gaming. The market for realistic first-person shooter games produced these bizarre, powerful graphics cards built around specialized processors called Graphics Processing Units (GPUs). GPUs were built to do stuff that video games needed done: move a million triangles by one inch, or draw a million dust particles to the screen all at once.

It took until the early aughts for people to notice, but GPUs were good for things other than gaming. They were built to carry out lots of math operations simultaneously, which physicists and chemists found useful for simulations involving lots of particles. One of the first AI researchers to catch on was the computer vision scientist Yann LeCun, who had been studying the human visual system and realized that it could be simulated easily on a GPU.

The visual system has these layers of neurons: the first is something like the sensor in a digital camera, with neurons activated by light coming in. They send this raw information to the next layer, which picks out edges. The next layer might pick out shapes, and the next might start to build objects, and so on. LeCun realized that each layer could be thought of as a bunch of neurons doing similar mathematical operations, so he used a GPU to implement it. What he produced was one of the first ‘deep’ neural networks: instead of having one group of neurons like SNARC did, LeCun’s networks had layers upon layers of neurons, each layer passing a more and more abstract representation of the image on to the next. LeCun was one of the first researchers to make deep neural nets that could recognize handwriting with near-human accuracy.
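
Here is a minimal sketch of the ‘layers upon layers’ idea, using NumPy. The sizes and random weights below are placeholders, not LeCun’s actual architecture (his networks were convolutional, and trained rather than random); the point is just that every layer does the same kind of math, which is exactly what a GPU is good at repeating in bulk.

    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [784, 128, 64, 10]   # e.g. a 28x28 image in, 10 digit scores out
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(image):
        activation = image
        for w in weights:
            # each layer: combine inputs, then 'fire' only above zero
            activation = np.maximum(0, activation @ w)
        return activation              # one score per digit

    scores = forward(rng.random(784))  # a fake 'image' of random pixels
    print(scores.argmax())             # the digit this untrained net would guess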

Networks like the ones LeCun developed are running all over the place: social networks use them for face recognition (tag your friends!), or to figure out the contents of images to suggest tags. They’re how Snapchat finds your face so it can put dog ears on your head, and they also see the road for self-driving cars. There are other deep nets out there too: ones with short-term memory structures, like the loops in SNARC, have proven to work well with time-based data, like speech and language. Networks like these let Siri hear its users’ commands. A couple of months ago, Google Translate started rolling these nets out for its translations. Deep neural nets haven’t even been out there for a decade yet, but AI is already more optimistic than ever.

But it’s a strange optimism. In terms of number of neurons, these nets look something like fruit fly brains, which have about a quarter of a million neurons and tens of millions of synapses. And as you might expect, these fruit fly brains aren’t so good at navigating human society. Take for example the net that Google used to suggest tags in Google Photos. Last year, Brooklyn programmer Jacky Alcine noticed that it was tagging photos of a Black friend of his as a gorilla. On Twitter, he wrote, “Google Photos, y’all fucked up. My friend's not a gorilla. What kind of sample image data you collected that would result in this, son?” Alcine went on to point out that these networks can only predict from what they’ve seen, and that Google had clearly put too many white people in the network’s training data.

Microsoft’s chatterbot Tay, trained to mimic the language of a 19-year-old American girl, was another disaster. Soon after it went live on Twitter this March, a group of trolls realized that it was still learning how to speak. They immediately began feeding it a ton of garbage, and soon enough Tay was tweeting racist and sexist language to anyone who would listen. Microsoft took the bot offline quickly, but accidentally put it back online a few days later, at which point it tweeted, “kush! [I'm smoking kush infront the police]” and asked followers if they would “puff puff pass?” Tay has been offline ever since.

Even as AI is solving bigger problems than it could before, it’s also become a bigger problem itself. With the old rule-based AI from before the winter, at least we knew what was going on inside the machine. Now, we train our computers instead of programming them, and they make about as much sense to us as our own brains do. Which is, well, not very much.

 

CHARLIE WINDOLF B’17 is not getting into a car driven by a fruit fly any time soon.