I remember the feeling of driving a car for the first time. The freedom I felt, most likely ingrained in me from movies like Mad Max, Taxi Driver, and Death Proof, and from Ford F-150 advertisements, was probably just petty masculinity; but that car made me feel invincible. By then I had heard about Google’s self-driving car project, and as I clumsily drove up my street, the idea of replacing human-controlled cars with autonomous, self-driving machines seemed ridiculous. “Who would want a computer to interfere with this feeling?” I wondered. Later, as it started to rain and windshield wipers whisked away the raindrops without my command, I realized the machine was doing more work than I had originally thought.
The strange truth I had come to realize is that self-driving cars aren’t in our distant future—they’re already here. I would like to think that my car that day was operated solely by me and my rugged individualism, but my car could automatically shift gears, deploy the airbags in a collision, and even trigger the windshield wipers if it detected water on the windshield.
It’s easy to think of driving as a single task. When we talk about driving, we usually talk about the task of going from one point to another in a car, ideally without breaking any laws or hurting the occupants. But driving is composed of smaller tasks: changing lanes, shifting gears, exiting freeways, stopping at intersections. It is in these small tasks where automation takes place. Think of soap dispensers, automatic doors, supermarket checkouts, streetlights, flights, personal banking. As small tasks inside of larger tasks become more and more automated, we hardly notice that our everyday tasks are assisted by computers.
As we program past the smaller tasks and entrust ever-larger responsibilities to machines, the debate over what constitutes human ability and intelligence intensifies: Sure, Deep Blue could beat Kasparov in chess, but can it handle a conversation? IBM Watson can answer Jeopardy questions, but can it enjoy a song? Siri can listen to my requests, but can it understand sarcasm? It’s going to take a lot for a self-driving car to be considered as good as a human driver, though that won’t stop the companies in Silicon Valley from trying.
Google, Uber, and Tesla are already making investments in self-driving technology. Apple, too, is rumored to be in the business of driver automation. Massive hiring projects are already underway to support the research and development facilities of Silicon Valley companies.
Market disruptions within the automobile industry are guaranteed to happen, because, as the Economist noted, these self-driving car developments will likely shrink traditional car manufacturers’ share of the market. The competition, fueled by the tremendous financial reward for early market capitalization, is unprecedented. “This is an arms race,” said University of Michigan professor Larry Burns in an article in the Atlantic. “You’re going to see a new age for the automobile.”
As it turns out, this race for self-driving cars has been going on for a while. The first driverless car dates back to the 1920s, when Houdina Radio Control’s radio-controlled “Linrrican Wonder” traveled up Broadway. Then, a decade later, the 1939 World’s Fair installation “Futurama” captured the public’s imagination. The “Futurama” was a streamlined cityscape with a completely automated expressway network teeming with self-driving cars. As the Great Depression began to subside, the private and public sectors alike longed for the slick future the exhibit envisioned. Throughout the next twenty years, GM and RCA developed scale models of automated highway systems to test electric current-guided cars. The Space Race would lead to the next major development in the early 1960s: the ‘Stanford Cart’, originally intended to be a lunar rover, was equipped with video cameras and sophisticated image-processing abilities, and, according to tech folklore, successfully crossed a chair-filled room without human intervention. The coming decades would bring German engineer Ernst Dickmanns’ VaMoRs and VaMP, which reportedly could drive relatively unassisted from Munich, Germany to Odense, Denmark. Carnegie Mellon University’s NavLab and Stanford led technological development throughout the 2000s, fostered by the Defense Advanced Research Projects Agency’s Grand Challenge. DARPA’s contest offered cash prizes to the development team whose autonomous cars could best navigate increasingly complicated obstacle courses, and heavily popularized the budding technology to the public. Soon the Silicon Valley tech companies took notice and started to vie for dominance in the market. In December 2014, Google emerged victorious when it unveiled the first marketable self-driving prototype.
In terms of looks, Google’s model didn’t exactly fit in with the traditional muscle and power design ethos of American car manufacturers. The car was smaller than the average sedan and looked like a cartoon VW Bug with cutesy mouse-like headlights and rounded windows. Mounted on the car’s roof was a black cylinder housing the light detection and ranging unit, also known as LIDAR. The car drove by rapidly spinning the LIDAR 360 degrees to capture a high-resolution map of the car’s surroundings, which the navigation unit then used to guide the car safely through the streets. The only human input the car required was pushing the start button and selecting the destination on the navigation system. For the remainder of the car ride, the passenger would simply sit in the seat while the car navigated to its destination; however, an operable steering wheel remained in case the car encountered dangerous situations.
The prototype was heavily scrutinized. Reviews by Forbes and Gizmodo stated that the car was fundamentally useless and imagined that it would only serve to crowd and complicate the current driving system, rather than revolutionize it. MIT Technology Review wasn’t exactly kind either, beginning its review with a biting question: “Would you buy a self-driving car that couldn’t drive itself in 99 percent of the country? If your answer is yes, then check out the Google Self-Driving Car.” Despite the impressive feat of Google’s historic unveiling, the popular skepticism of a self-driving machine remained. However, the widespread assumption that humans made better drivers than machines would prove to be overly optimistic.
Humans, as it turns out, are terrible drivers. In 2013 alone, the National Highway Traffic Safety Administration found that 32,719 people died in car accidents. Most of these accidents were caused by distractions, fatigue, and intoxication. According to a recent study by McKinsey and Company, by 2050 these fatal accidents could be reduced by 90%. In nearly 1.7 million miles of testing its prototype, Google has reported only twelve minor collisions; all of these collisions were actually caused by other drivers. That’s not to say there haven’t been any hangups. 272 minor incidents of autonomous technology failure were reported in a fourteen-month span of test driving. More recently, a Google car struck a public bus when it attempted to avoid sandbags in the middle of an intersection in Mountain View, California. While no injuries occurred, these accidents have raised the anxieties of California driving agencies and regulators.
These incidents, however, haven’t hurt the Google engineering team’s resolve: they have a different mission. As Google co-founder Sergey Brin said in a recent interview: “We don’t claim that the cars are going to be perfect. Our goal is to beat human drivers.” After all, self-driving cars don’t need to work perfectly, but simply cause marginally fewer accidents than humans. To offer another analogy: if an intelligent computer performed a surgical operation at a greater success rate than a human, surely most of us would opt to have the machine perform the surgery. Indeed, it would seem irrational to put thousands of lives at risk by not implementing autonomous cars because we couldn’t figure out a way to save a select few. Google’s cars, programming flaws and all, already cause far fewer accidents than we do.
The concept of self-driving cars replacing human drivers becomes more convoluted when we consider the consequences of a computer taking on the role of an ethical actor. Take, for instance, a scenario in which a pedestrian crossing-light malfunctions and ten people are led into the middle of a road in the immediate path of a self-driving car. The car can’t stop in time, but it can avoid hitting the pedestrians by steering into a wall, risking the life of the passenger. What decision will the car make? If you consider this high-cost scenario to be unlikely, there are plenty of instances of minor ethical decisions the car might have to make: If a car stops abruptly to avoid an accident, should it consider the safety of the car driving behind it? If a car’s headlights or tires become damaged during its drive, should it shut down to comply with government safety regulations? To circumvent these ethical problems, Google’s engineers intend to make their cars intelligent enough that these scenarios will never happen. But this solution seems short-sighted; surely a car will run into an ethical scenario like this at least once.
However, strictly speaking, it’s not the self-driving cars that have to make a decision. After all, humans are the ones who program the car’s software to make these decisions in the first place. We can give the program basic human abilities like quick reactions and calculations, but when it comes time to program these ethical decisions, we react with confusion and fear. The question, then, isn’t ‘Can machines drive better than a human?’ but ‘What does it even mean to drive like a human?’ How can we outsource our ethics when we don’t know what our ethics are?
For these reasons, it’s unlikely that we will ever have completely automated cars. Rather, we will probably have cars that make the majority of our driving decisions while we take the wheel in perilous situations. When I drove for the first time, the car made me feel more alive. I still remember that sense of freedom and responsibility whenever I get behind the wheel. When I’m driving a car, I decide what is important.
JULIAN FOX B‘18, for one, welcomes our robot overlords.