THE COLLEGE HILL INDEPENDENT


Look Ma, No Humanity

a new video game controller paves the way for killer robots

by Bess Kalb

When the news broke at 5:25 PM on February 20, the Associated Press briefed the English-speaking world with the headline "Brain-Reading Headset to Sell for $299." Had AP business writer Barbara Ortutay been around for Exodus, she probably would have reported, "Moses Brings Down Commandments Weighing a Relatively Portable 24 Pounds."

The article began with a bit of catchy enticement: "Hands cramping from too many videogames? How about controlling games with your thoughts instead?" This set the tone for the rest of the article--breezy and vaguely promotional. From the glorified sales pitch it can be gleaned that the headset, available to the masses later this year, "can detect emotions such as anger, excitement and tension, as well as facial expressions and cognitive actions like pushing and pulling objects." The machine's ability to perceive its environment--a human's unconscious state--and to take actions that maximize, for now, its wearer's chances of success grants the headset a kind of artificial intelligence by proxy.

The headset, which bears an unfortunate resemblance to the device in A Clockwork Orange, is an assemblage of metal bands, with rubber head nodes and, depending on the wearer's bone structure, two plastic cheek crushers. As the headset might determine by its 'self,' the featured model wearing the prototype seemed happy, comfortable and submissive.

Yes, now the iPod-affording consumer can play Halo while simultaneously enjoying the citrus flavor tidal wave of Mountain Dew MDX and playing the melody line of "Heart and Soul." Hoorah! But what happens if (when) this emotion-perception-reaction loop is programmed to respond not to its wearer's but to a target's emotional state? If (when) the response in this scenario is "shoot at," the technology becomes a precarious hybrid of polygraph test and trigger finger. Granted, biofeedback technology is hardly new, but this headset's integration of human emotion-detection and machine-determined response represents a remarkable innovation. Moreover, it adds a chilling new component to a worldwide discussion of artificial intelligence agents in defense capacities.
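For the skeptical reader, the loop is easy to spell out. The sketch below is purely hypothetical (Emotiv has published no such API, and every name in it is invented), but it shows how small the gap is between a game controller and a targeting system: the entire moral weight sits in one function that maps a perceived emotional state to a response.

    import random
    import time

    def read_emotional_state():
        # Stand-in for an EEG read: returns emotion -> intensity in [0, 1].
        return {e: random.random() for e in ("anger", "excitement", "tension")}

    def choose_response(state):
        # This mapping is the whole moral question: route "tension" to
        # "pull" and the loop is a game controller; route it to "shoot at"
        # and the very same loop is a weapon.
        if state["tension"] > 0.8:
            return "pull"
        if state["excitement"] > 0.5:
            return "push"
        return "idle"

    def perception_reaction_loop(steps=5):
        # Perceive, decide, act, repeat: the textbook agent cycle.
        for _ in range(steps):
            state = read_emotional_state()
            print(state, "->", choose_response(state))
            time.sleep(0.1)

    if __name__ == "__main__":
        perception_reaction_loop()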

The developers, Emotiv Systems Inc., acknowledge that their techno-sexy invention might spark the imaginations of those without any interest in videogames. Emotiv's stated mission is to "evolve the interaction between humans and electronic devices beyond the limitations of conscious interface." The company "has created technologies that allow machines to take both conscious and non-conscious inputs directly from your mind." Capitalizing on this Auto-Freud2000 capacity, Emotiv plans to work with IBM's Digital Convergence sector to explore "applications beyond video gaming." As major trends in AI development confirm, "applications beyond video gaming" is Digital Convergence-speak for "droid armies."

Applications beyond Zelda: Ocarina of Time
The Borg-related philosophical quandaries raised by this technology's capacity to explode the consciousness-machine dichotomy are vast, complex and nerdy. The headset speaks to the question of how increasingly sophisticated AI technology has been, is being and will one day be used to kill people. Given such a task, a glitch or system error would be unforgivable. It would be wrongful death with no possible recourse. Tragically, this scenario is not hypothetical. In October 2007, an accident at the South African National Defense Force's Lohatlha training grounds left nine soldiers dead and 15 wounded. According to Defense Minister Mosiuoa Lekota, during a routine training exercise with an automated anti-aircraft gun (the 35mm Oerlikon GDF-005), a programming "malfunction" caused the gun to fire its rounds continuously and indiscriminately.
To ponder this notion collectively, Britain's Royal United Services Institute (RUSI) held a conference last Wednesday on "The Ethics of Autonomous Military Systems." At the conference, Britain's AI luminaries presented analytical research on plans outlined by the US and other nations to "roboticize" their military forces. One pressing topic was the Israeli military's development of an AI defense system that would "kick in" with its own intuitive capabilities when battle situations "exceed the psychological limits of human command." Israeli Defense Forces commander Daniel Milo told Defense News that the system would be armed with enough artificial intelligence to "take over completely from flesh-and-blood operators" during engagements.

Another item high on the agenda was the US Department of Defense's December 2007 "Unmanned Systems Roadmap," which proposes to spend about $4 billion by 2010 on robotic weapons. The Roadmap would replace the 4,000 semi-autonomous drones already deployed by the US in Iraq with robots that could identify potential threats without human help--a task that would be handsomely aided by, for example, a fear-detecting retinal/pulse/sweat gland scanner. One can only pray that an anxious, sneaky alley cat doesn't meander into target range. Or a civilian with one of those permanently guilty-looking faces, like Woody Allen.

Delivering the keynote speech at the event, Noel Sharkey, Sheffield University professor of Artificial Intelligence and Robotics, cautioned that no autonomous system yet has the "discriminative power" to tell a combatant from a civilian. Speaking to Reuters after the conference, Dr. Sharkey commented that he was "really scared."
Professor Sharkey legitimated his fear, and quickly spread it throughout the room, with an anecdote about the development of suicide bomber robots. He cautioned, "With the prices of robot construction falling dramatically and the availability of ready-made components in the amateur market, it wouldn't require a lot of skill to make autonomous robot weapons." He continued, "Once the new weapons are out there, they will be fairly easy to copy. How long is it going to be before the terrorists get in on the act? Maybe use them in suicide missions." It's worth pointing out, again, that a $299 AI contraption will soon be available at Best Buy.

When there's a will there's a way
Several hours after coming across the headset article, I stepped away from the internet and called my 17-year-old brother, Will. A scholar of Asimov and analyst of Battlestar Galactica, Will seemed an appropriate source of insight, or at least validation. One Thanksgiving, he used the phrase "eventual robotocratic takeover," and it made our grandmother "miserable."

"About that robot headset..."

"Jesus Christ let it go."

"But the implications. Wouldn't you say there are Cylon implications?"

"Battlestar Galactica is a TV show that I enjoy. I'm sorry I made you watch it over break. I'm really, really sorry."

"But shouldn't people be warned? I'm Will Smith in I, Robot. I'm the one who says 'I don't want to have to say I told you so.'"

"It won't happen for a long, long time. The headset is a cool thing that represents a piece of some much larger equation whose end product we can't fully understand. If there even is an end product. You aren't Will Smith. How much time have you spent thinking about this?"

"Will?"

"Bess?"

"Do they get to Kobol?"

"Go to sleep." He sighed. After a quick silence, a dial tone hiccupped in.